Oct 02 10:48:58 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 02 10:48:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 02 10:48:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:58 localhost kernel: BIOS-provided physical RAM map:
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 02 10:48:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 02 10:48:58 localhost kernel: NX (Execute Disable) protection: active
Oct 02 10:48:58 localhost kernel: APIC: Static calls initialized
Oct 02 10:48:58 localhost kernel: SMBIOS 2.8 present.
Oct 02 10:48:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 02 10:48:58 localhost kernel: Hypervisor detected: KVM
Oct 02 10:48:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 02 10:48:58 localhost kernel: kvm-clock: using sched offset of 4117678470 cycles
Oct 02 10:48:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 02 10:48:58 localhost kernel: tsc: Detected 2799.886 MHz processor
Oct 02 10:48:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 02 10:48:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 02 10:48:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 02 10:48:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 02 10:48:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 02 10:48:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 02 10:48:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 02 10:48:58 localhost kernel: Using GB pages for direct mapping
Oct 02 10:48:58 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 02 10:48:58 localhost kernel: ACPI: Early table checksum verification disabled
Oct 02 10:48:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 02 10:48:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 02 10:48:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 02 10:48:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 02 10:48:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 02 10:48:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 02 10:48:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 02 10:48:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 02 10:48:58 localhost kernel: No NUMA configuration found
Oct 02 10:48:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 02 10:48:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 02 10:48:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 02 10:48:58 localhost kernel: Zone ranges:
Oct 02 10:48:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 02 10:48:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 02 10:48:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:48:58 localhost kernel:   Device   empty
Oct 02 10:48:58 localhost kernel: Movable zone start for each node
Oct 02 10:48:58 localhost kernel: Early memory node ranges
Oct 02 10:48:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 02 10:48:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 02 10:48:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 02 10:48:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 02 10:48:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 02 10:48:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 02 10:48:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 02 10:48:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 02 10:48:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 02 10:48:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 02 10:48:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 02 10:48:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 02 10:48:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 02 10:48:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 02 10:48:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 02 10:48:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 02 10:48:58 localhost kernel: TSC deadline timer available
Oct 02 10:48:58 localhost kernel: CPU topo: Max. logical packages:   8
Oct 02 10:48:58 localhost kernel: CPU topo: Max. logical dies:       8
Oct 02 10:48:58 localhost kernel: CPU topo: Max. dies per package:   1
Oct 02 10:48:58 localhost kernel: CPU topo: Max. threads per core:   1
Oct 02 10:48:58 localhost kernel: CPU topo: Num. cores per package:     1
Oct 02 10:48:58 localhost kernel: CPU topo: Num. threads per package:   1
Oct 02 10:48:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 02 10:48:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 02 10:48:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 02 10:48:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 02 10:48:58 localhost kernel: Booting paravirtualized kernel on KVM
Oct 02 10:48:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 02 10:48:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 02 10:48:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 02 10:48:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 02 10:48:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 02 10:48:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 02 10:48:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 02 10:48:58 localhost kernel: random: crng init done
Oct 02 10:48:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 02 10:48:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 02 10:48:58 localhost kernel: Fallback order for Node 0: 0 
Oct 02 10:48:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 02 10:48:58 localhost kernel: Policy zone: Normal
Oct 02 10:48:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 02 10:48:58 localhost kernel: software IO TLB: area num 8.
Oct 02 10:48:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 02 10:48:58 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 02 10:48:58 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 02 10:48:58 localhost kernel: Dynamic Preempt: voluntary
Oct 02 10:48:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 02 10:48:58 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 02 10:48:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 02 10:48:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 02 10:48:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 02 10:48:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 02 10:48:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 02 10:48:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 02 10:48:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 02 10:48:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 02 10:48:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 02 10:48:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 02 10:48:58 localhost kernel: Console: colour VGA+ 80x25
Oct 02 10:48:58 localhost kernel: printk: console [ttyS0] enabled
Oct 02 10:48:58 localhost kernel: ACPI: Core revision 20230331
Oct 02 10:48:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 02 10:48:58 localhost kernel: x2apic enabled
Oct 02 10:48:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 02 10:48:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 02 10:48:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.77 BogoMIPS (lpj=2799886)
Oct 02 10:48:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 02 10:48:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 02 10:48:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 02 10:48:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 02 10:48:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 02 10:48:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 02 10:48:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 02 10:48:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 02 10:48:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 02 10:48:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 02 10:48:58 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 02 10:48:58 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 02 10:48:58 localhost kernel: x86/bugs: return thunk changed
Oct 02 10:48:58 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 02 10:48:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 02 10:48:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 02 10:48:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 02 10:48:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 02 10:48:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 02 10:48:58 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 02 10:48:58 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 02 10:48:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 02 10:48:58 localhost kernel: landlock: Up and running.
Oct 02 10:48:58 localhost kernel: Yama: becoming mindful.
Oct 02 10:48:58 localhost kernel: SELinux:  Initializing.
Oct 02 10:48:58 localhost kernel: LSM support for eBPF active
Oct 02 10:48:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:48:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 02 10:48:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 02 10:48:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 02 10:48:58 localhost kernel: ... version:                0
Oct 02 10:48:58 localhost kernel: ... bit width:              48
Oct 02 10:48:58 localhost kernel: ... generic registers:      6
Oct 02 10:48:58 localhost kernel: ... value mask:             0000ffffffffffff
Oct 02 10:48:58 localhost kernel: ... max period:             00007fffffffffff
Oct 02 10:48:58 localhost kernel: ... fixed-purpose events:   0
Oct 02 10:48:58 localhost kernel: ... event mask:             000000000000003f
Oct 02 10:48:58 localhost kernel: signal: max sigframe size: 1776
Oct 02 10:48:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 02 10:48:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 02 10:48:58 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 02 10:48:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 02 10:48:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 02 10:48:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 02 10:48:58 localhost kernel: smpboot: Total of 8 processors activated (44798.17 BogoMIPS)
Oct 02 10:48:58 localhost kernel: node 0 deferred pages initialised in 20ms
Oct 02 10:48:58 localhost kernel: Memory: 7765484K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616504K reserved, 0K cma-reserved)
Oct 02 10:48:58 localhost kernel: devtmpfs: initialized
Oct 02 10:48:58 localhost kernel: x86/mm: Memory block size: 128MB
Oct 02 10:48:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 02 10:48:58 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 02 10:48:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 02 10:48:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 02 10:48:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 02 10:48:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 02 10:48:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 02 10:48:58 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 02 10:48:58 localhost kernel: audit: type=2000 audit(1759402137.232:1): state=initialized audit_enabled=0 res=1
Oct 02 10:48:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 02 10:48:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 02 10:48:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 02 10:48:58 localhost kernel: cpuidle: using governor menu
Oct 02 10:48:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 02 10:48:58 localhost kernel: PCI: Using configuration type 1 for base access
Oct 02 10:48:58 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 02 10:48:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 02 10:48:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 02 10:48:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 02 10:48:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 02 10:48:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 02 10:48:58 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:48:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 02 10:48:58 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 02 10:48:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 02 10:48:58 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 02 10:48:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 02 10:48:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 02 10:48:58 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 02 10:48:58 localhost kernel: ACPI: Interpreter enabled
Oct 02 10:48:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 02 10:48:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 02 10:48:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 02 10:48:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 02 10:48:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 02 10:48:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 02 10:48:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [3] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [4] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [5] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [6] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [7] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [8] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [9] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [10] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [11] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [12] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [13] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [14] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [15] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [16] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [17] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [18] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [19] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [20] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [21] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [22] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [23] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [24] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [25] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [26] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [27] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [28] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [29] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [30] registered
Oct 02 10:48:58 localhost kernel: acpiphp: Slot [31] registered
Oct 02 10:48:58 localhost kernel: PCI host bridge to bus 0000:00
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 02 10:48:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 02 10:48:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 02 10:48:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 02 10:48:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 02 10:48:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 02 10:48:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 02 10:48:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 02 10:48:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 02 10:48:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 02 10:48:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 02 10:48:58 localhost kernel: iommu: Default domain type: Translated
Oct 02 10:48:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 02 10:48:58 localhost kernel: SCSI subsystem initialized
Oct 02 10:48:58 localhost kernel: ACPI: bus type USB registered
Oct 02 10:48:58 localhost kernel: usbcore: registered new interface driver usbfs
Oct 02 10:48:58 localhost kernel: usbcore: registered new interface driver hub
Oct 02 10:48:58 localhost kernel: usbcore: registered new device driver usb
Oct 02 10:48:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 02 10:48:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 02 10:48:58 localhost kernel: PTP clock support registered
Oct 02 10:48:58 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 02 10:48:58 localhost kernel: NetLabel: Initializing
Oct 02 10:48:58 localhost kernel: NetLabel:  domain hash size = 128
Oct 02 10:48:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 02 10:48:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 02 10:48:58 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 02 10:48:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 02 10:48:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 02 10:48:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 02 10:48:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 02 10:48:58 localhost kernel: vgaarb: loaded
Oct 02 10:48:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 02 10:48:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 02 10:48:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 02 10:48:58 localhost kernel: pnp: PnP ACPI init
Oct 02 10:48:58 localhost kernel: pnp 00:03: [dma 2]
Oct 02 10:48:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 02 10:48:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 02 10:48:58 localhost kernel: NET: Registered PF_INET protocol family
Oct 02 10:48:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 02 10:48:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 02 10:48:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 02 10:48:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 02 10:48:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 02 10:48:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 02 10:48:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 02 10:48:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:48:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 02 10:48:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 02 10:48:58 localhost kernel: NET: Registered PF_XDP protocol family
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 02 10:48:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 02 10:48:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 02 10:48:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 02 10:48:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72333 usecs
Oct 02 10:48:58 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 02 10:48:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 02 10:48:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 02 10:48:58 localhost kernel: ACPI: bus type thunderbolt registered
Oct 02 10:48:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 02 10:48:58 localhost kernel: Initialise system trusted keyrings
Oct 02 10:48:58 localhost kernel: Key type blacklist registered
Oct 02 10:48:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 02 10:48:58 localhost kernel: zbud: loaded
Oct 02 10:48:58 localhost kernel: integrity: Platform Keyring initialized
Oct 02 10:48:58 localhost kernel: integrity: Machine keyring initialized
Oct 02 10:48:58 localhost kernel: Freeing initrd memory: 86104K
Oct 02 10:48:58 localhost kernel: NET: Registered PF_ALG protocol family
Oct 02 10:48:58 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 02 10:48:58 localhost kernel: Key type asymmetric registered
Oct 02 10:48:58 localhost kernel: Asymmetric key parser 'x509' registered
Oct 02 10:48:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 02 10:48:58 localhost kernel: io scheduler mq-deadline registered
Oct 02 10:48:58 localhost kernel: io scheduler kyber registered
Oct 02 10:48:58 localhost kernel: io scheduler bfq registered
Oct 02 10:48:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 02 10:48:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 02 10:48:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 02 10:48:58 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 02 10:48:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 02 10:48:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 02 10:48:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 02 10:48:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 02 10:48:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 02 10:48:58 localhost kernel: Non-volatile memory driver v1.3
Oct 02 10:48:58 localhost kernel: rdac: device handler registered
Oct 02 10:48:58 localhost kernel: hp_sw: device handler registered
Oct 02 10:48:58 localhost kernel: emc: device handler registered
Oct 02 10:48:58 localhost kernel: alua: device handler registered
Oct 02 10:48:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 02 10:48:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 02 10:48:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 02 10:48:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 02 10:48:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 02 10:48:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 02 10:48:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 02 10:48:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 02 10:48:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 02 10:48:58 localhost kernel: hub 1-0:1.0: USB hub found
Oct 02 10:48:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 02 10:48:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 02 10:48:58 localhost kernel: usbserial: USB Serial support registered for generic
Oct 02 10:48:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 02 10:48:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 02 10:48:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 02 10:48:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 02 10:48:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 02 10:48:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 02 10:48:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 02 10:48:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-02T10:48:57 UTC (1759402137)
Oct 02 10:48:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 02 10:48:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 02 10:48:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 02 10:48:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 02 10:48:58 localhost kernel: usbcore: registered new interface driver usbhid
Oct 02 10:48:58 localhost kernel: usbhid: USB HID core driver
Oct 02 10:48:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 02 10:48:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 02 10:48:58 localhost kernel: Initializing XFRM netlink socket
Oct 02 10:48:58 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 02 10:48:58 localhost kernel: Segment Routing with IPv6
Oct 02 10:48:58 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 02 10:48:58 localhost kernel: mpls_gso: MPLS GSO support
Oct 02 10:48:58 localhost kernel: IPI shorthand broadcast: enabled
Oct 02 10:48:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 02 10:48:58 localhost kernel: AES CTR mode by8 optimization enabled
Oct 02 10:48:58 localhost kernel: sched_clock: Marking stable (1154002685, 145041584)->(1420934504, -121890235)
Oct 02 10:48:58 localhost kernel: registered taskstats version 1
Oct 02 10:48:58 localhost kernel: Loading compiled-in X.509 certificates
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 02 10:48:58 localhost kernel: Demotion targets for Node 0: null
Oct 02 10:48:58 localhost kernel: page_owner is disabled
Oct 02 10:48:58 localhost kernel: Key type .fscrypt registered
Oct 02 10:48:58 localhost kernel: Key type fscrypt-provisioning registered
Oct 02 10:48:58 localhost kernel: Key type big_key registered
Oct 02 10:48:58 localhost kernel: Key type encrypted registered
Oct 02 10:48:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 02 10:48:58 localhost kernel: Loading compiled-in module X.509 certificates
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 02 10:48:58 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 02 10:48:58 localhost kernel: ima: No architecture policies found
Oct 02 10:48:58 localhost kernel: evm: Initialising EVM extended attributes:
Oct 02 10:48:58 localhost kernel: evm: security.selinux
Oct 02 10:48:58 localhost kernel: evm: security.SMACK64 (disabled)
Oct 02 10:48:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 02 10:48:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 02 10:48:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 02 10:48:58 localhost kernel: evm: security.apparmor (disabled)
Oct 02 10:48:58 localhost kernel: evm: security.ima
Oct 02 10:48:58 localhost kernel: evm: security.capability
Oct 02 10:48:58 localhost kernel: evm: HMAC attrs: 0x1
Oct 02 10:48:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 02 10:48:58 localhost kernel: Running certificate verification RSA selftest
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 02 10:48:58 localhost kernel: Running certificate verification ECDSA selftest
Oct 02 10:48:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 02 10:48:58 localhost kernel: clk: Disabling unused clocks
Oct 02 10:48:58 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 02 10:48:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 02 10:48:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 02 10:48:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 02 10:48:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 02 10:48:58 localhost kernel: Run /init as init process
Oct 02 10:48:58 localhost kernel:   with arguments:
Oct 02 10:48:58 localhost kernel:     /init
Oct 02 10:48:58 localhost kernel:   with environment:
Oct 02 10:48:58 localhost kernel:     HOME=/
Oct 02 10:48:58 localhost kernel:     TERM=linux
Oct 02 10:48:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 02 10:48:58 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 10:48:58 localhost systemd[1]: Detected virtualization kvm.
Oct 02 10:48:58 localhost systemd[1]: Detected architecture x86-64.
Oct 02 10:48:58 localhost systemd[1]: Running in initrd.
Oct 02 10:48:58 localhost systemd[1]: No hostname configured, using default hostname.
Oct 02 10:48:58 localhost systemd[1]: Hostname set to <localhost>.
Oct 02 10:48:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 02 10:48:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 02 10:48:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 02 10:48:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 02 10:48:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 02 10:48:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 02 10:48:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 02 10:48:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 02 10:48:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 02 10:48:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 10:48:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 10:48:58 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 02 10:48:58 localhost systemd[1]: Reached target Local File Systems.
Oct 02 10:48:58 localhost systemd[1]: Reached target Path Units.
Oct 02 10:48:58 localhost systemd[1]: Reached target Slice Units.
Oct 02 10:48:58 localhost systemd[1]: Reached target Swaps.
Oct 02 10:48:58 localhost systemd[1]: Reached target Timer Units.
Oct 02 10:48:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 10:48:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 02 10:48:58 localhost systemd[1]: Listening on Journal Socket.
Oct 02 10:48:58 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 10:48:58 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 10:48:58 localhost systemd[1]: Reached target Socket Units.
Oct 02 10:48:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 10:48:58 localhost systemd[1]: Starting Journal Service...
Oct 02 10:48:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 10:48:58 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 10:48:58 localhost systemd[1]: Starting Create System Users...
Oct 02 10:48:58 localhost systemd[1]: Starting Setup Virtual Console...
Oct 02 10:48:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 10:48:58 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 10:48:58 localhost systemd-journald[308]: Journal started
Oct 02 10:48:58 localhost systemd-journald[308]: Runtime Journal (/run/log/journal/5d5cabb12c53462b89f316d4280c3e4c) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:48:58 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct 02 10:48:58 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct 02 10:48:58 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 02 10:48:58 localhost systemd[1]: Started Journal Service.
Oct 02 10:48:58 localhost systemd[1]: Finished Create System Users.
Oct 02 10:48:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 10:48:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 10:48:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 10:48:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 10:48:58 localhost systemd[1]: Finished Setup Virtual Console.
Oct 02 10:48:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 02 10:48:58 localhost systemd[1]: Starting dracut cmdline hook...
Oct 02 10:48:58 localhost dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Oct 02 10:48:58 localhost dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 02 10:48:58 localhost systemd[1]: Finished dracut cmdline hook.
Oct 02 10:48:58 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 02 10:48:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 02 10:48:58 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 02 10:48:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 02 10:48:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 02 10:48:58 localhost kernel: RPC: Registered udp transport module.
Oct 02 10:48:58 localhost kernel: RPC: Registered tcp transport module.
Oct 02 10:48:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 02 10:48:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 02 10:48:58 localhost rpc.statd[445]: Version 2.5.4 starting
Oct 02 10:48:58 localhost rpc.statd[445]: Initializing NSM state
Oct 02 10:48:58 localhost rpc.idmapd[450]: Setting log level to 0
Oct 02 10:48:58 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 02 10:48:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 10:48:58 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 10:48:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 10:48:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 02 10:48:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 02 10:48:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 10:48:58 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 02 10:48:58 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:48:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 10:48:58 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:48:58 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:48:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:48:58 localhost systemd[1]: Reached target Network.
Oct 02 10:48:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 02 10:48:58 localhost systemd[1]: Starting dracut initqueue hook...
Oct 02 10:48:58 localhost kernel: libata version 3.00 loaded.
Oct 02 10:48:58 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 02 10:48:58 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 02 10:48:58 localhost systemd-udevd[467]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:48:58 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 02 10:48:58 localhost kernel:  vda: vda1
Oct 02 10:48:58 localhost kernel: scsi host0: ata_piix
Oct 02 10:48:58 localhost kernel: scsi host1: ata_piix
Oct 02 10:48:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 02 10:48:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 02 10:48:59 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:48:59 localhost systemd[1]: Reached target Initrd Root Device.
Oct 02 10:48:59 localhost kernel: ata1: found unknown device (class 0)
Oct 02 10:48:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 02 10:48:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 02 10:48:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 02 10:48:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 02 10:48:59 localhost systemd[1]: Reached target System Initialization.
Oct 02 10:48:59 localhost systemd[1]: Reached target Basic System.
Oct 02 10:48:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 02 10:48:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 02 10:48:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 02 10:48:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 02 10:48:59 localhost systemd[1]: Finished dracut initqueue hook.
Oct 02 10:48:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 10:48:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 02 10:48:59 localhost systemd[1]: Reached target Remote File Systems.
Oct 02 10:48:59 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 02 10:48:59 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 02 10:48:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 02 10:48:59 localhost systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct 02 10:48:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 02 10:48:59 localhost systemd[1]: Mounting /sysroot...
Oct 02 10:48:59 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 02 10:48:59 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 02 10:48:59 localhost kernel: XFS (vda1): Ending clean mount
Oct 02 10:48:59 localhost systemd[1]: Mounted /sysroot.
Oct 02 10:48:59 localhost systemd[1]: Reached target Initrd Root File System.
Oct 02 10:48:59 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 02 10:48:59 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 02 10:48:59 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 02 10:48:59 localhost systemd[1]: Reached target Initrd File Systems.
Oct 02 10:48:59 localhost systemd[1]: Reached target Initrd Default Target.
Oct 02 10:48:59 localhost systemd[1]: Starting dracut mount hook...
Oct 02 10:48:59 localhost systemd[1]: Finished dracut mount hook.
Oct 02 10:48:59 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 02 10:49:00 localhost rpc.idmapd[450]: exiting on signal 15
Oct 02 10:49:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 02 10:49:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 02 10:49:00 localhost systemd[1]: Stopped target Network.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Timer Units.
Oct 02 10:49:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 02 10:49:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Basic System.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Path Units.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Remote File Systems.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Slice Units.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Socket Units.
Oct 02 10:49:00 localhost systemd[1]: Stopped target System Initialization.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Local File Systems.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Swaps.
Oct 02 10:49:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut mount hook.
Oct 02 10:49:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 02 10:49:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 02 10:49:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 02 10:49:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 02 10:49:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 02 10:49:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 02 10:49:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 02 10:49:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 02 10:49:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 02 10:49:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 02 10:49:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 02 10:49:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 02 10:49:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Closed udev Control Socket.
Oct 02 10:49:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Closed udev Kernel Socket.
Oct 02 10:49:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 02 10:49:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 02 10:49:00 localhost systemd[1]: Starting Cleanup udev Database...
Oct 02 10:49:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 02 10:49:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 02 10:49:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Stopped Create System Users.
Oct 02 10:49:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 02 10:49:00 localhost systemd[1]: Finished Cleanup udev Database.
Oct 02 10:49:00 localhost systemd[1]: Reached target Switch Root.
Oct 02 10:49:00 localhost systemd[1]: Starting Switch Root...
Oct 02 10:49:00 localhost systemd[1]: Switching root.
Oct 02 10:49:00 localhost systemd-journald[308]: Journal stopped
Oct 02 10:49:01 localhost systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Oct 02 10:49:01 localhost kernel: audit: type=1404 audit(1759402140.441:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability open_perms=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 10:49:01 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 10:49:01 localhost kernel: audit: type=1403 audit(1759402140.619:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 02 10:49:01 localhost systemd[1]: Successfully loaded SELinux policy in 181.443ms.
Oct 02 10:49:01 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.798ms.
Oct 02 10:49:01 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 02 10:49:01 localhost systemd[1]: Detected virtualization kvm.
Oct 02 10:49:01 localhost systemd[1]: Detected architecture x86-64.
Oct 02 10:49:01 localhost systemd-rc-local-generator[640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 10:49:01 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Stopped Switch Root.
Oct 02 10:49:01 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 02 10:49:01 localhost systemd[1]: Created slice Slice /system/getty.
Oct 02 10:49:01 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 02 10:49:01 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 02 10:49:01 localhost systemd[1]: Created slice User and Session Slice.
Oct 02 10:49:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 02 10:49:01 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 02 10:49:01 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 02 10:49:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 02 10:49:01 localhost systemd[1]: Stopped target Switch Root.
Oct 02 10:49:01 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 02 10:49:01 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 02 10:49:01 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 02 10:49:01 localhost systemd[1]: Reached target Path Units.
Oct 02 10:49:01 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 02 10:49:01 localhost systemd[1]: Reached target Slice Units.
Oct 02 10:49:01 localhost systemd[1]: Reached target Swaps.
Oct 02 10:49:01 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 02 10:49:01 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 02 10:49:01 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 02 10:49:01 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 02 10:49:01 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 02 10:49:01 localhost systemd[1]: Listening on udev Control Socket.
Oct 02 10:49:01 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 02 10:49:01 localhost systemd[1]: Mounting Huge Pages File System...
Oct 02 10:49:01 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 02 10:49:01 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 02 10:49:01 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 02 10:49:01 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 10:49:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 02 10:49:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:49:01 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 02 10:49:01 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 02 10:49:01 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 02 10:49:01 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 02 10:49:01 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 02 10:49:01 localhost systemd[1]: Stopped Journal Service.
Oct 02 10:49:01 localhost systemd[1]: Starting Journal Service...
Oct 02 10:49:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 02 10:49:01 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 02 10:49:01 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:49:01 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 02 10:49:01 localhost kernel: fuse: init (API version 7.37)
Oct 02 10:49:01 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 02 10:49:01 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 02 10:49:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 02 10:49:01 localhost systemd[1]: Mounted Huge Pages File System.
Oct 02 10:49:01 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 02 10:49:01 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 02 10:49:01 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 02 10:49:01 localhost systemd-journald[681]: Journal started
Oct 02 10:49:01 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:49:01 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 02 10:49:01 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Started Journal Service.
Oct 02 10:49:01 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 02 10:49:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 02 10:49:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:49:01 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 02 10:49:01 localhost kernel: ACPI: bus type drm_connector registered
Oct 02 10:49:01 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 02 10:49:01 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 02 10:49:01 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 02 10:49:01 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 02 10:49:01 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 02 10:49:01 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 02 10:49:01 localhost systemd[1]: Mounting FUSE Control File System...
Oct 02 10:49:01 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 10:49:01 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 02 10:49:01 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 02 10:49:01 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 02 10:49:01 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 02 10:49:01 localhost systemd[1]: Starting Create System Users...
Oct 02 10:49:01 localhost systemd[1]: Mounted FUSE Control File System.
Oct 02 10:49:01 localhost systemd-journald[681]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 02 10:49:01 localhost systemd-journald[681]: Received client request to flush runtime journal.
Oct 02 10:49:01 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 02 10:49:01 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 02 10:49:01 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 02 10:49:01 localhost systemd[1]: Finished Create System Users.
Oct 02 10:49:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 02 10:49:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 02 10:49:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 02 10:49:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 02 10:49:01 localhost systemd[1]: Reached target Local File Systems.
Oct 02 10:49:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 02 10:49:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 02 10:49:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 02 10:49:01 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 02 10:49:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 02 10:49:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 02 10:49:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 02 10:49:01 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Oct 02 10:49:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 02 10:49:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 02 10:49:01 localhost systemd[1]: Starting Security Auditing Service...
Oct 02 10:49:01 localhost systemd[1]: Starting RPC Bind...
Oct 02 10:49:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 02 10:49:01 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 02 10:49:01 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 02 10:49:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 02 10:49:01 localhost systemd[1]: Started RPC Bind.
Oct 02 10:49:01 localhost augenrules[709]: /sbin/augenrules: No change
Oct 02 10:49:01 localhost augenrules[725]: No rules
Oct 02 10:49:01 localhost augenrules[725]: enabled 1
Oct 02 10:49:01 localhost augenrules[725]: failure 1
Oct 02 10:49:01 localhost augenrules[725]: pid 703
Oct 02 10:49:01 localhost augenrules[725]: rate_limit 0
Oct 02 10:49:01 localhost augenrules[725]: backlog_limit 8192
Oct 02 10:49:01 localhost augenrules[725]: lost 0
Oct 02 10:49:01 localhost augenrules[725]: backlog 4
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time 60000
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 02 10:49:01 localhost augenrules[725]: enabled 1
Oct 02 10:49:01 localhost augenrules[725]: failure 1
Oct 02 10:49:01 localhost augenrules[725]: pid 703
Oct 02 10:49:01 localhost augenrules[725]: rate_limit 0
Oct 02 10:49:01 localhost augenrules[725]: backlog_limit 8192
Oct 02 10:49:01 localhost augenrules[725]: lost 0
Oct 02 10:49:01 localhost augenrules[725]: backlog 4
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time 60000
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 02 10:49:01 localhost augenrules[725]: enabled 1
Oct 02 10:49:01 localhost augenrules[725]: failure 1
Oct 02 10:49:01 localhost augenrules[725]: pid 703
Oct 02 10:49:01 localhost augenrules[725]: rate_limit 0
Oct 02 10:49:01 localhost augenrules[725]: backlog_limit 8192
Oct 02 10:49:01 localhost augenrules[725]: lost 0
Oct 02 10:49:01 localhost augenrules[725]: backlog 1
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time 60000
Oct 02 10:49:01 localhost augenrules[725]: backlog_wait_time_actual 0
Oct 02 10:49:01 localhost systemd[1]: Started Security Auditing Service.
Oct 02 10:49:01 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 02 10:49:01 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 02 10:49:01 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 02 10:49:01 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 02 10:49:01 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 02 10:49:01 localhost systemd[1]: Starting Update is Completed...
Oct 02 10:49:01 localhost systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Oct 02 10:49:01 localhost systemd[1]: Finished Update is Completed.
Oct 02 10:49:01 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 02 10:49:01 localhost systemd[1]: Reached target System Initialization.
Oct 02 10:49:01 localhost systemd[1]: Started dnf makecache --timer.
Oct 02 10:49:01 localhost systemd[1]: Started Daily rotation of log files.
Oct 02 10:49:01 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 02 10:49:01 localhost systemd[1]: Reached target Timer Units.
Oct 02 10:49:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 02 10:49:01 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 02 10:49:01 localhost systemd[1]: Reached target Socket Units.
Oct 02 10:49:01 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 10:49:01 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 02 10:49:01 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:49:01 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 02 10:49:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 02 10:49:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 02 10:49:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 02 10:49:02 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 02 10:49:02 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 02 10:49:02 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 02 10:49:02 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 02 10:49:02 localhost systemd[1]: Reached target Basic System.
Oct 02 10:49:02 localhost dbus-broker-lau[772]: Ready
Oct 02 10:49:02 localhost systemd[1]: Starting NTP client/server...
Oct 02 10:49:02 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 02 10:49:02 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 02 10:49:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 02 10:49:02 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 02 10:49:02 localhost systemd[1]: Started irqbalance daemon.
Oct 02 10:49:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 02 10:49:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:49:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:49:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 10:49:02 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 02 10:49:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 02 10:49:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 02 10:49:02 localhost systemd[1]: Starting User Login Management...
Oct 02 10:49:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 02 10:49:02 localhost chronyd[802]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 10:49:02 localhost chronyd[802]: Loaded 0 symmetric keys
Oct 02 10:49:02 localhost chronyd[802]: Using right/UTC timezone to obtain leap second data
Oct 02 10:49:02 localhost chronyd[802]: Loaded seccomp filter (level 2)
Oct 02 10:49:02 localhost systemd[1]: Started NTP client/server.
Oct 02 10:49:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 02 10:49:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 02 10:49:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 02 10:49:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 02 10:49:02 localhost kernel: Console: switching to colour dummy device 80x25
Oct 02 10:49:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 02 10:49:02 localhost kernel: [drm] features: -context_init
Oct 02 10:49:02 localhost kernel: [drm] number of scanouts: 1
Oct 02 10:49:02 localhost kernel: [drm] number of cap sets: 0
Oct 02 10:49:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 02 10:49:02 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 02 10:49:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 02 10:49:02 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 02 10:49:02 localhost systemd-logind[795]: New seat seat0.
Oct 02 10:49:02 localhost systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 10:49:02 localhost systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 10:49:02 localhost kernel: kvm_amd: TSC scaling supported
Oct 02 10:49:02 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 02 10:49:02 localhost kernel: kvm_amd: Nested Paging enabled
Oct 02 10:49:02 localhost kernel: kvm_amd: LBR virtualization supported
Oct 02 10:49:02 localhost systemd[1]: Started User Login Management.
Oct 02 10:49:02 localhost iptables.init[788]: iptables: Applying firewall rules: [  OK  ]
Oct 02 10:49:02 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 02 10:49:02 localhost cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 02 Oct 2025 10:49:02 +0000. Up 6.35 seconds.
Oct 02 10:49:02 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 02 10:49:02 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 02 10:49:02 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpa7xnznch.mount: Deactivated successfully.
Oct 02 10:49:03 localhost systemd[1]: Starting Hostname Service...
Oct 02 10:49:03 localhost systemd[1]: Started Hostname Service.
Oct 02 10:49:03 np0005466030.novalocal systemd-hostnamed[855]: Hostname set to <np0005466030.novalocal> (static)
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Reached target Preparation for Network.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Starting Network Manager...
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3117] NetworkManager (version 1.54.1-1.el9) is starting... (boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3122] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3283] manager[0x56412ef75080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3323] hostname: hostname: using hostnamed
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3323] hostname: static hostname changed from (none) to "np0005466030.novalocal"
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3327] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3434] manager[0x56412ef75080]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3441] manager[0x56412ef75080]: rfkill: WWAN hardware radio set enabled
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3522] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3523] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3524] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3525] manager: Networking is enabled by state file
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3527] settings: Loaded settings plugin: keyfile (internal)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3559] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3586] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3612] dhcp: init: Using DHCP client 'internal'
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3615] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3627] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3640] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3647] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3656] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3659] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3688] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3692] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3695] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3697] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3699] device (eth0): carrier: link connected
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3703] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Started Network Manager.
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3708] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3714] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3718] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3719] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3720] manager: NetworkManager state is now CONNECTING
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3721] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Reached target Network.
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3726] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3729] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3894] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3896] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 10:49:03 np0005466030.novalocal NetworkManager[859]: <info>  [1759402143.3904] device (lo): Activation: successful, device activated.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Reached target NFS client services.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: Reached target Remote File Systems.
Oct 02 10:49:03 np0005466030.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7850] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7870] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7914] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7954] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7959] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7968] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7977] device (eth0): Activation: successful, device activated.
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7988] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 10:49:04 np0005466030.novalocal NetworkManager[859]: <info>  [1759402144.7996] manager: startup complete
Oct 02 10:49:04 np0005466030.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 10:49:04 np0005466030.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 02 Oct 2025 10:49:05 +0000. Up 8.69 seconds.
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |  eth0  | True |         38.129.56.3          | 255.255.255.0 | global | fa:16:3e:d4:f1:89 |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fed4:f189/64 |       .       |  link  | fa:16:3e:d4:f1:89 |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Oct 02 10:49:05 np0005466030.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Oct 02 10:49:06 np0005466030.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Generating public/private rsa key pair.
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key fingerprint is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: SHA256:t/nFIjxCkjlEy1767q/Z/sUxI4tYkU2z6Po7xEc+LwA root@np0005466030.novalocal
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key's randomart image is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +---[RSA 3072]----+
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |      .     o    |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |     o .   = o   |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |      + . + o    |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |     o =E. ..    |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |      B So+o. +  |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       = *+++= + |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |        =.Booo=  |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       . =.+.+.  |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       .=+*+o.   |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key fingerprint is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: SHA256:4b96B/45TGMKSsePKtdhOJUbt3VgeZRt+3PEukeUr0w root@np0005466030.novalocal
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key's randomart image is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +---[ECDSA 256]---+
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |             o.o |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |            + o o|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |        .. . o oo|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       .+.. . .o+|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       +S+ o . +o|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |      + B.o + E.=|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |     . * *.* + +o|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |    . o o =.+.+ .|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |     o...o.oo. . |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key fingerprint is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: SHA256:1CSznXUUyniTO/9pEw3MgugSNGIawZsVF4ksbCXEv7E root@np0005466030.novalocal
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: The key's randomart image is:
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +--[ED25519 256]--+
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: | =++ooooo . ..+. |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |  B.*.+  B = +   |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: | . X o .o.=.*o   |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |  + o ... ...o+  |
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |     + oS   o. ..|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |    E . .    o ..|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |       .      . .|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |               +.|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: |              ..o|
Oct 02 10:49:06 np0005466030.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Reached target Network is Online.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting System Logging Service...
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 02 10:49:06 np0005466030.novalocal sm-notify[1005]: Version 2.5.4 starting
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting Permit User Sessions...
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 02 10:49:06 np0005466030.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Oct 02 10:49:06 np0005466030.novalocal sshd[1007]: Server listening on :: port 22.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Finished Permit User Sessions.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started Command Scheduler.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started Getty on tty1.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Reached target Login Prompts.
Oct 02 10:49:06 np0005466030.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Oct 02 10:49:06 np0005466030.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Started System Logging Service.
Oct 02 10:49:06 np0005466030.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 17% if used.)
Oct 02 10:49:06 np0005466030.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Oct 02 10:49:06 np0005466030.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Reached target Multi-User System.
Oct 02 10:49:06 np0005466030.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 02 10:49:06 np0005466030.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 02 10:49:06 np0005466030.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1019]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 02 Oct 2025 10:49:07 +0000. Up 10.77 seconds.
Oct 02 10:49:07 np0005466030.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 02 10:49:07 np0005466030.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 02 Oct 2025 10:49:07 +0000. Up 11.24 seconds.
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1025]: #############################################################
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1026]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1028]: 256 SHA256:4b96B/45TGMKSsePKtdhOJUbt3VgeZRt+3PEukeUr0w root@np0005466030.novalocal (ECDSA)
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1030]: 256 SHA256:1CSznXUUyniTO/9pEw3MgugSNGIawZsVF4ksbCXEv7E root@np0005466030.novalocal (ED25519)
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1032]: 3072 SHA256:t/nFIjxCkjlEy1767q/Z/sUxI4tYkU2z6Po7xEc+LwA root@np0005466030.novalocal (RSA)
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1033]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1034]: #############################################################
Oct 02 10:49:07 np0005466030.novalocal cloud-init[1023]: Cloud-init v. 24.4-7.el9 finished at Thu, 02 Oct 2025 10:49:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.44 seconds
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1038]: Connection reset by 38.102.83.114 port 50692 [preauth]
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1040]: Unable to negotiate with 38.102.83.114 port 50698: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1042]: Connection reset by 38.102.83.114 port 50700 [preauth]
Oct 02 10:49:07 np0005466030.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 02 10:49:07 np0005466030.novalocal systemd[1]: Reached target Cloud-init target.
Oct 02 10:49:07 np0005466030.novalocal systemd[1]: Startup finished in 1.552s (kernel) + 2.494s (initrd) + 7.471s (userspace) = 11.518s.
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1044]: Unable to negotiate with 38.102.83.114 port 50716: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1046]: Unable to negotiate with 38.102.83.114 port 50730: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1048]: Connection reset by 38.102.83.114 port 50740 [preauth]
Oct 02 10:49:07 np0005466030.novalocal sshd-session[1052]: Unable to negotiate with 38.102.83.114 port 50762: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 02 10:49:08 np0005466030.novalocal sshd-session[1054]: Unable to negotiate with 38.102.83.114 port 50778: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 02 10:49:08 np0005466030.novalocal sshd-session[1050]: Connection closed by 38.102.83.114 port 50754 [preauth]
Oct 02 10:49:09 np0005466030.novalocal chronyd[802]: Selected source 158.69.247.84 (2.centos.pool.ntp.org)
Oct 02 10:49:09 np0005466030.novalocal chronyd[802]: System clock TAI offset set to 37 seconds
Oct 02 10:49:11 np0005466030.novalocal chronyd[802]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 35 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 35 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 33 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 33 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 34 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 34 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 02 10:49:12 np0005466030.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
Oct 02 10:49:14 np0005466030.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 10:49:33 np0005466030.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:01:01 np0005466030.novalocal CROND[1062]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 11:01:01 np0005466030.novalocal run-parts[1065]: (/etc/cron.hourly) starting 0anacron
Oct 02 11:01:01 np0005466030.novalocal anacron[1073]: Anacron started on 2025-10-02
Oct 02 11:01:02 np0005466030.novalocal anacron[1073]: Will run job `cron.daily' in 43 min.
Oct 02 11:01:02 np0005466030.novalocal anacron[1073]: Will run job `cron.weekly' in 63 min.
Oct 02 11:01:02 np0005466030.novalocal anacron[1073]: Will run job `cron.monthly' in 83 min.
Oct 02 11:01:02 np0005466030.novalocal anacron[1073]: Jobs will be executed sequentially
Oct 02 11:01:02 np0005466030.novalocal run-parts[1075]: (/etc/cron.hourly) finished 0anacron
Oct 02 11:01:02 np0005466030.novalocal CROND[1061]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 11:02:12 np0005466030.novalocal sshd-session[1077]: Accepted publickey for zuul from 38.102.83.114 port 37082 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 02 11:02:12 np0005466030.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 02 11:02:12 np0005466030.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 02 11:02:12 np0005466030.novalocal systemd-logind[795]: New session 1 of user zuul.
Oct 02 11:02:13 np0005466030.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 02 11:02:13 np0005466030.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Queued start job for default target Main User Target.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Created slice User Application Slice.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Reached target Paths.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Reached target Timers.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Starting D-Bus User Message Bus Socket...
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Starting Create User's Volatile Files and Directories...
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Finished Create User's Volatile Files and Directories.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Reached target Sockets.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Reached target Basic System.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Reached target Main User Target.
Oct 02 11:02:13 np0005466030.novalocal systemd[1082]: Startup finished in 124ms.
Oct 02 11:02:13 np0005466030.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 02 11:02:13 np0005466030.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 02 11:02:13 np0005466030.novalocal sshd-session[1077]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:02:13 np0005466030.novalocal python3[1165]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:02:17 np0005466030.novalocal python3[1193]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:02:25 np0005466030.novalocal python3[1251]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:02:26 np0005466030.novalocal python3[1291]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 02 11:02:28 np0005466030.novalocal python3[1317]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDdHOgImyIPDgNWnaMxITEPAN7NVtxzu14ISD59Z0krS9o0Yef/lJRBJcwAtbdZl6thmmrmd+i6nLhYv58i91I9BglmtPCtwZOV73PkKRHZ//oaGwnMih4wB70pyMygFWOrMfCeHRbPChFn2mwctskvcL515U/KpRwUH6WlesAnHltNt9DFUSKyQADMR0GdPnnDw8gLOq9DBkiwlfGxOV1vxXnsJgtCzmcYqLfOMUyT5CJybnG3mpE2Rfc4aNSBi+3/P2Age5mBEwGZMXQU8BTcxVemx04TNqPzeSvzH96Xtnm6b/EZ1nBpVZVpqJLubsNcY65zoE9DNXQJGgx09voZuQytvk2ksubtwSyX2khxwkaAPUuGWesuCs/pP/g0634ox7wm21U4hFzvMni4TFc4otDkcIsKet/KbBKdvGkk7IVb08Z3k8S96poyWuD8sK4zHLKur4EKbCU4aodgLm2RXTqJN6pLISaY3GAnRN94PvuTmeqA+tMo1IfiAgcif0k= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:28 np0005466030.novalocal python3[1341]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:29 np0005466030.novalocal python3[1440]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:29 np0005466030.novalocal python3[1511]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402949.0350459-253-211783638593079/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa follow=False checksum=923ba278c698bf654f2c8fd44aaead32908a4e27 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:30 np0005466030.novalocal python3[1634]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:30 np0005466030.novalocal python3[1705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759402950.114852-307-185486104353466/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e1f611fafbac4ef993faa9123ba23e77_id_rsa.pub follow=False checksum=9747c9704720df2c89f1c3bf3782f9b9dd59b88f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:32 np0005466030.novalocal python3[1753]: ansible-ping Invoked with data=pong
Oct 02 11:02:33 np0005466030.novalocal python3[1777]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:02:36 np0005466030.novalocal python3[1835]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 02 11:02:37 np0005466030.novalocal python3[1867]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:37 np0005466030.novalocal python3[1891]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:38 np0005466030.novalocal python3[1915]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:38 np0005466030.novalocal python3[1939]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:38 np0005466030.novalocal python3[1963]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:38 np0005466030.novalocal python3[1987]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:40 np0005466030.novalocal sudo[2011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygdyrslixqzkfzouxhnsqpjokntiitya ; /usr/bin/python3'
Oct 02 11:02:40 np0005466030.novalocal sudo[2011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:40 np0005466030.novalocal python3[2013]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:40 np0005466030.novalocal sudo[2011]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:41 np0005466030.novalocal sudo[2089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlffsdjimdatkkifotswshyunemgesrs ; /usr/bin/python3'
Oct 02 11:02:41 np0005466030.novalocal sudo[2089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:41 np0005466030.novalocal python3[2091]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:41 np0005466030.novalocal sudo[2089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:41 np0005466030.novalocal sudo[2162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxunuavphryoppcykmizvjrnldtpmhqi ; /usr/bin/python3'
Oct 02 11:02:41 np0005466030.novalocal sudo[2162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:41 np0005466030.novalocal python3[2164]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402960.9564967-32-186289017366440/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:41 np0005466030.novalocal sudo[2162]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:42 np0005466030.novalocal python3[2212]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:42 np0005466030.novalocal python3[2236]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:43 np0005466030.novalocal python3[2260]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:43 np0005466030.novalocal python3[2284]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:43 np0005466030.novalocal python3[2308]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:43 np0005466030.novalocal python3[2332]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:44 np0005466030.novalocal python3[2356]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:44 np0005466030.novalocal python3[2380]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:44 np0005466030.novalocal python3[2404]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:45 np0005466030.novalocal python3[2428]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:45 np0005466030.novalocal python3[2452]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:45 np0005466030.novalocal python3[2476]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:45 np0005466030.novalocal python3[2500]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:46 np0005466030.novalocal python3[2524]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:46 np0005466030.novalocal python3[2548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:46 np0005466030.novalocal python3[2572]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:47 np0005466030.novalocal python3[2596]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:47 np0005466030.novalocal python3[2620]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:47 np0005466030.novalocal python3[2644]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:47 np0005466030.novalocal python3[2668]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:48 np0005466030.novalocal python3[2692]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:48 np0005466030.novalocal python3[2716]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:48 np0005466030.novalocal python3[2740]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:49 np0005466030.novalocal python3[2764]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:49 np0005466030.novalocal python3[2788]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:49 np0005466030.novalocal python3[2812]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:02:52 np0005466030.novalocal sudo[2836]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzkcwylqgkkchwvdjlbimbehvbsnsmk ; /usr/bin/python3'
Oct 02 11:02:52 np0005466030.novalocal sudo[2836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:52 np0005466030.novalocal python3[2838]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:02:52 np0005466030.novalocal systemd[1]: Starting Time & Date Service...
Oct 02 11:02:52 np0005466030.novalocal systemd[1]: Started Time & Date Service.
Oct 02 11:02:52 np0005466030.novalocal systemd-timedated[2840]: Changed time zone to 'UTC' (UTC).
Oct 02 11:02:52 np0005466030.novalocal sudo[2836]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:52 np0005466030.novalocal sudo[2867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtrhineqkfkhaxopdzjswicwzkkirooj ; /usr/bin/python3'
Oct 02 11:02:52 np0005466030.novalocal sudo[2867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:52 np0005466030.novalocal python3[2869]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:53 np0005466030.novalocal sudo[2867]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:53 np0005466030.novalocal python3[2945]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:53 np0005466030.novalocal python3[3016]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759402973.1979249-252-26345957052802/source _original_basename=tmpaz4hyegb follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:54 np0005466030.novalocal python3[3116]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:54 np0005466030.novalocal python3[3187]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402974.0780747-302-70111321585757/source _original_basename=tmpc70jnsba follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:55 np0005466030.novalocal sudo[3287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbhojxomdqhfimuvnpgcywktzvakmtm ; /usr/bin/python3'
Oct 02 11:02:55 np0005466030.novalocal sudo[3287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:55 np0005466030.novalocal python3[3289]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:55 np0005466030.novalocal sudo[3287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:55 np0005466030.novalocal sudo[3360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhrvmghkkddjzypjardtzqolhxnkbjnb ; /usr/bin/python3'
Oct 02 11:02:55 np0005466030.novalocal sudo[3360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:56 np0005466030.novalocal python3[3362]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759402975.2202332-382-182090364043033/source _original_basename=tmp_p7d9vbr follow=False checksum=543712d4d707f51827e90f243e5a01210e719e2f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:56 np0005466030.novalocal sudo[3360]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:56 np0005466030.novalocal python3[3410]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:02:56 np0005466030.novalocal python3[3436]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:02:57 np0005466030.novalocal sudo[3514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawrjqjtkoaqcnyxjreceaoyajqzqgzk ; /usr/bin/python3'
Oct 02 11:02:57 np0005466030.novalocal sudo[3514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:57 np0005466030.novalocal python3[3516]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:02:57 np0005466030.novalocal sudo[3514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:57 np0005466030.novalocal sudo[3587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohsioybtsxcierrokdonnbrqvepdhmwf ; /usr/bin/python3'
Oct 02 11:02:57 np0005466030.novalocal sudo[3587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:57 np0005466030.novalocal python3[3589]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759402977.047506-453-154571928777526/source _original_basename=tmp4knax7b7 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:02:57 np0005466030.novalocal sudo[3587]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:58 np0005466030.novalocal sudo[3638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roabhqltycevdqjppwykagvwarvoyvam ; /usr/bin/python3'
Oct 02 11:02:58 np0005466030.novalocal sudo[3638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:02:58 np0005466030.novalocal python3[3640]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-634d-add4-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:02:58 np0005466030.novalocal sudo[3638]: pam_unix(sudo:session): session closed for user root
Oct 02 11:02:59 np0005466030.novalocal python3[3668]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-634d-add4-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 02 11:03:00 np0005466030.novalocal python3[3696]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:22 np0005466030.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:03:28 np0005466030.novalocal sudo[3722]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izpcevbkdgqoauczvfpolrglvvcqbbpi ; /usr/bin/python3'
Oct 02 11:03:28 np0005466030.novalocal sudo[3722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:03:28 np0005466030.novalocal python3[3724]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:03:28 np0005466030.novalocal sudo[3722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:04:10 np0005466030.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Oct 02 11:04:10 np0005466030.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 02 11:04:10 np0005466030.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Oct 02 11:04:10 np0005466030.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 02 11:04:28 np0005466030.novalocal sshd-session[1092]: Received disconnect from 38.102.83.114 port 37082:11: disconnected by user
Oct 02 11:04:28 np0005466030.novalocal sshd-session[1092]: Disconnected from user zuul 38.102.83.114 port 37082
Oct 02 11:04:28 np0005466030.novalocal sshd-session[1077]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:04:28 np0005466030.novalocal systemd-logind[795]: Session 1 logged out. Waiting for processes to exit.
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 02 11:04:34 np0005466030.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 02 11:04:34 np0005466030.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.0865] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 11:04:34 np0005466030.novalocal systemd-udevd[3729]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1041] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1070] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1075] device (eth1): carrier: link connected
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1077] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1084] policy: auto-activating connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1089] device (eth1): Activation: starting connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1093] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1098] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:04:34 np0005466030.novalocal NetworkManager[859]: <info>  [1759403074.1103] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:04:34 np0005466030.novalocal systemd[1082]: Starting Mark boot as successful...
Oct 02 11:04:34 np0005466030.novalocal systemd[1082]: Finished Mark boot as successful.
Oct 02 11:04:35 np0005466030.novalocal sshd-session[3734]: Accepted publickey for zuul from 38.102.83.114 port 34844 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:04:35 np0005466030.novalocal systemd-logind[795]: New session 3 of user zuul.
Oct 02 11:04:35 np0005466030.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 02 11:04:35 np0005466030.novalocal sshd-session[3734]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:04:35 np0005466030.novalocal python3[3761]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-98ee-a261-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:04:45 np0005466030.novalocal sudo[3842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzghuqubzmbcbdtipuozyifafqefzuli ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 11:04:45 np0005466030.novalocal sudo[3842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:04:45 np0005466030.novalocal python3[3844]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:04:45 np0005466030.novalocal sudo[3842]: pam_unix(sudo:session): session closed for user root
Oct 02 11:04:45 np0005466030.novalocal sudo[3915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwivwsfujpoavojfynhveqppupugjrn ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 11:04:45 np0005466030.novalocal sudo[3915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:04:45 np0005466030.novalocal python3[3917]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759403085.1418831-155-105764592428768/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=7f4e5c46bae12badf8ec6f27cb3609054cb28e27 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:04:45 np0005466030.novalocal sudo[3915]: pam_unix(sudo:session): session closed for user root
Oct 02 11:04:46 np0005466030.novalocal sudo[3965]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtmqblmvlxjscndgzbsgkgbuirapgpwl ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 11:04:46 np0005466030.novalocal sudo[3965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:04:46 np0005466030.novalocal python3[3967]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.4966] caught SIGTERM, shutting down normally.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Stopping Network Manager...
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.4985] dhcp4 (eth0): canceled DHCP transaction
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.4985] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.4986] dhcp4 (eth0): state changed no lease
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.4989] manager: NetworkManager state is now CONNECTING
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.5083] dhcp4 (eth1): canceled DHCP transaction
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.5084] dhcp4 (eth1): state changed no lease
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[859]: <info>  [1759403086.5139] exiting (success)
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Stopped Network Manager.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: NetworkManager.service: Consumed 5.370s CPU time, 10.1M memory peak.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Starting Network Manager...
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.5987] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.5989] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6065] manager[0x561dc9c36070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Starting Hostname Service...
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Started Hostname Service.
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6804] hostname: hostname: using hostnamed
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6808] hostname: static hostname changed from (none) to "np0005466030.novalocal"
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6814] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6820] manager[0x561dc9c36070]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6821] manager[0x561dc9c36070]: rfkill: WWAN hardware radio set enabled
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6863] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6863] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6864] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6865] manager: Networking is enabled by state file
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6869] settings: Loaded settings plugin: keyfile (internal)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6875] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6909] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6921] dhcp: init: Using DHCP client 'internal'
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6925] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6933] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6941] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6953] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6964] device (eth0): carrier: link connected
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6970] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6978] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6979] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6988] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.6999] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7008] device (eth1): carrier: link connected
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7015] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7024] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968) (indicated)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7025] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7032] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7041] device (eth1): Activation: starting connection 'Wired connection 1' (6e2af74a-3979-3958-b10b-5fcb5c84b968)
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Started Network Manager.
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7053] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7059] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7064] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7068] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7073] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7089] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7094] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7097] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7101] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7112] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7117] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7134] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7139] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7158] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7162] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7169] device (lo): Activation: successful, device activated.
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7187] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7194] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 11:04:46 np0005466030.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7260] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7275] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7277] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7281] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7284] device (eth0): Activation: successful, device activated.
Oct 02 11:04:46 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403086.7290] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 11:04:46 np0005466030.novalocal sudo[3965]: pam_unix(sudo:session): session closed for user root
Oct 02 11:04:47 np0005466030.novalocal python3[4052]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-98ee-a261-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:04:56 np0005466030.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:05:16 np0005466030.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.3830] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:05:32 np0005466030.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:05:32 np0005466030.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4092] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4095] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4104] device (eth1): Activation: successful, device activated.
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4111] manager: startup complete
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4113] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <warn>  [1759403132.4118] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4128] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4243] dhcp4 (eth1): canceled DHCP transaction
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4244] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4244] dhcp4 (eth1): state changed no lease
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4256] policy: auto-activating connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4260] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4261] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4264] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4271] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4279] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4324] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4325] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:05:32 np0005466030.novalocal NetworkManager[3976]: <info>  [1759403132.4331] device (eth1): Activation: successful, device activated.
Oct 02 11:05:42 np0005466030.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:05:47 np0005466030.novalocal sshd-session[3737]: Received disconnect from 38.102.83.114 port 34844:11: disconnected by user
Oct 02 11:05:47 np0005466030.novalocal sshd-session[3737]: Disconnected from user zuul 38.102.83.114 port 34844
Oct 02 11:05:47 np0005466030.novalocal sshd-session[3734]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:05:47 np0005466030.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 02 11:05:47 np0005466030.novalocal systemd[1]: session-3.scope: Consumed 1.726s CPU time.
Oct 02 11:05:47 np0005466030.novalocal systemd-logind[795]: Session 3 logged out. Waiting for processes to exit.
Oct 02 11:05:47 np0005466030.novalocal systemd-logind[795]: Removed session 3.
Oct 02 11:06:27 np0005466030.novalocal sshd-session[4082]: Accepted publickey for zuul from 38.102.83.114 port 37172 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:06:27 np0005466030.novalocal systemd-logind[795]: New session 4 of user zuul.
Oct 02 11:06:27 np0005466030.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 02 11:06:27 np0005466030.novalocal sshd-session[4082]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:06:27 np0005466030.novalocal sudo[4161]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubjnysuudjuzmtztsxwncudekspinlbb ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 11:06:27 np0005466030.novalocal sudo[4161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:06:28 np0005466030.novalocal python3[4163]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:06:28 np0005466030.novalocal sudo[4161]: pam_unix(sudo:session): session closed for user root
Oct 02 11:06:28 np0005466030.novalocal sudo[4234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlqrknywvtdajzcwimkmloqpjlzwzxom ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 02 11:06:28 np0005466030.novalocal sudo[4234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:06:28 np0005466030.novalocal python3[4236]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403187.7121105-373-261862792232779/source _original_basename=tmpembgra3w follow=False checksum=c919e886c60bd4fe64e018977b3d3fbde98f63d3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:06:28 np0005466030.novalocal sudo[4234]: pam_unix(sudo:session): session closed for user root
Oct 02 11:06:31 np0005466030.novalocal sshd-session[4085]: Connection closed by 38.102.83.114 port 37172
Oct 02 11:06:31 np0005466030.novalocal sshd-session[4082]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:06:31 np0005466030.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 02 11:06:31 np0005466030.novalocal systemd-logind[795]: Session 4 logged out. Waiting for processes to exit.
Oct 02 11:06:31 np0005466030.novalocal systemd-logind[795]: Removed session 4.
Oct 02 11:08:10 np0005466030.novalocal systemd[1082]: Created slice User Background Tasks Slice.
Oct 02 11:08:10 np0005466030.novalocal systemd[1082]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 11:08:10 np0005466030.novalocal systemd[1082]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 11:12:53 np0005466030.novalocal sshd-session[4266]: Accepted publickey for zuul from 38.102.83.114 port 54034 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:12:53 np0005466030.novalocal systemd-logind[795]: New session 5 of user zuul.
Oct 02 11:12:53 np0005466030.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 02 11:12:53 np0005466030.novalocal sshd-session[4266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:12:53 np0005466030.novalocal sudo[4293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxmfedswrcpkbctsvctnjxkihigjvxd ; /usr/bin/python3'
Oct 02 11:12:53 np0005466030.novalocal sudo[4293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:54 np0005466030.novalocal python3[4295]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cac-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:12:54 np0005466030.novalocal sudo[4293]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:54 np0005466030.novalocal sudo[4322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lllmzzwlrlzapweovuqueuirkskwybyq ; /usr/bin/python3'
Oct 02 11:12:54 np0005466030.novalocal sudo[4322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:54 np0005466030.novalocal python3[4324]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:12:54 np0005466030.novalocal sudo[4322]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:54 np0005466030.novalocal sudo[4348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxolnbmotweqebyxftvdywrokakxorpe ; /usr/bin/python3'
Oct 02 11:12:54 np0005466030.novalocal sudo[4348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:55 np0005466030.novalocal python3[4350]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:12:55 np0005466030.novalocal sudo[4348]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:55 np0005466030.novalocal sudo[4374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smzqqfxfdlaamvwfxzhgoqgqedirvhun ; /usr/bin/python3'
Oct 02 11:12:55 np0005466030.novalocal sudo[4374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:55 np0005466030.novalocal python3[4376]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:12:55 np0005466030.novalocal sudo[4374]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:55 np0005466030.novalocal sudo[4400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncnusuizhabtsgkutmahbupqghnnnvqg ; /usr/bin/python3'
Oct 02 11:12:55 np0005466030.novalocal sudo[4400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:55 np0005466030.novalocal python3[4402]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:12:55 np0005466030.novalocal sudo[4400]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:55 np0005466030.novalocal sudo[4426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phakfcuyxpswbcuzzdfhtstwasyvkwsg ; /usr/bin/python3'
Oct 02 11:12:55 np0005466030.novalocal sudo[4426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:56 np0005466030.novalocal python3[4428]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:12:56 np0005466030.novalocal python3[4428]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 02 11:12:56 np0005466030.novalocal sudo[4426]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:56 np0005466030.novalocal sudo[4452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxqtnxlgftcotvogyayedpkfsoavsji ; /usr/bin/python3'
Oct 02 11:12:56 np0005466030.novalocal sudo[4452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:56 np0005466030.novalocal python3[4454]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:12:56 np0005466030.novalocal systemd[1]: Reloading.
Oct 02 11:12:56 np0005466030.novalocal systemd-rc-local-generator[4477]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:12:57 np0005466030.novalocal systemd[1]: Starting dnf makecache...
Oct 02 11:12:57 np0005466030.novalocal sudo[4452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:58 np0005466030.novalocal dnf[4485]: Failed determining last makecache time.
Oct 02 11:12:58 np0005466030.novalocal sudo[4509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhnfwifptgxhvvliajxbqsynxkqgipo ; /usr/bin/python3'
Oct 02 11:12:58 np0005466030.novalocal sudo[4509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:58 np0005466030.novalocal python3[4511]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 02 11:12:58 np0005466030.novalocal sudo[4509]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:58 np0005466030.novalocal dnf[4485]: CentOS Stream 9 - BaseOS                         41 kB/s | 6.7 kB     00:00
Oct 02 11:12:58 np0005466030.novalocal sudo[4541]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruoqgxsikucrbpyyhtmouosloxhmzyej ; /usr/bin/python3'
Oct 02 11:12:58 np0005466030.novalocal sudo[4541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:59 np0005466030.novalocal python3[4543]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:12:59 np0005466030.novalocal dnf[4485]: CentOS Stream 9 - AppStream                      74 kB/s | 6.8 kB     00:00
Oct 02 11:12:59 np0005466030.novalocal sudo[4541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:59 np0005466030.novalocal sudo[4570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aktqhcejommcfewikzksqctnvddnvexw ; /usr/bin/python3'
Oct 02 11:12:59 np0005466030.novalocal sudo[4570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:59 np0005466030.novalocal python3[4572]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:12:59 np0005466030.novalocal sudo[4570]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:59 np0005466030.novalocal sudo[4598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onmdbrcqewrraeagnsnuuwegedibmcjm ; /usr/bin/python3'
Oct 02 11:12:59 np0005466030.novalocal sudo[4598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:12:59 np0005466030.novalocal python3[4600]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:12:59 np0005466030.novalocal sudo[4598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:12:59 np0005466030.novalocal sudo[4626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbkadrfjxgxuqwbzkbsmqnptomnfonlb ; /usr/bin/python3'
Oct 02 11:12:59 np0005466030.novalocal sudo[4626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:13:00 np0005466030.novalocal dnf[4485]: CentOS Stream 9 - CRB                            71 kB/s | 6.6 kB     00:00
Oct 02 11:13:00 np0005466030.novalocal python3[4628]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:13:00 np0005466030.novalocal sudo[4626]: pam_unix(sudo:session): session closed for user root
Oct 02 11:13:00 np0005466030.novalocal dnf[4485]: CentOS Stream 9 - Extras packages                79 kB/s | 8.0 kB     00:00
Oct 02 11:13:00 np0005466030.novalocal dnf[4485]: Metadata cache created.
Oct 02 11:13:00 np0005466030.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 11:13:00 np0005466030.novalocal systemd[1]: Finished dnf makecache.
Oct 02 11:13:00 np0005466030.novalocal python3[4657]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-383d-cd01-000000000cb2-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:13:01 np0005466030.novalocal python3[4688]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:13:04 np0005466030.novalocal sshd-session[4269]: Connection closed by 38.102.83.114 port 54034
Oct 02 11:13:04 np0005466030.novalocal sshd-session[4266]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:13:04 np0005466030.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 02 11:13:04 np0005466030.novalocal systemd[1]: session-5.scope: Consumed 3.506s CPU time.
Oct 02 11:13:04 np0005466030.novalocal systemd-logind[795]: Session 5 logged out. Waiting for processes to exit.
Oct 02 11:13:04 np0005466030.novalocal systemd-logind[795]: Removed session 5.
Oct 02 11:13:05 np0005466030.novalocal sshd-session[4692]: Accepted publickey for zuul from 38.102.83.114 port 50184 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:13:05 np0005466030.novalocal systemd-logind[795]: New session 6 of user zuul.
Oct 02 11:13:05 np0005466030.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 02 11:13:05 np0005466030.novalocal sshd-session[4692]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:13:05 np0005466030.novalocal sudo[4719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximtnmkqgsckypauevzonjqycveowlhu ; /usr/bin/python3'
Oct 02 11:13:05 np0005466030.novalocal sudo[4719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:13:06 np0005466030.novalocal python3[4721]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:13:34 np0005466030.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:13:35 np0005466030.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:13:50 np0005466030.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  Converting 365 SID table entries...
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:14:01 np0005466030.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:14:03 np0005466030.novalocal setsebool[4784]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 02 11:14:03 np0005466030.novalocal setsebool[4784]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  Converting 368 SID table entries...
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:14:14 np0005466030.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:14:27 np0005466030.novalocal sshd-session[4807]: banner exchange: Connection from 106.39.129.134 port 42728: invalid format
Oct 02 11:14:38 np0005466030.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 11:14:38 np0005466030.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:14:38 np0005466030.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:14:38 np0005466030.novalocal systemd[1]: Reloading.
Oct 02 11:14:39 np0005466030.novalocal systemd-rc-local-generator[5539]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:14:39 np0005466030.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:14:41 np0005466030.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 02 11:14:41 np0005466030.novalocal PackageKit[6912]: daemon start
Oct 02 11:14:41 np0005466030.novalocal systemd[1]: Starting Authorization Manager...
Oct 02 11:14:41 np0005466030.novalocal polkitd[6956]: Started polkitd version 0.117
Oct 02 11:14:42 np0005466030.novalocal polkitd[6956]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:14:42 np0005466030.novalocal polkitd[6956]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:14:42 np0005466030.novalocal polkitd[6956]: Finished loading, compiling and executing 3 rules
Oct 02 11:14:42 np0005466030.novalocal systemd[1]: Started Authorization Manager.
Oct 02 11:14:42 np0005466030.novalocal polkitd[6956]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 02 11:14:42 np0005466030.novalocal systemd[1]: Started PackageKit Daemon.
Oct 02 11:14:42 np0005466030.novalocal sshd-session[5589]: Invalid user wqmarlduiqkmgs from 106.39.129.134 port 51356
Oct 02 11:14:42 np0005466030.novalocal sshd-session[5589]: fatal: userauth_pubkey: parse publickey packet: incomplete message [preauth]
Oct 02 11:14:43 np0005466030.novalocal sudo[4719]: pam_unix(sudo:session): session closed for user root
Oct 02 11:14:43 np0005466030.novalocal python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-52da-2fd4-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:14:44 np0005466030.novalocal kernel: evm: overlay not supported
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: Starting D-Bus User Message Bus...
Oct 02 11:14:44 np0005466030.novalocal dbus-broker-launch[8913]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 02 11:14:44 np0005466030.novalocal dbus-broker-launch[8913]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: Started D-Bus User Message Bus.
Oct 02 11:14:44 np0005466030.novalocal dbus-broker-lau[8913]: Ready
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: Created slice Slice /user.
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: podman-8791.scope: unit configures an IP firewall, but not running as root.
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: (This warning is only shown for the first unit using IP firewalling.)
Oct 02 11:14:44 np0005466030.novalocal systemd[1082]: Started podman-8791.scope.
Oct 02 11:14:45 np0005466030.novalocal systemd[1082]: Started podman-pause-72f6f642.scope.
Oct 02 11:14:45 np0005466030.novalocal sudo[9566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdqugacmitcbjsepciskbgqwdattjim ; /usr/bin/python3'
Oct 02 11:14:45 np0005466030.novalocal sudo[9566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:14:45 np0005466030.novalocal python3[9591]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                      location = "38.102.83.136:5001"
                                                      insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                      location = "38.102.83.136:5001"
                                                      insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:14:45 np0005466030.novalocal sudo[9566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:14:46 np0005466030.novalocal sshd-session[4695]: Connection closed by 38.102.83.114 port 50184
Oct 02 11:14:46 np0005466030.novalocal sshd-session[4692]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:14:46 np0005466030.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Oct 02 11:14:46 np0005466030.novalocal systemd[1]: session-6.scope: Consumed 1min 4.399s CPU time.
Oct 02 11:14:46 np0005466030.novalocal systemd-logind[795]: Session 6 logged out. Waiting for processes to exit.
Oct 02 11:14:46 np0005466030.novalocal systemd-logind[795]: Removed session 6.
Oct 02 11:15:06 np0005466030.novalocal sshd-session[16473]: Connection closed by 38.129.56.116 port 41414 [preauth]
Oct 02 11:15:06 np0005466030.novalocal sshd-session[16479]: Connection closed by 38.129.56.116 port 41424 [preauth]
Oct 02 11:15:06 np0005466030.novalocal sshd-session[16476]: Unable to negotiate with 38.129.56.116 port 41428: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 02 11:15:06 np0005466030.novalocal sshd-session[16475]: Unable to negotiate with 38.129.56.116 port 41444: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 02 11:15:06 np0005466030.novalocal sshd-session[16478]: Unable to negotiate with 38.129.56.116 port 41460: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 02 11:15:11 np0005466030.novalocal sshd-session[17814]: Accepted publickey for zuul from 38.102.83.114 port 38008 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:15:11 np0005466030.novalocal systemd-logind[795]: New session 7 of user zuul.
Oct 02 11:15:11 np0005466030.novalocal systemd[1]: Started Session 7 of User zuul.
Oct 02 11:15:11 np0005466030.novalocal sshd-session[17814]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:15:11 np0005466030.novalocal python3[17889]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:15:11 np0005466030.novalocal sudo[18022]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxqhjpwxtvuxnafdunlhwgysmmnqpvt ; /usr/bin/python3'
Oct 02 11:15:11 np0005466030.novalocal sudo[18022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:11 np0005466030.novalocal python3[18033]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:15:11 np0005466030.novalocal sudo[18022]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:12 np0005466030.novalocal sshd-session[18221]: banner exchange: Connection from 91.238.181.95 port 65208: invalid format
Oct 02 11:15:12 np0005466030.novalocal sudo[18310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzibwmoescvvvfzxozoxqeysbbwfutqc ; /usr/bin/python3'
Oct 02 11:15:12 np0005466030.novalocal sudo[18310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:12 np0005466030.novalocal python3[18319]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005466030.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 02 11:15:13 np0005466030.novalocal useradd[18376]: new group: name=cloud-admin, GID=1002
Oct 02 11:15:13 np0005466030.novalocal useradd[18376]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 02 11:15:13 np0005466030.novalocal sudo[18310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:13 np0005466030.novalocal sudo[18574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxijuvkaflrlokvekvlollvlxlovucfr ; /usr/bin/python3'
Oct 02 11:15:13 np0005466030.novalocal sudo[18574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:13 np0005466030.novalocal python3[18581]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNsTHDrxDbYOuMoD7926Svbn4szasIp+JBKPLUL9nua54ooI00ganN5oLgNtcW5XoiwXGhTq8QJyxUTZ1zUgiQs= zuul@np0005466028.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 02 11:15:13 np0005466030.novalocal sudo[18574]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:13 np0005466030.novalocal sudo[18823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvzboatoaylpeysofhwuwolnkmhsurro ; /usr/bin/python3'
Oct 02 11:15:13 np0005466030.novalocal sudo[18823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:14 np0005466030.novalocal python3[18830]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:15:14 np0005466030.novalocal sudo[18823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:14 np0005466030.novalocal sudo[19087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-astoagzfeywxaydsrpordqvphgndjhhh ; /usr/bin/python3'
Oct 02 11:15:14 np0005466030.novalocal sudo[19087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:14 np0005466030.novalocal python3[19101]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759403713.8318872-169-209930373179287/source _original_basename=tmpkuinfmsf follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:15:14 np0005466030.novalocal sudo[19087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:15 np0005466030.novalocal sudo[19410]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujsxidrqdbduztjwjlmtytzzmmscibj ; /usr/bin/python3'
Oct 02 11:15:15 np0005466030.novalocal sudo[19410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:15:15 np0005466030.novalocal python3[19417]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct 02 11:15:15 np0005466030.novalocal systemd[1]: Starting Hostname Service...
Oct 02 11:15:15 np0005466030.novalocal systemd[1]: Started Hostname Service.
Oct 02 11:15:15 np0005466030.novalocal systemd-hostnamed[19531]: Changed pretty hostname to 'compute-1'
Oct 02 11:15:15 compute-1 systemd-hostnamed[19531]: Hostname set to <compute-1> (static)
Oct 02 11:15:15 compute-1 NetworkManager[3976]: <info>  [1759403715.6796] hostname: static hostname changed from "np0005466030.novalocal" to "compute-1"
Oct 02 11:15:15 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:15:15 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:15:15 compute-1 sudo[19410]: pam_unix(sudo:session): session closed for user root
Oct 02 11:15:15 compute-1 sshd-session[17838]: Connection closed by 38.102.83.114 port 38008
Oct 02 11:15:15 compute-1 sshd-session[17814]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:15:15 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Oct 02 11:15:15 compute-1 systemd[1]: session-7.scope: Consumed 2.360s CPU time.
Oct 02 11:15:15 compute-1 systemd-logind[795]: Session 7 logged out. Waiting for processes to exit.
Oct 02 11:15:15 compute-1 systemd-logind[795]: Removed session 7.
Oct 02 11:15:25 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:15:40 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:15:40 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:15:40 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1min 4.418s CPU time.
Oct 02 11:15:40 compute-1 systemd[1]: run-rf7cbedc196ca45e1a75ea832b5edd2bc.service: Deactivated successfully.
Oct 02 11:15:45 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:19:14 compute-1 sshd-session[26616]: Accepted publickey for zuul from 38.129.56.116 port 52636 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:19:14 compute-1 systemd-logind[795]: New session 8 of user zuul.
Oct 02 11:19:14 compute-1 systemd[1]: Started Session 8 of User zuul.
Oct 02 11:19:14 compute-1 sshd-session[26616]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:19:15 compute-1 python3[26692]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:19:16 compute-1 sudo[26806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oihzvcclnacmclmdbobpjhhbypmzeccu ; /usr/bin/python3'
Oct 02 11:19:16 compute-1 sudo[26806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:16 compute-1 python3[26808]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:16 compute-1 sudo[26806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:17 compute-1 sudo[26879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsddzlbhtxakhduizhfihsbkgivaaaao ; /usr/bin/python3'
Oct 02 11:19:17 compute-1 sudo[26879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:17 compute-1 python3[26881]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean.repo follow=False checksum=bb4c2ff9dad546f135d54d9729ea11b84117755d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:17 compute-1 sudo[26879]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:17 compute-1 sudo[26905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpuwcisacaofykyxsbjbmkxyfwbpavf ; /usr/bin/python3'
Oct 02 11:19:17 compute-1 sudo[26905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:17 compute-1 python3[26907]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:17 compute-1 sudo[26905]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:17 compute-1 sudo[26978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosokjxljfvzvkuhcwysrqvwmryfrkyd ; /usr/bin/python3'
Oct 02 11:19:17 compute-1 sudo[26978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:18 compute-1 python3[26980]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:18 compute-1 sudo[26978]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:18 compute-1 sudo[27004]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olututvldriijmljvjgtybdszlgosmel ; /usr/bin/python3'
Oct 02 11:19:18 compute-1 sudo[27004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:18 compute-1 python3[27006]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:18 compute-1 sudo[27004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:18 compute-1 sudo[27077]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmgizfnrqbtmqiculupagtbhjgdunxo ; /usr/bin/python3'
Oct 02 11:19:18 compute-1 sudo[27077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:18 compute-1 python3[27079]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:18 compute-1 sudo[27077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:18 compute-1 sudo[27103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbexhfigggksydslmesxtwlqdegxlyt ; /usr/bin/python3'
Oct 02 11:19:18 compute-1 sudo[27103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:18 compute-1 python3[27105]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:18 compute-1 sudo[27103]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:19 compute-1 sudo[27176]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fueivmtmvhcnfocrclzqvznoctxzlcrb ; /usr/bin/python3'
Oct 02 11:19:19 compute-1 sudo[27176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:19 compute-1 python3[27178]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:19 compute-1 sudo[27176]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:19 compute-1 sudo[27202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cthvehkdtvcjkscwdhtqtksidbuquaso ; /usr/bin/python3'
Oct 02 11:19:19 compute-1 sudo[27202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:19 compute-1 python3[27204]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:19 compute-1 sudo[27202]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:19 compute-1 sudo[27275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxlipghhsjbkqnobpeaaqdonbtrgmice ; /usr/bin/python3'
Oct 02 11:19:19 compute-1 sudo[27275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:19 compute-1 python3[27277]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:19 compute-1 sudo[27275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:19 compute-1 sudo[27301]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rirywwoqjlmdoybopfztnnfcgundhslz ; /usr/bin/python3'
Oct 02 11:19:19 compute-1 sudo[27301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:20 compute-1 python3[27303]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:20 compute-1 sudo[27301]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:20 compute-1 sudo[27374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bafjvfijusmnjkktwlpkyvdkaaksebqj ; /usr/bin/python3'
Oct 02 11:19:20 compute-1 sudo[27374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:20 compute-1 python3[27376]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:20 compute-1 sudo[27374]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:20 compute-1 sudo[27400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byboidlrnclmxpdbxletalgtqemigdmn ; /usr/bin/python3'
Oct 02 11:19:20 compute-1 sudo[27400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:20 compute-1 python3[27402]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:19:20 compute-1 sudo[27400]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:20 compute-1 sudo[27473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdcovgexzysdfmjloelzdvkdwcdsgez ; /usr/bin/python3'
Oct 02 11:19:20 compute-1 sudo[27473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:19:21 compute-1 python3[27475]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759403956.5979817-30636-14518536914006/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=d911291791b114a72daf18f370e91cb1ae300933 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:19:21 compute-1 sudo[27473]: pam_unix(sudo:session): session closed for user root
Oct 02 11:19:33 compute-1 python3[27523]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:19:47 compute-1 PackageKit[6912]: daemon quit
Oct 02 11:19:47 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:24:32 compute-1 sshd-session[26619]: Received disconnect from 38.129.56.116 port 52636:11: disconnected by user
Oct 02 11:24:32 compute-1 sshd-session[26619]: Disconnected from user zuul 38.129.56.116 port 52636
Oct 02 11:24:32 compute-1 sshd-session[26616]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:24:32 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Oct 02 11:24:32 compute-1 systemd[1]: session-8.scope: Consumed 5.101s CPU time.
Oct 02 11:24:32 compute-1 systemd-logind[795]: Session 8 logged out. Waiting for processes to exit.
Oct 02 11:24:32 compute-1 systemd-logind[795]: Removed session 8.
Oct 02 11:27:23 compute-1 sshd-session[27529]: banner exchange: Connection from 93.123.109.214 port 56242: invalid format
Oct 02 11:27:23 compute-1 sshd-session[27530]: banner exchange: Connection from 93.123.109.214 port 56254: invalid format
Oct 02 11:34:07 compute-1 sshd-session[27533]: Accepted publickey for zuul from 192.168.122.30 port 60608 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:34:07 compute-1 systemd-logind[795]: New session 9 of user zuul.
Oct 02 11:34:07 compute-1 systemd[1]: Started Session 9 of User zuul.
Oct 02 11:34:07 compute-1 sshd-session[27533]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:34:08 compute-1 python3.9[27686]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:34:09 compute-1 sudo[27865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egegasjkgkgtslxyqbpwdfyhywisrtsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404849.0376916-62-215793065859752/AnsiballZ_command.py'
Oct 02 11:34:09 compute-1 sudo[27865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:09 compute-1 python3.9[27867]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:17 compute-1 sudo[27865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:18 compute-1 sshd-session[27536]: Connection closed by 192.168.122.30 port 60608
Oct 02 11:34:18 compute-1 sshd-session[27533]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:34:18 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Oct 02 11:34:18 compute-1 systemd[1]: session-9.scope: Consumed 8.988s CPU time.
Oct 02 11:34:18 compute-1 systemd-logind[795]: Session 9 logged out. Waiting for processes to exit.
Oct 02 11:34:18 compute-1 systemd-logind[795]: Removed session 9.
Oct 02 11:34:33 compute-1 sshd-session[27924]: Accepted publickey for zuul from 192.168.122.30 port 34806 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:34:33 compute-1 systemd-logind[795]: New session 10 of user zuul.
Oct 02 11:34:33 compute-1 systemd[1]: Started Session 10 of User zuul.
Oct 02 11:34:33 compute-1 sshd-session[27924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:34:34 compute-1 python3.9[28077]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 11:34:35 compute-1 python3.9[28251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:34:36 compute-1 sudo[28401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tipntjyslofnutniqmlgjyvodkmfnfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404876.0615377-99-211962124536417/AnsiballZ_command.py'
Oct 02 11:34:36 compute-1 sudo[28401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:36 compute-1 python3.9[28403]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:34:36 compute-1 sudo[28401]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:37 compute-1 sudo[28554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvjtaaenckoydefgdnzyjhwmzkhbovgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404877.045063-135-58456512352869/AnsiballZ_stat.py'
Oct 02 11:34:37 compute-1 sudo[28554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:37 compute-1 python3.9[28556]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:34:37 compute-1 sudo[28554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:38 compute-1 sudo[28706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkshmiffjohfndfmhhybellxqvqlfba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404877.8627381-159-149769240856584/AnsiballZ_file.py'
Oct 02 11:34:38 compute-1 sudo[28706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:38 compute-1 python3.9[28708]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:38 compute-1 sudo[28706]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:38 compute-1 sudo[28858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmemjtpgczyyumhtjefisbzmnfyoctvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404878.6571164-183-36581701069383/AnsiballZ_stat.py'
Oct 02 11:34:38 compute-1 sudo[28858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:39 compute-1 python3.9[28860]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:34:39 compute-1 sudo[28858]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:39 compute-1 sudo[28981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzrtuoiwjuxzdiilpebzewgvehympwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404878.6571164-183-36581701069383/AnsiballZ_copy.py'
Oct 02 11:34:39 compute-1 sudo[28981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:39 compute-1 python3.9[28983]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759404878.6571164-183-36581701069383/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:39 compute-1 sudo[28981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:40 compute-1 sudo[29133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otaavbslqmkujwlmmaatfvwvaqpgrvbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404880.0231519-228-213727527767219/AnsiballZ_setup.py'
Oct 02 11:34:40 compute-1 sudo[29133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:40 compute-1 python3.9[29135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:34:40 compute-1 sudo[29133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:41 compute-1 sudo[29289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldepnylhigehsfzbfdoiqzvppsxccqoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404881.1077406-252-229595358601384/AnsiballZ_file.py'
Oct 02 11:34:41 compute-1 sudo[29289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:41 compute-1 python3.9[29291]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:34:41 compute-1 sudo[29289]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:42 compute-1 python3.9[29441]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:34:48 compute-1 python3.9[29696]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:34:49 compute-1 python3.9[29846]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:34:50 compute-1 python3.9[30000]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:34:51 compute-1 sudo[30156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thnpwcmedoztxfujqjnnncpursrjzgaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404890.977372-396-223945890761880/AnsiballZ_setup.py'
Oct 02 11:34:51 compute-1 sudo[30156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:51 compute-1 python3.9[30158]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:34:51 compute-1 sudo[30156]: pam_unix(sudo:session): session closed for user root
Oct 02 11:34:52 compute-1 sudo[30240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsgjddwhuicmzbteprcmavmqdfzlxpkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759404890.977372-396-223945890761880/AnsiballZ_dnf.py'
Oct 02 11:34:52 compute-1 sudo[30240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:34:52 compute-1 python3.9[30242]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:35:38 compute-1 systemd[1]: Reloading.
Oct 02 11:35:38 compute-1 systemd-rc-local-generator[30434]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:35:38 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 02 11:35:38 compute-1 systemd[1]: Reloading.
Oct 02 11:35:38 compute-1 systemd-rc-local-generator[30475]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:35:39 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 02 11:35:39 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 02 11:35:39 compute-1 systemd[1]: Reloading.
Oct 02 11:35:39 compute-1 systemd-rc-local-generator[30514]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:35:39 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 02 11:35:39 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:35:39 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:35:39 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:36:51 compute-1 kernel: SELinux:  Converting 2713 SID table entries...
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:36:51 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:36:51 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 02 11:36:51 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:36:51 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:36:51 compute-1 systemd[1]: Reloading.
Oct 02 11:36:51 compute-1 systemd-rc-local-generator[30838]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:36:51 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:36:51 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct 02 11:36:51 compute-1 PackageKit[31077]: daemon start
Oct 02 11:36:52 compute-1 systemd[1]: Started PackageKit Daemon.
Oct 02 11:36:52 compute-1 sudo[30240]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:52 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:36:52 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:36:52 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.251s CPU time.
Oct 02 11:36:52 compute-1 systemd[1]: run-r46e943053d2a44bf92f9795910d703a1.service: Deactivated successfully.
Oct 02 11:36:52 compute-1 sudo[31758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alvzyujdrboaoqrzeejnoehsqnfodwhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405012.4000368-432-63233457845002/AnsiballZ_command.py'
Oct 02 11:36:52 compute-1 sudo[31758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:52 compute-1 python3.9[31760]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:36:53 compute-1 sudo[31758]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:55 compute-1 sudo[32039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxqsomhwzvooeclpxnrabqfwshfajcfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405014.923777-456-239620926214309/AnsiballZ_selinux.py'
Oct 02 11:36:55 compute-1 sudo[32039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:55 compute-1 python3.9[32041]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 11:36:55 compute-1 sudo[32039]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:56 compute-1 sudo[32191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfxcpkmhzijzqnzagkdwbfhuvvuhchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405016.3777483-489-188885680745840/AnsiballZ_command.py'
Oct 02 11:36:56 compute-1 sudo[32191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:56 compute-1 python3.9[32193]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 11:36:57 compute-1 sudo[32191]: pam_unix(sudo:session): session closed for user root
Oct 02 11:36:58 compute-1 sudo[32345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbtotyliiabpwlkiybgighjdkooxnbzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405018.062464-513-253812576063269/AnsiballZ_file.py'
Oct 02 11:36:58 compute-1 sudo[32345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:36:59 compute-1 python3.9[32347]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:36:59 compute-1 sudo[32345]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:00 compute-1 sudo[32497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxjycldsuxjljqwmixsbeugzofaswvkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405019.7173831-537-184223250045894/AnsiballZ_mount.py'
Oct 02 11:37:00 compute-1 sudo[32497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:00 compute-1 python3.9[32499]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 11:37:00 compute-1 sudo[32497]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:01 compute-1 sudo[32649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhluilidmadhsgwpugkwaonyhydkrftl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405021.254918-621-49133578490405/AnsiballZ_file.py'
Oct 02 11:37:01 compute-1 sudo[32649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:01 compute-1 python3.9[32651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:01 compute-1 sudo[32649]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:02 compute-1 sudo[32801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsaekktklhqcfbwvpkdpcnztfghroyfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405021.9074333-645-204581355465243/AnsiballZ_stat.py'
Oct 02 11:37:02 compute-1 sudo[32801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:02 compute-1 python3.9[32803]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:02 compute-1 sudo[32801]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:02 compute-1 sudo[32924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhaniacnovnsbfyxuzhjoyglywlwxfsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405021.9074333-645-204581355465243/AnsiballZ_copy.py'
Oct 02 11:37:02 compute-1 sudo[32924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:02 compute-1 python3.9[32926]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405021.9074333-645-204581355465243/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:37:02 compute-1 sudo[32924]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:07 compute-1 sudo[33077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbgjayekjzjdhxkjbanxmtwwpzqwmxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405027.0865543-726-211124805797332/AnsiballZ_getent.py'
Oct 02 11:37:07 compute-1 sudo[33077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:07 compute-1 python3.9[33079]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 11:37:07 compute-1 sudo[33077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:08 compute-1 sudo[33230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnvcuasntdagbzoqqxfaqaxmcwdejsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405027.9478824-750-100011858103171/AnsiballZ_group.py'
Oct 02 11:37:08 compute-1 sudo[33230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:08 compute-1 python3.9[33232]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:37:08 compute-1 groupadd[33233]: group added to /etc/group: name=qemu, GID=107
Oct 02 11:37:08 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:37:08 compute-1 groupadd[33233]: group added to /etc/gshadow: name=qemu
Oct 02 11:37:08 compute-1 groupadd[33233]: new group: name=qemu, GID=107
Oct 02 11:37:08 compute-1 sudo[33230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:09 compute-1 sudo[33389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewceguxtdmcqskwmwrxirmngmihphegy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405029.0979147-774-266321995195218/AnsiballZ_user.py'
Oct 02 11:37:09 compute-1 sudo[33389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:09 compute-1 python3.9[33391]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:37:09 compute-1 useradd[33393]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:37:09 compute-1 sudo[33389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:10 compute-1 sudo[33549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrbtrbdkjzleknsmudmnkrnvzxqzzwfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405030.2372956-798-26147469154722/AnsiballZ_getent.py'
Oct 02 11:37:10 compute-1 sudo[33549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:10 compute-1 python3.9[33551]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 11:37:10 compute-1 sudo[33549]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:11 compute-1 sudo[33702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrpxrkzjiqucizmnckgmmnsjkrowoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405030.9561477-822-253576034400893/AnsiballZ_group.py'
Oct 02 11:37:11 compute-1 sudo[33702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:11 compute-1 python3.9[33704]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:37:11 compute-1 groupadd[33705]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 02 11:37:11 compute-1 groupadd[33705]: group added to /etc/gshadow: name=hugetlbfs
Oct 02 11:37:11 compute-1 groupadd[33705]: new group: name=hugetlbfs, GID=42477
Oct 02 11:37:11 compute-1 sudo[33702]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:12 compute-1 sudo[33860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytfihczxwcpkjxsdszrozvtqhvxevrfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405031.9124734-849-52465555808132/AnsiballZ_file.py'
Oct 02 11:37:12 compute-1 sudo[33860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:12 compute-1 python3.9[33862]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 11:37:12 compute-1 sudo[33860]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:13 compute-1 sudo[34012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxmfrkuplykxwkabpkkmmqeneczgcwww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405032.7816718-882-89901777505237/AnsiballZ_dnf.py'
Oct 02 11:37:13 compute-1 sudo[34012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:13 compute-1 python3.9[34014]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:37:15 compute-1 sudo[34012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:15 compute-1 sudo[34165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvfkxuvteamvqkwstjtecobyqcqqjgln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405035.468736-906-117043478127105/AnsiballZ_file.py'
Oct 02 11:37:15 compute-1 sudo[34165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:15 compute-1 python3.9[34167]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:15 compute-1 sudo[34165]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:16 compute-1 sudo[34317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnhoihifnbtkyxgoxmukrxlfynvrlcuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405036.151181-930-18834669000000/AnsiballZ_stat.py'
Oct 02 11:37:16 compute-1 sudo[34317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:16 compute-1 python3.9[34319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:16 compute-1 sudo[34317]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:16 compute-1 sudo[34440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svwauucxvehqhslywyrdeosqctcknaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405036.151181-930-18834669000000/AnsiballZ_copy.py'
Oct 02 11:37:16 compute-1 sudo[34440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:17 compute-1 python3.9[34442]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405036.151181-930-18834669000000/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:17 compute-1 sudo[34440]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:17 compute-1 sudo[34592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agpddydipxdnebmezgxoidjfjucfllpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405037.3273134-975-66293455571007/AnsiballZ_systemd.py'
Oct 02 11:37:17 compute-1 sudo[34592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:18 compute-1 python3.9[34594]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:37:18 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 02 11:37:18 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 02 11:37:18 compute-1 kernel: Bridge firewalling registered
Oct 02 11:37:18 compute-1 systemd-modules-load[34598]: Inserted module 'br_netfilter'
Oct 02 11:37:18 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 02 11:37:18 compute-1 sudo[34592]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:18 compute-1 sudo[34751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdqutrwyuljbbhcihxthtxggsxthqnuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405038.5456245-999-86428888645326/AnsiballZ_stat.py'
Oct 02 11:37:18 compute-1 sudo[34751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:18 compute-1 python3.9[34753]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:37:18 compute-1 sudo[34751]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:19 compute-1 sudo[34874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osfealymhcakxqlksxitsipvozuknqfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405038.5456245-999-86428888645326/AnsiballZ_copy.py'
Oct 02 11:37:19 compute-1 sudo[34874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:19 compute-1 python3.9[34876]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405038.5456245-999-86428888645326/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:37:19 compute-1 sudo[34874]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:20 compute-1 sudo[35026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lonnzzttohygcdxemiwdwopqghhrmibx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405040.0088809-1053-150344891775877/AnsiballZ_dnf.py'
Oct 02 11:37:20 compute-1 sudo[35026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:20 compute-1 python3.9[35028]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:37:24 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:37:24 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:37:24 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:37:24 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:37:24 compute-1 systemd[1]: Reloading.
Oct 02 11:37:24 compute-1 systemd-rc-local-generator[35090]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:37:25 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:37:25 compute-1 sudo[35026]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:26 compute-1 python3.9[36372]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:37:27 compute-1 python3.9[37471]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 11:37:28 compute-1 python3.9[38269]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:37:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:37:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:37:28 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.685s CPU time.
Oct 02 11:37:28 compute-1 systemd[1]: run-re2169240e7bd48b4b6cf77e63f710d3a.service: Deactivated successfully.
Oct 02 11:37:28 compute-1 sudo[39231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjecoggemvfgrjqyhuzcstjigwvdvzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405048.5760286-1170-8964034057487/AnsiballZ_command.py'
Oct 02 11:37:28 compute-1 sudo[39231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:29 compute-1 python3.9[39233]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:29 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:37:29 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:37:29 compute-1 sudo[39231]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:30 compute-1 sudo[39604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjmatolhdjrtnrxoamvfwylcmmdqwqfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405050.4103467-1197-194258067103467/AnsiballZ_systemd.py'
Oct 02 11:37:30 compute-1 sudo[39604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:31 compute-1 python3.9[39606]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:31 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 11:37:31 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 11:37:31 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 11:37:31 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:37:31 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:37:31 compute-1 sudo[39604]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:32 compute-1 python3.9[39767]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 11:37:35 compute-1 sudo[39917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkorznenykcwrdddzhmtbjsbbsnehreb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405054.9547594-1369-119446815410250/AnsiballZ_systemd.py'
Oct 02 11:37:35 compute-1 sudo[39917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:35 compute-1 python3.9[39919]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:35 compute-1 systemd[1]: Reloading.
Oct 02 11:37:35 compute-1 systemd-rc-local-generator[39948]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:37:35 compute-1 sudo[39917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:36 compute-1 sudo[40105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfquhbzduwuzeuiscrzmimccxpufyauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405055.8579643-1369-76120038758191/AnsiballZ_systemd.py'
Oct 02 11:37:36 compute-1 sudo[40105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:36 compute-1 python3.9[40107]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:37:36 compute-1 systemd[1]: Reloading.
Oct 02 11:37:36 compute-1 systemd-rc-local-generator[40139]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:37:36 compute-1 sudo[40105]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:37 compute-1 sudo[40295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orcyxbcxmmidfsszrjrbchdfexkfznnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405057.210969-1416-5294430412312/AnsiballZ_command.py'
Oct 02 11:37:37 compute-1 sudo[40295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:37 compute-1 python3.9[40297]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:37 compute-1 sudo[40295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-1 sudo[40448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tccnzojdsxlnqjvnezqnztjbohcdeinj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405057.9319615-1440-128952151406041/AnsiballZ_command.py'
Oct 02 11:37:38 compute-1 sudo[40448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:38 compute-1 python3.9[40450]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:38 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 02 11:37:38 compute-1 sudo[40448]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:38 compute-1 sudo[40601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-popflkkfgmsnhqxiogayszlhpszaiujp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405058.6156743-1464-272449726108777/AnsiballZ_command.py'
Oct 02 11:37:38 compute-1 sudo[40601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:39 compute-1 python3.9[40603]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:40 compute-1 sudo[40601]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:42 compute-1 sudo[40763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmjirqaqgkwzsfyfucvygzjrsosxluv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405061.8406625-1488-177451759965021/AnsiballZ_command.py'
Oct 02 11:37:42 compute-1 sudo[40763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:42 compute-1 python3.9[40765]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:37:42 compute-1 sudo[40763]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:43 compute-1 sudo[40916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jawxvtulbeutoypygdnwzyebwxhhkvkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405063.0029469-1512-136000792596494/AnsiballZ_systemd.py'
Oct 02 11:37:43 compute-1 sudo[40916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:43 compute-1 python3.9[40918]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:37:43 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 02 11:37:43 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Oct 02 11:37:43 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Oct 02 11:37:43 compute-1 systemd[1]: Starting Apply Kernel Variables...
Oct 02 11:37:43 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 02 11:37:43 compute-1 systemd[1]: Finished Apply Kernel Variables.
Oct 02 11:37:43 compute-1 sudo[40916]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:44 compute-1 sshd-session[27927]: Connection closed by 192.168.122.30 port 34806
Oct 02 11:37:44 compute-1 sshd-session[27924]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:37:44 compute-1 systemd-logind[795]: Session 10 logged out. Waiting for processes to exit.
Oct 02 11:37:44 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Oct 02 11:37:44 compute-1 systemd[1]: session-10.scope: Consumed 2min 13.150s CPU time.
Oct 02 11:37:44 compute-1 systemd-logind[795]: Removed session 10.
Oct 02 11:37:50 compute-1 sshd-session[40948]: Accepted publickey for zuul from 192.168.122.30 port 60760 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:37:50 compute-1 systemd-logind[795]: New session 11 of user zuul.
Oct 02 11:37:50 compute-1 systemd[1]: Started Session 11 of User zuul.
Oct 02 11:37:50 compute-1 sshd-session[40948]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:37:51 compute-1 python3.9[41101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:37:52 compute-1 sudo[41255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auouapcgpsrgubtenzmpnlyhwsscxoym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405072.3183908-74-66648191719279/AnsiballZ_getent.py'
Oct 02 11:37:52 compute-1 sudo[41255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:53 compute-1 python3.9[41257]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 11:37:53 compute-1 sudo[41255]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:53 compute-1 sudo[41408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbymwtkhqbvhgunlefppookuyqrawtrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405073.2315965-98-190539684353680/AnsiballZ_group.py'
Oct 02 11:37:53 compute-1 sudo[41408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:54 compute-1 python3.9[41410]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:37:54 compute-1 groupadd[41411]: group added to /etc/group: name=openvswitch, GID=42476
Oct 02 11:37:54 compute-1 groupadd[41411]: group added to /etc/gshadow: name=openvswitch
Oct 02 11:37:54 compute-1 groupadd[41411]: new group: name=openvswitch, GID=42476
Oct 02 11:37:54 compute-1 sudo[41408]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:55 compute-1 sudo[41566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugpztzkqdnusxcmeblwhetibcbnqlme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405074.6937022-122-105042093180315/AnsiballZ_user.py'
Oct 02 11:37:55 compute-1 sudo[41566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:55 compute-1 python3.9[41568]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:37:55 compute-1 useradd[41570]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:37:55 compute-1 useradd[41570]: add 'openvswitch' to group 'hugetlbfs'
Oct 02 11:37:55 compute-1 useradd[41570]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 02 11:37:55 compute-1 sudo[41566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:56 compute-1 sudo[41726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvowpzmpvvqcfdhjphicmtgiivsddhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405076.1968982-152-188137740589492/AnsiballZ_setup.py'
Oct 02 11:37:56 compute-1 sudo[41726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:56 compute-1 python3.9[41728]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:37:57 compute-1 sudo[41726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:37:57 compute-1 sudo[41810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztogrvqfrkggobtplqorlbqsdvnhdeed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405076.1968982-152-188137740589492/AnsiballZ_dnf.py'
Oct 02 11:37:57 compute-1 sudo[41810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:37:57 compute-1 python3.9[41812]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:37:59 compute-1 sudo[41810]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:00 compute-1 sudo[41974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynougtaeqnsljdbzbdrebtkaklgwyjmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405080.012363-194-226744827282390/AnsiballZ_dnf.py'
Oct 02 11:38:00 compute-1 sudo[41974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:00 compute-1 python3.9[41976]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:14 compute-1 kernel: SELinux:  Converting 2723 SID table entries...
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:38:14 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:38:14 compute-1 groupadd[42000]: group added to /etc/group: name=unbound, GID=993
Oct 02 11:38:14 compute-1 groupadd[42000]: group added to /etc/gshadow: name=unbound
Oct 02 11:38:14 compute-1 groupadd[42000]: new group: name=unbound, GID=993
Oct 02 11:38:14 compute-1 useradd[42007]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 02 11:38:14 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 02 11:38:14 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 02 11:38:16 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:38:16 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:38:16 compute-1 systemd[1]: Reloading.
Oct 02 11:38:16 compute-1 systemd-rc-local-generator[42506]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:38:16 compute-1 systemd-sysv-generator[42509]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:38:16 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:38:18 compute-1 sudo[41974]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:18 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:38:18 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:38:18 compute-1 systemd[1]: run-re67a3717daa64f18ab4f6825ac56389f.service: Deactivated successfully.
Oct 02 11:38:18 compute-1 sudo[43077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayuojcvtntgdrvqiwrvckspfnrkokvuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405098.1911323-218-94307814544747/AnsiballZ_systemd.py'
Oct 02 11:38:18 compute-1 sudo[43077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:19 compute-1 python3.9[43079]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:38:19 compute-1 systemd[1]: Reloading.
Oct 02 11:38:19 compute-1 systemd-sysv-generator[43113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:38:19 compute-1 systemd-rc-local-generator[43110]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:38:19 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Oct 02 11:38:19 compute-1 chown[43121]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 02 11:38:19 compute-1 ovs-ctl[43126]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 02 11:38:19 compute-1 ovs-ctl[43126]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 02 11:38:19 compute-1 ovs-ctl[43126]: Starting ovsdb-server [  OK  ]
Oct 02 11:38:19 compute-1 ovs-vsctl[43175]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 02 11:38:19 compute-1 ovs-vsctl[43193]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"db222192-8da1-4f7c-972d-dc680c3e6630\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 02 11:38:19 compute-1 ovs-ctl[43126]: Configuring Open vSwitch system IDs [  OK  ]
Oct 02 11:38:19 compute-1 ovs-ctl[43126]: Enabling remote OVSDB managers [  OK  ]
Oct 02 11:38:19 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Oct 02 11:38:19 compute-1 ovs-vsctl[43201]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 02 11:38:19 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 02 11:38:19 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 02 11:38:19 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 02 11:38:19 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Oct 02 11:38:19 compute-1 ovs-ctl[43246]: Inserting openvswitch module [  OK  ]
Oct 02 11:38:19 compute-1 ovs-ctl[43215]: Starting ovs-vswitchd [  OK  ]
Oct 02 11:38:19 compute-1 ovs-vsctl[43263]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 02 11:38:19 compute-1 ovs-ctl[43215]: Enabling remote OVSDB managers [  OK  ]
Oct 02 11:38:19 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 02 11:38:19 compute-1 systemd[1]: Starting Open vSwitch...
Oct 02 11:38:19 compute-1 systemd[1]: Finished Open vSwitch.
Oct 02 11:38:20 compute-1 sudo[43077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:20 compute-1 python3.9[43415]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:21 compute-1 sudo[43565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltlulvuiyrczcfjolnunccrrowoqywo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405101.0912511-272-202957031477133/AnsiballZ_sefcontext.py'
Oct 02 11:38:21 compute-1 sudo[43565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:21 compute-1 python3.9[43567]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 11:38:23 compute-1 kernel: SELinux:  Converting 2737 SID table entries...
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:38:23 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:38:23 compute-1 sudo[43565]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:24 compute-1 python3.9[43722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:38:25 compute-1 sudo[43878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfixatsfbrqqiefqinlqftjmzmrvrqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405105.205872-326-95009083102429/AnsiballZ_dnf.py'
Oct 02 11:38:25 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 02 11:38:25 compute-1 sudo[43878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:25 compute-1 python3.9[43880]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:27 compute-1 sudo[43878]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:27 compute-1 sudo[44031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjcvzbrqpsqagzyapdmsbbovxatdemo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405107.2956798-350-31954802571960/AnsiballZ_command.py'
Oct 02 11:38:27 compute-1 sudo[44031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:27 compute-1 python3.9[44033]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:38:28 compute-1 sudo[44031]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:29 compute-1 sudo[44318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbomatkfqqysihslqnixnyoxtbztvhfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405108.82422-374-53902987543992/AnsiballZ_file.py'
Oct 02 11:38:29 compute-1 sudo[44318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:29 compute-1 python3.9[44320]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:38:29 compute-1 sudo[44318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:30 compute-1 python3.9[44470]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:38:30 compute-1 sudo[44622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szygewsbojjrddbuiicwkdsguzvfcevu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405110.4665453-422-133276388189192/AnsiballZ_dnf.py'
Oct 02 11:38:30 compute-1 sudo[44622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:30 compute-1 python3.9[44624]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:33 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:38:33 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:38:33 compute-1 systemd[1]: Reloading.
Oct 02 11:38:33 compute-1 systemd-sysv-generator[44668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:38:33 compute-1 systemd-rc-local-generator[44665]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:38:33 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:38:33 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:38:33 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:38:33 compute-1 systemd[1]: run-re57caa1180704f14b823766298bd8525.service: Deactivated successfully.
Oct 02 11:38:33 compute-1 sudo[44622]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:34 compute-1 sudo[44940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfbbswclhwusljeumoiyjkbzbeqsnbwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405114.063429-446-273765742501/AnsiballZ_systemd.py'
Oct 02 11:38:34 compute-1 sudo[44940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:34 compute-1 python3.9[44942]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:38:34 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 02 11:38:34 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Oct 02 11:38:34 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Oct 02 11:38:34 compute-1 systemd[1]: Stopping Network Manager...
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7080] caught SIGTERM, shutting down normally.
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): canceled DHCP transaction
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7093] dhcp4 (eth0): state changed no lease
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7095] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 11:38:34 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:38:34 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:38:34 compute-1 NetworkManager[3976]: <info>  [1759405114.7657] exiting (success)
Oct 02 11:38:34 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 02 11:38:34 compute-1 systemd[1]: Stopped Network Manager.
Oct 02 11:38:34 compute-1 systemd[1]: NetworkManager.service: Consumed 13.780s CPU time, 4.1M memory peak, read 0B from disk, written 35.0K to disk.
Oct 02 11:38:34 compute-1 systemd[1]: Starting Network Manager...
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.8328] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:37dcc26c-0803-4cf1-8993-af1de3c457fe)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.8330] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.8381] manager[0x55ce296ae090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 02 11:38:34 compute-1 systemd[1]: Starting Hostname Service...
Oct 02 11:38:34 compute-1 systemd[1]: Started Hostname Service.
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9223] hostname: hostname: using hostnamed
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9224] hostname: static hostname changed from (none) to "compute-1"
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9230] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9234] manager[0x55ce296ae090]: rfkill: Wi-Fi hardware radio set enabled
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9235] manager[0x55ce296ae090]: rfkill: WWAN hardware radio set enabled
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9258] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9269] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9269] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9270] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9270] manager: Networking is enabled by state file
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9273] settings: Loaded settings plugin: keyfile (internal)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9277] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9306] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9317] dhcp: init: Using DHCP client 'internal'
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9320] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9326] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9332] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9342] device (lo): Activation: starting connection 'lo' (754f1d75-6208-49e2-9f27-490774a22f8d)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9351] device (eth0): carrier: link connected
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9356] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9367] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9368] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9374] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9379] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9385] device (eth1): carrier: link connected
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9389] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9392] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472) (indicated)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9392] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9396] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9402] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct 02 11:38:34 compute-1 systemd[1]: Started Network Manager.
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9412] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9427] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9429] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9430] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9432] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9436] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9438] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9444] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9459] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9467] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9470] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9486] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 systemd[1]: Starting Network Manager Wait Online...
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9518] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9531] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9534] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9537] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9548] device (lo): Activation: successful, device activated.
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9560] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 02 11:38:34 compute-1 sudo[44940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9791] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9802] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9804] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9808] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9811] device (eth1): Activation: successful, device activated.
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9872] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9874] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9878] manager: NetworkManager state is now CONNECTED_SITE
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9882] device (eth0): Activation: successful, device activated.
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9886] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 02 11:38:34 compute-1 NetworkManager[44960]: <info>  [1759405114.9889] manager: startup complete
Oct 02 11:38:34 compute-1 systemd[1]: Finished Network Manager Wait Online.
Oct 02 11:38:35 compute-1 sudo[45166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woovzojeoywdntejnzyqppdoyogxdqum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405115.276472-470-234479964152570/AnsiballZ_dnf.py'
Oct 02 11:38:35 compute-1 sudo[45166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:35 compute-1 python3.9[45168]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:38:45 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:38:45 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:38:45 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:38:45 compute-1 systemd[1]: Reloading.
Oct 02 11:38:45 compute-1 systemd-sysv-generator[45223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:38:45 compute-1 systemd-rc-local-generator[45219]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:38:45 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:38:46 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:38:46 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:38:46 compute-1 systemd[1]: run-r3fbef32659bd4c70875eb59f7a697a60.service: Deactivated successfully.
Oct 02 11:38:46 compute-1 sudo[45166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:47 compute-1 sudo[45628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmrphzsttyqwwzrggptsqmomqblfwgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405127.1294847-506-27442693029418/AnsiballZ_stat.py'
Oct 02 11:38:47 compute-1 sudo[45628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:47 compute-1 python3.9[45630]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:38:47 compute-1 sudo[45628]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:48 compute-1 sudo[45780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqaordnplvjexxyqfqotntxexkvxcjcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405127.834392-533-70682755377611/AnsiballZ_ini_file.py'
Oct 02 11:38:48 compute-1 sudo[45780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:48 compute-1 python3.9[45782]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:48 compute-1 sudo[45780]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:48 compute-1 sudo[45934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpzwzxvxliuotyhgdavrjfywodvorxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405128.7753243-563-178343775795057/AnsiballZ_ini_file.py'
Oct 02 11:38:48 compute-1 sudo[45934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:49 compute-1 python3.9[45936]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:49 compute-1 sudo[45934]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:49 compute-1 sudo[46086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neqllljorsjzsasscqpqylwoxmnygsiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405129.3300223-563-77612724290879/AnsiballZ_ini_file.py'
Oct 02 11:38:49 compute-1 sudo[46086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:49 compute-1 python3.9[46088]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:49 compute-1 sudo[46086]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:50 compute-1 sudo[46238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zotqdeyvluihkwgxadhapuqqsyypyipk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405129.9414074-608-77213874064354/AnsiballZ_ini_file.py'
Oct 02 11:38:50 compute-1 sudo[46238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:50 compute-1 python3.9[46240]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:50 compute-1 sudo[46238]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:50 compute-1 sudo[46390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqhlbhikrspupcbigqpeughpvsezelhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405130.5518415-608-80602788501977/AnsiballZ_ini_file.py'
Oct 02 11:38:50 compute-1 sudo[46390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:51 compute-1 python3.9[46392]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:51 compute-1 sudo[46390]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:51 compute-1 sudo[46542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtbtcmfmbghkrimbvouwnkiwerncsast ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405131.198894-653-279225655378820/AnsiballZ_stat.py'
Oct 02 11:38:51 compute-1 sudo[46542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:51 compute-1 python3.9[46544]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:38:51 compute-1 sudo[46542]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:52 compute-1 sudo[46665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwtzuflwkkmshzdisteqeltwljtrkhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405131.198894-653-279225655378820/AnsiballZ_copy.py'
Oct 02 11:38:52 compute-1 sudo[46665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:52 compute-1 python3.9[46667]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405131.198894-653-279225655378820/.source _original_basename=.jrqkg2q8 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:52 compute-1 sudo[46665]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:52 compute-1 sudo[46817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqlhxgvgimibehdmklclnkmmezkctht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405132.5384035-698-141446423890649/AnsiballZ_file.py'
Oct 02 11:38:52 compute-1 sudo[46817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:53 compute-1 python3.9[46819]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:53 compute-1 sudo[46817]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:53 compute-1 sudo[46969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcvrsuoxygebejadegvxvkspupeqglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405133.1974843-722-57585017671866/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 02 11:38:53 compute-1 sudo[46969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:53 compute-1 python3.9[46971]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 02 11:38:53 compute-1 sudo[46969]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:54 compute-1 sudo[47121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmbeecurzhagronjsqspqgdjnozezbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405134.023034-749-173758410281967/AnsiballZ_file.py'
Oct 02 11:38:54 compute-1 sudo[47121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:54 compute-1 python3.9[47123]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:38:54 compute-1 sudo[47121]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:55 compute-1 sudo[47273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grurgtszpqksqssfxzbzluwdowoefybi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405134.830948-779-245020265787456/AnsiballZ_stat.py'
Oct 02 11:38:55 compute-1 sudo[47273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:55 compute-1 sudo[47273]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:55 compute-1 sudo[47396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lermjwjqzazufnmwbmdmwqmgxnhrsprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405134.830948-779-245020265787456/AnsiballZ_copy.py'
Oct 02 11:38:55 compute-1 sudo[47396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:55 compute-1 sudo[47396]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:56 compute-1 sudo[47548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpxmtybjaqdlwzpftsfjzhtjvbynamrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.0353022-824-10583152493184/AnsiballZ_slurp.py'
Oct 02 11:38:56 compute-1 sudo[47548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:56 compute-1 python3.9[47550]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 02 11:38:56 compute-1 sudo[47548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:57 compute-1 sudo[47723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odmykjovjlutbiyiqjsmzgkhygwgypnb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.967407-851-125005158112260/async_wrapper.py j387925174704 300 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.967407-851-125005158112260/AnsiballZ_edpm_os_net_config.py _'
Oct 02 11:38:57 compute-1 sudo[47723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:38:57 compute-1 ansible-async_wrapper.py[47725]: Invoked with j387925174704 300 /home/zuul/.ansible/tmp/ansible-tmp-1759405136.967407-851-125005158112260/AnsiballZ_edpm_os_net_config.py _
Oct 02 11:38:57 compute-1 ansible-async_wrapper.py[47728]: Starting module and watcher
Oct 02 11:38:57 compute-1 ansible-async_wrapper.py[47728]: Start watching 47729 (300)
Oct 02 11:38:57 compute-1 ansible-async_wrapper.py[47729]: Start module (47729)
Oct 02 11:38:57 compute-1 ansible-async_wrapper.py[47725]: Return async_wrapper task started.
Oct 02 11:38:57 compute-1 sudo[47723]: pam_unix(sudo:session): session closed for user root
Oct 02 11:38:58 compute-1 python3.9[47730]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 02 11:38:58 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 02 11:38:58 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 02 11:38:58 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 02 11:38:58 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 02 11:38:58 compute-1 kernel: cfg80211: failed to load regulatory.db
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9211] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9234] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9748] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9750] audit: op="connection-add" uuid="b6560eb7-aa7a-46a8-9e22-a94e8e9b623b" name="br-ex-br" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9764] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9766] audit: op="connection-add" uuid="3762f612-f6fb-4636-9dc5-532fb7368103" name="br-ex-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9777] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9778] audit: op="connection-add" uuid="89fbbef3-d049-48ee-8abd-263cc30f2ff3" name="eth1-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9788] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9790] audit: op="connection-add" uuid="2159f9f4-9bc0-437b-9e3b-60177e3009fa" name="vlan20-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9799] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9801] audit: op="connection-add" uuid="45287b28-64e8-4216-86a4-7d34cc8270a0" name="vlan21-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9810] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9812] audit: op="connection-add" uuid="25d7d2cf-d530-46a5-a139-82bba86ce1d3" name="vlan22-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9821] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9823] audit: op="connection-add" uuid="e7fe5191-2592-4c84-8d54-6bb73b938742" name="vlan23-port" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9839] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=47731 uid=0 result="success"
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9852] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 02 11:38:59 compute-1 NetworkManager[44960]: <info>  [1759405139.9854] audit: op="connection-add" uuid="d56d70f6-073e-4c01-9100-96c1bdf35220" name="br-ex-if" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0395] audit: op="connection-update" uuid="f0e350ee-ba0f-53f3-aecd-b1bd05c74472" name="ci-private-network" args="ovs-external-ids.data,ovs-interface.type,ipv4.routing-rules,ipv4.addresses,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.method,connection.controller,connection.master,connection.timestamp,connection.slave-type,connection.port-type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.method" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0418] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0420] audit: op="connection-add" uuid="d3d9e72f-d955-41c4-aa48-a68a5acce9bd" name="vlan20-if" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0443] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0444] audit: op="connection-add" uuid="2f1a5d35-8405-4392-b059-83d53872764f" name="vlan21-if" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0460] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0463] audit: op="connection-add" uuid="1988b30a-4113-479f-963f-36b51c4465d8" name="vlan22-if" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0480] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0482] audit: op="connection-add" uuid="aa0b45d7-8959-47ce-ab65-dc4ef312eca5" name="vlan23-if" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0493] audit: op="connection-delete" uuid="6e2af74a-3979-3958-b10b-5fcb5c84b968" name="Wired connection 1" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0504] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0514] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0518] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b6560eb7-aa7a-46a8-9e22-a94e8e9b623b)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0519] audit: op="connection-activate" uuid="b6560eb7-aa7a-46a8-9e22-a94e8e9b623b" name="br-ex-br" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0520] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0527] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0530] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (3762f612-f6fb-4636-9dc5-532fb7368103)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0532] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0536] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0540] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (89fbbef3-d049-48ee-8abd-263cc30f2ff3)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0542] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0548] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0551] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2159f9f4-9bc0-437b-9e3b-60177e3009fa)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0553] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0560] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0565] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (45287b28-64e8-4216-86a4-7d34cc8270a0)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0567] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0573] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0577] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (25d7d2cf-d530-46a5-a139-82bba86ce1d3)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0579] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0586] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0590] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (e7fe5191-2592-4c84-8d54-6bb73b938742)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0591] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0593] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0595] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0601] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0605] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0609] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (d56d70f6-073e-4c01-9100-96c1bdf35220)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0610] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0614] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0616] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0619] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0621] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0635] device (eth1): disconnecting for new activation request.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0637] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0640] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0642] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0644] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0648] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0652] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0656] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (d3d9e72f-d955-41c4-aa48-a68a5acce9bd)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0657] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0660] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0663] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0664] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0667] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0670] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0673] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2f1a5d35-8405-4392-b059-83d53872764f)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0674] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0677] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0678] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0679] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0682] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0685] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0688] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1988b30a-4113-479f-963f-36b51c4465d8)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0689] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0693] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0695] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0696] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0698] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0701] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0705] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (aa0b45d7-8959-47ce-ab65-dc4ef312eca5)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0706] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0708] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0711] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0713] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0715] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0725] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0726] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0729] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0731] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0736] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0739] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0743] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0747] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0749] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 kernel: ovs-system: entered promiscuous mode
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0753] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0756] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0758] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0759] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0764] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0768] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0771] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0774] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 kernel: Timeout policy base is empty
Oct 02 11:39:00 compute-1 systemd-udevd[47735]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0779] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0783] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0787] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0789] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0792] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0795] dhcp4 (eth0): canceled DHCP transaction
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0796] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0796] dhcp4 (eth0): state changed no lease
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0798] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 02 11:39:00 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0821] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0828] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47731 uid=0 result="fail" reason="Device is not activated"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0833] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0851] dhcp4 (eth0): state changed new lease, address=38.129.56.3
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0856] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0909] device (eth1): disconnecting for new activation request.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0910] audit: op="connection-activate" uuid="f0e350ee-ba0f-53f3-aecd-b1bd05c74472" name="ci-private-network" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0921] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.0929] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1051] device (eth1): Activation: starting connection 'ci-private-network' (f0e350ee-ba0f-53f3-aecd-b1bd05c74472)
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1059] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1062] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1074] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1076] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1083] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1087] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1091] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1092] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1093] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1093] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1095] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1096] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1097] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1100] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1107] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1110] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1114] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1117] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1124] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1127] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1131] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1135] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1138] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1140] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1144] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1148] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1154] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1157] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 kernel: br-ex: entered promiscuous mode
Oct 02 11:39:00 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 02 11:39:00 compute-1 kernel: vlan22: entered promiscuous mode
Oct 02 11:39:00 compute-1 systemd-udevd[47734]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:39:00 compute-1 kernel: vlan21: entered promiscuous mode
Oct 02 11:39:00 compute-1 kernel: vlan23: entered promiscuous mode
Oct 02 11:39:00 compute-1 systemd-udevd[47843]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:39:00 compute-1 kernel: vlan20: entered promiscuous mode
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1673] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1681] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1688] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1694] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1700] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1701] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1716] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1722] device (eth1): Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1767] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1772] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1778] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1783] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1789] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1801] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1803] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1804] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1807] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1812] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1817] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1821] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1825] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1829] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1832] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1833] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1836] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1841] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1846] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 02 11:39:00 compute-1 NetworkManager[44960]: <info>  [1759405140.1851] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 02 11:39:01 compute-1 NetworkManager[44960]: <info>  [1759405141.4061] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct 02 11:39:01 compute-1 sudo[48087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mleigsreopxgkhshjngcbmixcsnnpzew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405141.0174556-851-100930019920111/AnsiballZ_async_status.py'
Oct 02 11:39:01 compute-1 sudo[48087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:01 compute-1 NetworkManager[44960]: <info>  [1759405141.5281] checkpoint[0x55ce29684950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 02 11:39:01 compute-1 NetworkManager[44960]: <info>  [1759405141.5283] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47731 uid=0 result="success"
Oct 02 11:39:01 compute-1 python3.9[48089]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=status _async_dir=/root/.ansible_async
Oct 02 11:39:01 compute-1 sudo[48087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:01 compute-1 NetworkManager[44960]: <info>  [1759405141.8053] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct 02 11:39:01 compute-1 NetworkManager[44960]: <info>  [1759405141.8067] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.0647] audit: op="networking-control" arg="global-dns-configuration" pid=47731 uid=0 result="success"
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.1202] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.1742] audit: op="networking-control" arg="global-dns-configuration" pid=47731 uid=0 result="success"
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.1776] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.3273] checkpoint[0x55ce29684a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 02 11:39:02 compute-1 NetworkManager[44960]: <info>  [1759405142.3281] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47731 uid=0 result="success"
Oct 02 11:39:02 compute-1 ansible-async_wrapper.py[47729]: Module complete (47729)
Oct 02 11:39:02 compute-1 ansible-async_wrapper.py[47728]: Done in kid B.
Oct 02 11:39:04 compute-1 sudo[48193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdernawztqowasgtbcymabdebprpakdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405141.0174556-851-100930019920111/AnsiballZ_async_status.py'
Oct 02 11:39:04 compute-1 sudo[48193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:04 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 11:39:05 compute-1 python3.9[48195]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=status _async_dir=/root/.ansible_async
Oct 02 11:39:05 compute-1 sudo[48193]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:05 compute-1 sudo[48295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igtdoptjpmwvugvahccujeoewybfqsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405141.0174556-851-100930019920111/AnsiballZ_async_status.py'
Oct 02 11:39:05 compute-1 sudo[48295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:05 compute-1 python3.9[48297]: ansible-ansible.legacy.async_status Invoked with jid=j387925174704.47725 mode=cleanup _async_dir=/root/.ansible_async
Oct 02 11:39:05 compute-1 sudo[48295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:06 compute-1 sudo[48447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiukufulbeocwmyccrodbidehmlulnho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405145.9076734-932-107755002570456/AnsiballZ_stat.py'
Oct 02 11:39:06 compute-1 sudo[48447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:06 compute-1 python3.9[48449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:06 compute-1 sudo[48447]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:06 compute-1 sudo[48570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pejquwvcbkiecyfwgrljvzrrostvjstj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405145.9076734-932-107755002570456/AnsiballZ_copy.py'
Oct 02 11:39:06 compute-1 sudo[48570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:06 compute-1 python3.9[48572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405145.9076734-932-107755002570456/.source.returncode _original_basename=.j6ypej5h follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:06 compute-1 sudo[48570]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:07 compute-1 sudo[48722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyscsmzusonsotrkoaiobuleqpaadpja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405147.214209-980-114445389337510/AnsiballZ_stat.py'
Oct 02 11:39:07 compute-1 sudo[48722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:07 compute-1 python3.9[48724]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:07 compute-1 sudo[48722]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:07 compute-1 sudo[48845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpeqzgdotggpzhcgihvlcjezqvifmnkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405147.214209-980-114445389337510/AnsiballZ_copy.py'
Oct 02 11:39:07 compute-1 sudo[48845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:08 compute-1 python3.9[48847]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405147.214209-980-114445389337510/.source.cfg _original_basename=.7ke4oh7p follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:08 compute-1 sudo[48845]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:08 compute-1 sudo[48998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uucapdrwcnafqbsdxhfvhqseashbdyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405148.5065055-1025-231968179726191/AnsiballZ_systemd.py'
Oct 02 11:39:08 compute-1 sudo[48998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:09 compute-1 python3.9[49000]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:39:09 compute-1 systemd[1]: Reloading Network Manager...
Oct 02 11:39:09 compute-1 NetworkManager[44960]: <info>  [1759405149.2131] audit: op="reload" arg="0" pid=49004 uid=0 result="success"
Oct 02 11:39:09 compute-1 NetworkManager[44960]: <info>  [1759405149.2137] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 02 11:39:09 compute-1 systemd[1]: Reloaded Network Manager.
Oct 02 11:39:09 compute-1 sudo[48998]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:09 compute-1 sshd-session[40951]: Connection closed by 192.168.122.30 port 60760
Oct 02 11:39:09 compute-1 sshd-session[40948]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:39:09 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Oct 02 11:39:09 compute-1 systemd[1]: session-11.scope: Consumed 48.458s CPU time.
Oct 02 11:39:09 compute-1 systemd-logind[795]: Session 11 logged out. Waiting for processes to exit.
Oct 02 11:39:09 compute-1 systemd-logind[795]: Removed session 11.
Oct 02 11:39:14 compute-1 sshd-session[49035]: Accepted publickey for zuul from 192.168.122.30 port 34738 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:39:14 compute-1 systemd-logind[795]: New session 12 of user zuul.
Oct 02 11:39:14 compute-1 systemd[1]: Started Session 12 of User zuul.
Oct 02 11:39:14 compute-1 sshd-session[49035]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:39:15 compute-1 python3.9[49188]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:16 compute-1 python3.9[49342]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:18 compute-1 python3.9[49536]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:39:18 compute-1 sshd-session[49038]: Connection closed by 192.168.122.30 port 34738
Oct 02 11:39:18 compute-1 sshd-session[49035]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:39:18 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Oct 02 11:39:18 compute-1 systemd[1]: session-12.scope: Consumed 2.188s CPU time.
Oct 02 11:39:18 compute-1 systemd-logind[795]: Session 12 logged out. Waiting for processes to exit.
Oct 02 11:39:18 compute-1 systemd-logind[795]: Removed session 12.
Oct 02 11:39:19 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 11:39:24 compute-1 sshd-session[49564]: Accepted publickey for zuul from 192.168.122.30 port 56570 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:39:24 compute-1 systemd-logind[795]: New session 13 of user zuul.
Oct 02 11:39:24 compute-1 systemd[1]: Started Session 13 of User zuul.
Oct 02 11:39:24 compute-1 sshd-session[49564]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:39:25 compute-1 python3.9[49718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:26 compute-1 python3.9[49872]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:26 compute-1 sudo[50026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctreejaacwvbcamqlfpniejjpznsdcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405166.5996997-86-77033472091546/AnsiballZ_setup.py'
Oct 02 11:39:26 compute-1 sudo[50026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:27 compute-1 python3.9[50028]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:27 compute-1 sudo[50026]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:27 compute-1 sudo[50111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvlprngzbkhlmyshrvcprngbttybcrao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405166.5996997-86-77033472091546/AnsiballZ_dnf.py'
Oct 02 11:39:27 compute-1 sudo[50111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:28 compute-1 python3.9[50113]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:39:29 compute-1 sudo[50111]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:30 compute-1 sudo[50264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrebsmmwagagftuqlyqmlvhdtdoviups ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405169.748659-122-156802434027707/AnsiballZ_setup.py'
Oct 02 11:39:30 compute-1 sudo[50264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:30 compute-1 python3.9[50266]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:30 compute-1 sudo[50264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:31 compute-1 sudo[50460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiskisntmovlhotxhixcaucmjtxqrzjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405170.950659-155-130027508261345/AnsiballZ_file.py'
Oct 02 11:39:31 compute-1 sudo[50460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:31 compute-1 python3.9[50462]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:31 compute-1 sudo[50460]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:32 compute-1 sudo[50612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvqmznrfwhkflyntpikswiehnllbvpod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405171.8056357-179-140720543564196/AnsiballZ_command.py'
Oct 02 11:39:32 compute-1 sudo[50612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:32 compute-1 python3.9[50614]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:39:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat2047843413-merged.mount: Deactivated successfully.
Oct 02 11:39:32 compute-1 podman[50615]: 2025-10-02 11:39:32.53956629 +0000 UTC m=+0.092734561 system refresh
Oct 02 11:39:32 compute-1 sudo[50612]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:33 compute-1 sudo[50774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdibwndfrekzhspqrhdfpmzmnmadrxog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405172.7161007-203-198325121287530/AnsiballZ_stat.py'
Oct 02 11:39:33 compute-1 sudo[50774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:33 compute-1 python3.9[50776]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:33 compute-1 sudo[50774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:33 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:39:33 compute-1 sudo[50897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxrdsbwlsyiyxfpjdjtsrrvqsgxxnsio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405172.7161007-203-198325121287530/AnsiballZ_copy.py'
Oct 02 11:39:33 compute-1 sudo[50897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:33 compute-1 python3.9[50899]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405172.7161007-203-198325121287530/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4b19eedd85b966592ec31feebbc0c01623d2244e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:34 compute-1 sudo[50897]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:34 compute-1 sudo[51049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnfjaxdapjtwkbmmieafffnxnqhypay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405174.1822789-248-101404481579571/AnsiballZ_stat.py'
Oct 02 11:39:34 compute-1 sudo[51049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:34 compute-1 python3.9[51051]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:34 compute-1 sudo[51049]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:35 compute-1 sudo[51172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwynrqvyzdbekjtgvuieauwcdxobekr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405174.1822789-248-101404481579571/AnsiballZ_copy.py'
Oct 02 11:39:35 compute-1 sudo[51172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:35 compute-1 python3.9[51174]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405174.1822789-248-101404481579571/.source.conf follow=False _original_basename=registries.conf.j2 checksum=2f54462ce13fc7f0e9dc5b3970581b7761b51f34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:35 compute-1 sudo[51172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:35 compute-1 sudo[51324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrbnsxjigmvxcsqtwzdqfgsvzurjifmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405175.454488-296-152843972224194/AnsiballZ_ini_file.py'
Oct 02 11:39:35 compute-1 sudo[51324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:36 compute-1 python3.9[51326]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:36 compute-1 sudo[51324]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:36 compute-1 sudo[51476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqldlzpjfodhsinfpxyvagoenmszlcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405176.3127675-296-153936843406559/AnsiballZ_ini_file.py'
Oct 02 11:39:36 compute-1 sudo[51476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:36 compute-1 python3.9[51478]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:36 compute-1 sudo[51476]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:37 compute-1 sudo[51628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvypmedwtjayrnvogkopfpimkgpxbxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405176.8991785-296-65259628573344/AnsiballZ_ini_file.py'
Oct 02 11:39:37 compute-1 sudo[51628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:37 compute-1 python3.9[51630]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:37 compute-1 sudo[51628]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:37 compute-1 sudo[51780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqrsuzlpycsfpyvnjbzcgmxohahafwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405177.5123441-296-259179238589233/AnsiballZ_ini_file.py'
Oct 02 11:39:37 compute-1 sudo[51780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:37 compute-1 python3.9[51782]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:39:37 compute-1 sudo[51780]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:38 compute-1 sudo[51932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnleocuqrxcyiahxkpyxydldppfvqzgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405178.3746414-389-212865723547008/AnsiballZ_dnf.py'
Oct 02 11:39:38 compute-1 sudo[51932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:38 compute-1 python3.9[51934]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:39:40 compute-1 sudo[51932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:40 compute-1 sudo[52085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pntobmgcpazlsclfdltzytsmalwraswz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405180.542073-422-7204767871210/AnsiballZ_setup.py'
Oct 02 11:39:40 compute-1 sudo[52085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:41 compute-1 python3.9[52087]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:39:41 compute-1 sudo[52085]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:41 compute-1 sudo[52239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fglunaugnbckwuhdbmunsobhadafjhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405181.3253896-446-214032099253667/AnsiballZ_stat.py'
Oct 02 11:39:41 compute-1 sudo[52239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:41 compute-1 python3.9[52241]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:39:41 compute-1 sudo[52239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:42 compute-1 sudo[52391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibwvmuhpfagnrjqouucfjdtzlnuiiqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405182.0098839-473-138429812571479/AnsiballZ_stat.py'
Oct 02 11:39:42 compute-1 sudo[52391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:42 compute-1 python3.9[52393]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:39:42 compute-1 sudo[52391]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:43 compute-1 sudo[52543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqwlzlfjcdvjdvbvewkusqgxntqgozej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405182.7598035-503-11900252048605/AnsiballZ_service_facts.py'
Oct 02 11:39:43 compute-1 sudo[52543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:43 compute-1 python3.9[52545]: ansible-service_facts Invoked
Oct 02 11:39:43 compute-1 network[52562]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:39:43 compute-1 network[52563]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:39:43 compute-1 network[52564]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:39:45 compute-1 sudo[52543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:47 compute-1 sudo[52849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmjjslepsryeikyakvcofzpfyttxzwal ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759405186.9574678-542-187130279524309/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759405186.9574678-542-187130279524309/args'
Oct 02 11:39:47 compute-1 sudo[52849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:47 compute-1 sudo[52849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:47 compute-1 sudo[53016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbxshkwxfylgtzecldbotubhjjmnjmfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405187.5836315-575-163910645696082/AnsiballZ_dnf.py'
Oct 02 11:39:47 compute-1 sudo[53016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:48 compute-1 python3.9[53018]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:39:49 compute-1 sudo[53016]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:50 compute-1 sudo[53169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmzdkwhbtavcurskewmxnpfajdsajxcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405189.7135432-614-142478012253616/AnsiballZ_package_facts.py'
Oct 02 11:39:50 compute-1 sudo[53169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:50 compute-1 python3.9[53171]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 11:39:50 compute-1 sudo[53169]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:51 compute-1 sudo[53321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaosxiuaahaiglyxedpokuxmmtrxafqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405191.6470292-644-10571626935825/AnsiballZ_stat.py'
Oct 02 11:39:51 compute-1 sudo[53321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:52 compute-1 python3.9[53323]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:52 compute-1 sudo[53321]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:52 compute-1 sudo[53446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqayeimswkivnaxkjladsxmbhuabkhcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405191.6470292-644-10571626935825/AnsiballZ_copy.py'
Oct 02 11:39:52 compute-1 sudo[53446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:52 compute-1 python3.9[53448]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405191.6470292-644-10571626935825/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:52 compute-1 sudo[53446]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:53 compute-1 sudo[53600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfcfftfcnrypydzsiciutlxaknjuvuef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405193.0161536-690-205162396311375/AnsiballZ_stat.py'
Oct 02 11:39:53 compute-1 sudo[53600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:53 compute-1 python3.9[53602]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:39:53 compute-1 sudo[53600]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:53 compute-1 sudo[53725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muxyyhpelmmwykxyqnheaupzomsotzgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405193.0161536-690-205162396311375/AnsiballZ_copy.py'
Oct 02 11:39:53 compute-1 sudo[53725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:54 compute-1 python3.9[53727]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405193.0161536-690-205162396311375/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:54 compute-1 sudo[53725]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:55 compute-1 sudo[53879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vemdqkcfxszetkorohcjmwuhjqodbidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405194.8652055-752-159344148316880/AnsiballZ_lineinfile.py'
Oct 02 11:39:55 compute-1 sudo[53879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:55 compute-1 python3.9[53881]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:39:55 compute-1 sudo[53879]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:56 compute-1 sudo[54033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnplvsrxkoecflkkdternawlwqgceimw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405196.5941186-798-98099354344312/AnsiballZ_setup.py'
Oct 02 11:39:56 compute-1 sudo[54033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:57 compute-1 python3.9[54035]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:57 compute-1 sudo[54033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:57 compute-1 sudo[54117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clhfilgjedxfsaaxtvtvzquyaomevszz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405196.5941186-798-98099354344312/AnsiballZ_systemd.py'
Oct 02 11:39:57 compute-1 sudo[54117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:58 compute-1 python3.9[54119]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:39:58 compute-1 sudo[54117]: pam_unix(sudo:session): session closed for user root
Oct 02 11:39:59 compute-1 sudo[54271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcptaryglzdzocyqkfntfcspmkhkirdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405199.107237-846-155668047463943/AnsiballZ_setup.py'
Oct 02 11:39:59 compute-1 sudo[54271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:39:59 compute-1 python3.9[54273]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:39:59 compute-1 sudo[54271]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:00 compute-1 sudo[54355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adfssbnubmaacfcjazlxunallygaaoch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405199.107237-846-155668047463943/AnsiballZ_systemd.py'
Oct 02 11:40:00 compute-1 sudo[54355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:00 compute-1 python3.9[54357]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:40:00 compute-1 chronyd[802]: chronyd exiting
Oct 02 11:40:00 compute-1 systemd[1]: Stopping NTP client/server...
Oct 02 11:40:00 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Oct 02 11:40:00 compute-1 systemd[1]: Stopped NTP client/server.
Oct 02 11:40:00 compute-1 systemd[1]: Starting NTP client/server...
Oct 02 11:40:00 compute-1 chronyd[54365]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 02 11:40:00 compute-1 chronyd[54365]: Frequency 9.122 +/- 0.334 ppm read from /var/lib/chrony/drift
Oct 02 11:40:00 compute-1 chronyd[54365]: Loaded seccomp filter (level 2)
Oct 02 11:40:00 compute-1 systemd[1]: Started NTP client/server.
Oct 02 11:40:00 compute-1 sudo[54355]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:00 compute-1 sshd-session[49567]: Connection closed by 192.168.122.30 port 56570
Oct 02 11:40:00 compute-1 sshd-session[49564]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:00 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Oct 02 11:40:00 compute-1 systemd[1]: session-13.scope: Consumed 23.814s CPU time.
Oct 02 11:40:00 compute-1 systemd-logind[795]: Session 13 logged out. Waiting for processes to exit.
Oct 02 11:40:00 compute-1 systemd-logind[795]: Removed session 13.
Oct 02 11:40:06 compute-1 sshd-session[54391]: Accepted publickey for zuul from 192.168.122.30 port 56850 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:40:06 compute-1 systemd-logind[795]: New session 14 of user zuul.
Oct 02 11:40:06 compute-1 systemd[1]: Started Session 14 of User zuul.
Oct 02 11:40:06 compute-1 sshd-session[54391]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:40:07 compute-1 sudo[54544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aziswhndafktbmytcijyndtfeuzuyzfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405206.6569595-32-116138473616327/AnsiballZ_file.py'
Oct 02 11:40:07 compute-1 sudo[54544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:07 compute-1 python3.9[54546]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:07 compute-1 sudo[54544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:07 compute-1 sudo[54696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwppodxygyamzbrzrxfbieqlsysodkzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405207.5603087-68-162952480946353/AnsiballZ_stat.py'
Oct 02 11:40:07 compute-1 sudo[54696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:08 compute-1 python3.9[54698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:08 compute-1 sudo[54696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:08 compute-1 sudo[54819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyaouhftmvepxjgcvfakjgddqxrwowux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405207.5603087-68-162952480946353/AnsiballZ_copy.py'
Oct 02 11:40:08 compute-1 sudo[54819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:08 compute-1 python3.9[54821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405207.5603087-68-162952480946353/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:08 compute-1 sudo[54819]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:09 compute-1 sshd-session[54394]: Connection closed by 192.168.122.30 port 56850
Oct 02 11:40:09 compute-1 sshd-session[54391]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:40:09 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Oct 02 11:40:09 compute-1 systemd[1]: session-14.scope: Consumed 1.531s CPU time.
Oct 02 11:40:09 compute-1 systemd-logind[795]: Session 14 logged out. Waiting for processes to exit.
Oct 02 11:40:09 compute-1 systemd-logind[795]: Removed session 14.
Oct 02 11:40:14 compute-1 sshd-session[54846]: Accepted publickey for zuul from 192.168.122.30 port 39804 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:40:14 compute-1 systemd-logind[795]: New session 15 of user zuul.
Oct 02 11:40:14 compute-1 systemd[1]: Started Session 15 of User zuul.
Oct 02 11:40:14 compute-1 sshd-session[54846]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:40:15 compute-1 python3.9[54999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:40:16 compute-1 sudo[55153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntpudyhgyceiopksuxoasflfajhejomv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405216.0584288-65-244065278115637/AnsiballZ_file.py'
Oct 02 11:40:16 compute-1 sudo[55153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:16 compute-1 python3.9[55155]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:16 compute-1 sudo[55153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:17 compute-1 sudo[55328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmecsdqvpnukiosvrqifghkeqgcdbyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405216.9354713-89-157339913528518/AnsiballZ_stat.py'
Oct 02 11:40:17 compute-1 sudo[55328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:17 compute-1 python3.9[55330]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:17 compute-1 sudo[55328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:18 compute-1 sudo[55451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoenkmlxlhaoffknefbbyldivzfvolfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405216.9354713-89-157339913528518/AnsiballZ_copy.py'
Oct 02 11:40:18 compute-1 sudo[55451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:18 compute-1 python3.9[55453]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759405216.9354713-89-157339913528518/.source.json _original_basename=.yqpy6ga0 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:18 compute-1 sudo[55451]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-1 sudo[55603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcvliazvzhjjmglbwwcrcjhsjlakyote ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405218.796078-158-18709057615563/AnsiballZ_stat.py'
Oct 02 11:40:19 compute-1 sudo[55603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:19 compute-1 python3.9[55605]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:19 compute-1 sudo[55603]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:19 compute-1 sudo[55726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtlfuoccdttlmnpvwjmstagqkihkqeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405218.796078-158-18709057615563/AnsiballZ_copy.py'
Oct 02 11:40:19 compute-1 sudo[55726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:19 compute-1 python3.9[55728]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405218.796078-158-18709057615563/.source _original_basename=.b4_ws6dw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:19 compute-1 sudo[55726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:20 compute-1 sudo[55878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhnddbloqegylkvqijtotefejawepix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405220.0074427-206-28810993776053/AnsiballZ_file.py'
Oct 02 11:40:20 compute-1 sudo[55878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:20 compute-1 python3.9[55880]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:40:20 compute-1 sudo[55878]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:20 compute-1 sudo[56030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxbikusixxhhefvufaeuebibwddbebcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405220.6652126-230-29662054966249/AnsiballZ_stat.py'
Oct 02 11:40:20 compute-1 sudo[56030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:21 compute-1 python3.9[56032]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:21 compute-1 sudo[56030]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:21 compute-1 sudo[56153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufgbxpklfqdbevysgymfnobhtxbhhnce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405220.6652126-230-29662054966249/AnsiballZ_copy.py'
Oct 02 11:40:21 compute-1 sudo[56153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:21 compute-1 python3.9[56155]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405220.6652126-230-29662054966249/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:40:21 compute-1 sudo[56153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:22 compute-1 sudo[56305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpaswooiqrgrhbeadeeaxptfqwolygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405221.7876337-230-91997720174840/AnsiballZ_stat.py'
Oct 02 11:40:22 compute-1 sudo[56305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:22 compute-1 python3.9[56307]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:22 compute-1 sudo[56305]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:22 compute-1 sudo[56428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoxfbyhekidgfbsofyeaouhrmiqedrlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405221.7876337-230-91997720174840/AnsiballZ_copy.py'
Oct 02 11:40:22 compute-1 sudo[56428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:22 compute-1 python3.9[56430]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759405221.7876337-230-91997720174840/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:40:22 compute-1 sudo[56428]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:23 compute-1 sudo[56580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeeblreirncpzbfcnpklqbwmktoobgof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405223.0541914-317-238040655944799/AnsiballZ_file.py'
Oct 02 11:40:23 compute-1 sudo[56580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:23 compute-1 python3.9[56582]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:23 compute-1 sudo[56580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:23 compute-1 sudo[56732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdnwlzhxdskfwqzzugpirzsuddptiaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405223.6672528-341-68613096231987/AnsiballZ_stat.py'
Oct 02 11:40:23 compute-1 sudo[56732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:24 compute-1 python3.9[56734]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:24 compute-1 sudo[56732]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:24 compute-1 sudo[56855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdjiufzwqubxzussoiioinrshbwtrxeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405223.6672528-341-68613096231987/AnsiballZ_copy.py'
Oct 02 11:40:24 compute-1 sudo[56855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:24 compute-1 python3.9[56857]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405223.6672528-341-68613096231987/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:24 compute-1 sudo[56855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:25 compute-1 sudo[57007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnuikjjxbxfhnohstkgakeitsepqdgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405224.8593314-386-33140082130427/AnsiballZ_stat.py'
Oct 02 11:40:25 compute-1 sudo[57007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:25 compute-1 python3.9[57009]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:25 compute-1 sudo[57007]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:25 compute-1 sudo[57130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikkvrwxsndrpwwlwgskufltorjhifyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405224.8593314-386-33140082130427/AnsiballZ_copy.py'
Oct 02 11:40:25 compute-1 sudo[57130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:25 compute-1 python3.9[57132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405224.8593314-386-33140082130427/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:25 compute-1 sudo[57130]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:26 compute-1 sudo[57282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupssvddynhwqdflfpdijndhmtddflew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405226.1102283-431-87013136959631/AnsiballZ_systemd.py'
Oct 02 11:40:26 compute-1 sudo[57282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:26 compute-1 python3.9[57284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:40:27 compute-1 systemd[1]: Reloading.
Oct 02 11:40:27 compute-1 systemd-rc-local-generator[57309]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:27 compute-1 systemd-sysv-generator[57314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:27 compute-1 systemd[1]: Reloading.
Oct 02 11:40:27 compute-1 systemd-rc-local-generator[57348]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:27 compute-1 systemd-sysv-generator[57351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:27 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Oct 02 11:40:27 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Oct 02 11:40:27 compute-1 sudo[57282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:28 compute-1 sudo[57508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byuhknrilwesvwftqezycpqtqvqlwjew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405227.7337432-455-223414698389803/AnsiballZ_stat.py'
Oct 02 11:40:28 compute-1 sudo[57508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:28 compute-1 python3.9[57510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:28 compute-1 sudo[57508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:28 compute-1 sudo[57631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjlaiyzihfvrgnkdkmrkqjcajlniseie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405227.7337432-455-223414698389803/AnsiballZ_copy.py'
Oct 02 11:40:28 compute-1 sudo[57631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:28 compute-1 python3.9[57633]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405227.7337432-455-223414698389803/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:28 compute-1 sudo[57631]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:29 compute-1 sudo[57783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgckayfuezxqvidumrdgttflvsgiyahf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405229.0869932-500-197584828609392/AnsiballZ_stat.py'
Oct 02 11:40:29 compute-1 sudo[57783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:29 compute-1 python3.9[57785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:29 compute-1 sudo[57783]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:29 compute-1 sudo[57906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtmskwtcmgzhctdjhjzbfrqojhuvlrxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405229.0869932-500-197584828609392/AnsiballZ_copy.py'
Oct 02 11:40:29 compute-1 sudo[57906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:30 compute-1 python3.9[57908]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405229.0869932-500-197584828609392/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:30 compute-1 sudo[57906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:30 compute-1 sudo[58058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bryaywqplyissdgoixkvqmenvdrvywkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405230.4292288-545-176099768835834/AnsiballZ_systemd.py'
Oct 02 11:40:30 compute-1 sudo[58058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:30 compute-1 python3.9[58060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:40:31 compute-1 systemd[1]: Reloading.
Oct 02 11:40:31 compute-1 systemd-sysv-generator[58091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:31 compute-1 systemd-rc-local-generator[58087]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:31 compute-1 systemd[1]: Reloading.
Oct 02 11:40:31 compute-1 systemd-rc-local-generator[58125]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:31 compute-1 systemd-sysv-generator[58129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:31 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 11:40:31 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:40:31 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:40:31 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 11:40:31 compute-1 sudo[58058]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:32 compute-1 python3.9[58286]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:40:32 compute-1 network[58303]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:40:32 compute-1 network[58304]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:40:32 compute-1 network[58305]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:40:36 compute-1 sudo[58567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtwltjvglmjunhzphyfveidndrqwvtkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405236.1824656-593-184658590724600/AnsiballZ_systemd.py'
Oct 02 11:40:36 compute-1 sudo[58567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:36 compute-1 python3.9[58569]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:40:36 compute-1 systemd[1]: Reloading.
Oct 02 11:40:36 compute-1 systemd-rc-local-generator[58598]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:36 compute-1 systemd-sysv-generator[58601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:37 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 02 11:40:37 compute-1 iptables.init[58608]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 02 11:40:37 compute-1 iptables.init[58608]: iptables: Flushing firewall rules: [  OK  ]
Oct 02 11:40:37 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Oct 02 11:40:37 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 02 11:40:37 compute-1 sudo[58567]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:37 compute-1 sudo[58802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpqhpkpdyeikoprskcwbzifbemcwnji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405237.4873571-593-172593565082337/AnsiballZ_systemd.py'
Oct 02 11:40:37 compute-1 sudo[58802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:38 compute-1 python3.9[58804]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:40:38 compute-1 sudo[58802]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:38 compute-1 sudo[58956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnabinmyfvwtcdzfhjdsdduuaqfdpxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405238.505415-641-278530163720127/AnsiballZ_systemd.py'
Oct 02 11:40:38 compute-1 sudo[58956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:39 compute-1 python3.9[58958]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:40:39 compute-1 systemd[1]: Reloading.
Oct 02 11:40:39 compute-1 systemd-rc-local-generator[58986]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:40:39 compute-1 systemd-sysv-generator[58991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:40:39 compute-1 systemd[1]: Starting Netfilter Tables...
Oct 02 11:40:39 compute-1 systemd[1]: Finished Netfilter Tables.
Oct 02 11:40:39 compute-1 sudo[58956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:40 compute-1 sudo[59147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcyqxmauxxtdrmzkjpqakwahpeaumey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405239.589259-665-107748528252384/AnsiballZ_command.py'
Oct 02 11:40:40 compute-1 sudo[59147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:40 compute-1 python3.9[59149]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:40:40 compute-1 sudo[59147]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:41 compute-1 sudo[59300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmjqcwthlvvscrclmkvlablibdpvjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405240.7707264-707-173960495954506/AnsiballZ_stat.py'
Oct 02 11:40:41 compute-1 sudo[59300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:41 compute-1 python3.9[59302]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:40:41 compute-1 sudo[59300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:41 compute-1 sudo[59425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrrbyhmtfssfmjgwuanjuvgddhgadgbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405240.7707264-707-173960495954506/AnsiballZ_copy.py'
Oct 02 11:40:41 compute-1 sudo[59425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:40:41 compute-1 python3.9[59427]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405240.7707264-707-173960495954506/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:40:41 compute-1 sudo[59425]: pam_unix(sudo:session): session closed for user root
Oct 02 11:40:42 compute-1 python3.9[59578]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:40:42 compute-1 polkitd[6956]: Registered Authentication Agent for unix-process:59580:310621 (system bus name :1.526 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 11:41:07 compute-1 polkitd[6956]: Unregistered Authentication Agent for unix-process:59580:310621 (system bus name :1.526, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 11:41:07 compute-1 polkitd[6956]: Operator of unix-process:59580:310621 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.525 [<unknown>] (owned by unix-user:zuul)
Oct 02 11:41:07 compute-1 polkit-agent-helper-1[59592]: pam_unix(polkit-1:auth): conversation failed
Oct 02 11:41:07 compute-1 polkit-agent-helper-1[59592]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 02 11:41:08 compute-1 sshd-session[54849]: Connection closed by 192.168.122.30 port 39804
Oct 02 11:41:08 compute-1 sshd-session[54846]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:41:08 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Oct 02 11:41:08 compute-1 systemd[1]: session-15.scope: Consumed 18.302s CPU time.
Oct 02 11:41:08 compute-1 systemd-logind[795]: Session 15 logged out. Waiting for processes to exit.
Oct 02 11:41:08 compute-1 systemd-logind[795]: Removed session 15.
Oct 02 11:41:21 compute-1 sshd-session[59618]: Accepted publickey for zuul from 192.168.122.30 port 48708 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:41:21 compute-1 systemd-logind[795]: New session 16 of user zuul.
Oct 02 11:41:21 compute-1 systemd[1]: Started Session 16 of User zuul.
Oct 02 11:41:21 compute-1 sshd-session[59618]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:41:22 compute-1 python3.9[59771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:41:23 compute-1 sudo[59925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswuzgqcxsyfwowesdhhimrzlsfxdlkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405282.8239343-65-233900976563661/AnsiballZ_file.py'
Oct 02 11:41:23 compute-1 sudo[59925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:23 compute-1 python3.9[59927]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:23 compute-1 sudo[59925]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:24 compute-1 sudo[60100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjscicbrbzbugwrlwvsyceeevnrefccb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405283.683557-89-76197360015381/AnsiballZ_stat.py'
Oct 02 11:41:24 compute-1 sudo[60100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:24 compute-1 python3.9[60102]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:24 compute-1 sudo[60100]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:24 compute-1 sudo[60178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiuroxpoqsnglsqsprlkbfuwqakvwaoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405283.683557-89-76197360015381/AnsiballZ_file.py'
Oct 02 11:41:24 compute-1 sudo[60178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:24 compute-1 python3.9[60180]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.0i2dyvq9 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:24 compute-1 sudo[60178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:25 compute-1 sudo[60330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrewzbwxihstohssuowvphqrnpclkqdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405285.3338058-149-65678275496204/AnsiballZ_stat.py'
Oct 02 11:41:25 compute-1 sudo[60330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:25 compute-1 python3.9[60332]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:25 compute-1 sudo[60330]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:26 compute-1 sudo[60408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrlbgqgqijdshqkvjhejqyhnyatestv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405285.3338058-149-65678275496204/AnsiballZ_file.py'
Oct 02 11:41:26 compute-1 sudo[60408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:26 compute-1 python3.9[60410]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.mlam78ay recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:26 compute-1 sudo[60408]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:26 compute-1 sudo[60560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbjcsbygmcsadhhfqpgzbpmftxbsahs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405286.5488389-188-140024322577205/AnsiballZ_file.py'
Oct 02 11:41:26 compute-1 sudo[60560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:26 compute-1 python3.9[60562]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:27 compute-1 sudo[60560]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:28 compute-1 sudo[60712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iispufdeeyldmwrfqwrgbyytcglumwjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405288.2395456-212-65779951274854/AnsiballZ_stat.py'
Oct 02 11:41:28 compute-1 sudo[60712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:28 compute-1 python3.9[60714]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:28 compute-1 sudo[60712]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:29 compute-1 sudo[60790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guoztupytxvmltxekfyeqlthosbqjixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405288.2395456-212-65779951274854/AnsiballZ_file.py'
Oct 02 11:41:29 compute-1 sudo[60790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:29 compute-1 python3.9[60792]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:29 compute-1 sudo[60790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:29 compute-1 sudo[60942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbnqppgghtpwzhofekiqwhqpnpywtgvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405289.561762-212-141093139858709/AnsiballZ_stat.py'
Oct 02 11:41:29 compute-1 sudo[60942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:30 compute-1 python3.9[60944]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:30 compute-1 sudo[60942]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:30 compute-1 sudo[61020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdetgyxwallogdycywzslmztforombuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405289.561762-212-141093139858709/AnsiballZ_file.py'
Oct 02 11:41:30 compute-1 sudo[61020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:30 compute-1 python3.9[61022]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:41:30 compute-1 sudo[61020]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:31 compute-1 sudo[61172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaulajnwmxcfdwcxopcmdxnszndmrthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405290.8131504-281-193998276829392/AnsiballZ_file.py'
Oct 02 11:41:31 compute-1 sudo[61172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:31 compute-1 python3.9[61174]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:31 compute-1 sudo[61172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:31 compute-1 sudo[61324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fauhrxunpwnsvhqoduqkwhyqziuvgnnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405291.4886842-305-219251424389706/AnsiballZ_stat.py'
Oct 02 11:41:31 compute-1 sudo[61324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:31 compute-1 python3.9[61326]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:32 compute-1 sudo[61324]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:32 compute-1 sudo[61402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecoapgipouvrlnyjgdnokhayfgvaoetx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405291.4886842-305-219251424389706/AnsiballZ_file.py'
Oct 02 11:41:32 compute-1 sudo[61402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:32 compute-1 python3.9[61404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:32 compute-1 sudo[61402]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:32 compute-1 sudo[61554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzsfyusqvihidyhdkildgridqnjncpqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405292.6888354-341-126406551697595/AnsiballZ_stat.py'
Oct 02 11:41:32 compute-1 sudo[61554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:33 compute-1 python3.9[61556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:33 compute-1 sudo[61554]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:33 compute-1 sudo[61632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmuacjnkpdcvttkkrhvomsrcnbbjlooq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405292.6888354-341-126406551697595/AnsiballZ_file.py'
Oct 02 11:41:33 compute-1 sudo[61632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:33 compute-1 python3.9[61634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:33 compute-1 sudo[61632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:34 compute-1 sudo[61784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzidumsigeavkrtcbskygjchjanqhja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405293.8583589-377-199534958353139/AnsiballZ_systemd.py'
Oct 02 11:41:34 compute-1 sudo[61784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:34 compute-1 python3.9[61786]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:41:34 compute-1 systemd[1]: Reloading.
Oct 02 11:41:34 compute-1 systemd-rc-local-generator[61814]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:41:34 compute-1 systemd-sysv-generator[61817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:41:35 compute-1 sudo[61784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:35 compute-1 sudo[61973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lymrsbaamfadarbrpjejkudftmhaflra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405295.40659-401-222092619406058/AnsiballZ_stat.py'
Oct 02 11:41:35 compute-1 sudo[61973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:35 compute-1 python3.9[61975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:35 compute-1 sudo[61973]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:36 compute-1 sudo[62051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enkpwbfxwynfiahmoxshhczlkhyekqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405295.40659-401-222092619406058/AnsiballZ_file.py'
Oct 02 11:41:36 compute-1 sudo[62051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:36 compute-1 python3.9[62053]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:36 compute-1 sudo[62051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:36 compute-1 sudo[62203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynlvsezgvkuhawnayjgmhovowthdiynb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405296.5140162-437-54773071572718/AnsiballZ_stat.py'
Oct 02 11:41:36 compute-1 sudo[62203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:36 compute-1 python3.9[62205]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:37 compute-1 sudo[62203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:37 compute-1 sudo[62281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqrcutpbgruvumddwoqzselrxwauuuth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405296.5140162-437-54773071572718/AnsiballZ_file.py'
Oct 02 11:41:37 compute-1 sudo[62281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:37 compute-1 python3.9[62283]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:37 compute-1 sudo[62281]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:37 compute-1 sudo[62433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbjynvuwnhlmcuppjdmjcfqmcqplejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405297.6218884-473-75611281410688/AnsiballZ_systemd.py'
Oct 02 11:41:37 compute-1 sudo[62433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:38 compute-1 python3.9[62435]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:41:38 compute-1 systemd[1]: Reloading.
Oct 02 11:41:38 compute-1 systemd-rc-local-generator[62462]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:41:38 compute-1 systemd-sysv-generator[62465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:41:38 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 11:41:38 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:41:38 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:41:38 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 11:41:38 compute-1 sudo[62433]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:39 compute-1 python3.9[62626]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:41:39 compute-1 network[62643]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:41:39 compute-1 network[62644]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:41:39 compute-1 network[62645]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:41:44 compute-1 sudo[62906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezkiplcvjltvoxqpziltjqnpjfujirjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405303.8718152-551-70597526656691/AnsiballZ_stat.py'
Oct 02 11:41:44 compute-1 sudo[62906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:44 compute-1 python3.9[62908]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:44 compute-1 sudo[62906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:44 compute-1 sudo[62984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmryvizruusewirrkyioygtqoamxjply ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405303.8718152-551-70597526656691/AnsiballZ_file.py'
Oct 02 11:41:44 compute-1 sudo[62984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:45 compute-1 python3.9[62986]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:45 compute-1 sudo[62984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:45 compute-1 sudo[63136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egvwgrilxdvprlwaeucpauawlbkhwxtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405305.2986608-590-245032962035161/AnsiballZ_file.py'
Oct 02 11:41:45 compute-1 sudo[63136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:45 compute-1 python3.9[63138]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:45 compute-1 sudo[63136]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:46 compute-1 sudo[63288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkjogqvugzowwaaydurbxjjvizcsabou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405305.9864008-614-13313058275821/AnsiballZ_stat.py'
Oct 02 11:41:46 compute-1 sudo[63288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:46 compute-1 python3.9[63290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:46 compute-1 sudo[63288]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:46 compute-1 sudo[63411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhvzbkfjlhfgglbojalysonujtidxkyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405305.9864008-614-13313058275821/AnsiballZ_copy.py'
Oct 02 11:41:46 compute-1 sudo[63411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:47 compute-1 python3.9[63413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405305.9864008-614-13313058275821/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:47 compute-1 sudo[63411]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:48 compute-1 sudo[63563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llhzoiuzepfxenceiwtwljrghzccyurr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405307.6251943-669-82259284383163/AnsiballZ_timezone.py'
Oct 02 11:41:48 compute-1 sudo[63563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:48 compute-1 python3.9[63565]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:41:48 compute-1 systemd[1]: Starting Time & Date Service...
Oct 02 11:41:48 compute-1 systemd[1]: Started Time & Date Service.
Oct 02 11:41:48 compute-1 sudo[63563]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:48 compute-1 sudo[63719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhbpgmaufusxznfmcuzpcltztkuzkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405308.626934-695-19472369356750/AnsiballZ_file.py'
Oct 02 11:41:48 compute-1 sudo[63719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:49 compute-1 python3.9[63721]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:49 compute-1 sudo[63719]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:49 compute-1 sudo[63871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywiedoifmtohtzipehmhanupgczqrnqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405309.3544161-719-46476962704567/AnsiballZ_stat.py'
Oct 02 11:41:49 compute-1 sudo[63871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:49 compute-1 python3.9[63873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:49 compute-1 sudo[63871]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:50 compute-1 sudo[63994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baujvvpullxpyvbwykromvcnyndnipmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405309.3544161-719-46476962704567/AnsiballZ_copy.py'
Oct 02 11:41:50 compute-1 sudo[63994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:50 compute-1 python3.9[63996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405309.3544161-719-46476962704567/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:50 compute-1 sudo[63994]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:51 compute-1 sudo[64146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqcvxhwydmfmlpaqgcfnkiqszpkhgtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405310.9026983-765-219170727835183/AnsiballZ_stat.py'
Oct 02 11:41:51 compute-1 sudo[64146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:51 compute-1 python3.9[64148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:51 compute-1 sudo[64146]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:51 compute-1 sudo[64269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjkadparlqmrvkwisgthwdgqtsfrkyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405310.9026983-765-219170727835183/AnsiballZ_copy.py'
Oct 02 11:41:51 compute-1 sudo[64269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:51 compute-1 python3.9[64271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405310.9026983-765-219170727835183/.source.yaml _original_basename=.6t_2dcps follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:51 compute-1 sudo[64269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:52 compute-1 sudo[64421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhaxpvgdmptrpfwxovqyfkssvradkwwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405312.1590478-809-226399252936899/AnsiballZ_stat.py'
Oct 02 11:41:52 compute-1 sudo[64421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:52 compute-1 python3.9[64423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:52 compute-1 sudo[64421]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:53 compute-1 sudo[64544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nanpiifwzttxgxvmbeknlidahoukgkyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405312.1590478-809-226399252936899/AnsiballZ_copy.py'
Oct 02 11:41:53 compute-1 sudo[64544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:53 compute-1 python3.9[64546]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405312.1590478-809-226399252936899/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:53 compute-1 sudo[64544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:53 compute-1 sudo[64696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enojfdpmijugfpfpqhluglxxjqooytss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405313.43693-854-87644084997242/AnsiballZ_command.py'
Oct 02 11:41:53 compute-1 sudo[64696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:54 compute-1 python3.9[64698]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:41:54 compute-1 sudo[64696]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:54 compute-1 sudo[64849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrmtxdrkwydjgmshfhidmfdqlscoepu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405314.267421-878-7698466233379/AnsiballZ_command.py'
Oct 02 11:41:54 compute-1 sudo[64849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:54 compute-1 python3.9[64851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:41:54 compute-1 sudo[64849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:55 compute-1 sudo[65002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmwpotenbmcyjctmomgwsyxabzcglguo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405314.9409056-902-202264823647305/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:41:55 compute-1 sudo[65002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:55 compute-1 python3[65004]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:41:55 compute-1 sudo[65002]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:56 compute-1 sudo[65154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmmmowjahkxnuvbdjzpxzsobztauyskj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405315.7851572-926-35313362132657/AnsiballZ_stat.py'
Oct 02 11:41:56 compute-1 sudo[65154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:56 compute-1 python3.9[65156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:56 compute-1 sudo[65154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:56 compute-1 sudo[65277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbhedaovcbohjqxzriziuvxriudfvkkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405315.7851572-926-35313362132657/AnsiballZ_copy.py'
Oct 02 11:41:56 compute-1 sudo[65277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:56 compute-1 python3.9[65279]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405315.7851572-926-35313362132657/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:56 compute-1 sudo[65277]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:57 compute-1 sudo[65429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slpbrvvdvtsudecqpsqncacocjwsxwjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405317.0314128-971-196457072997456/AnsiballZ_stat.py'
Oct 02 11:41:57 compute-1 sudo[65429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:57 compute-1 python3.9[65431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:57 compute-1 sudo[65429]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:57 compute-1 sudo[65552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnkizuunxwotbeptvozsyxofwtqpoma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405317.0314128-971-196457072997456/AnsiballZ_copy.py'
Oct 02 11:41:57 compute-1 sudo[65552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:58 compute-1 python3.9[65554]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405317.0314128-971-196457072997456/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:58 compute-1 sudo[65552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:58 compute-1 sudo[65704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbuqshnjpaqhkcsnmvhcgemzcrfevnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405318.2668617-1016-233817359502489/AnsiballZ_stat.py'
Oct 02 11:41:58 compute-1 sudo[65704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:58 compute-1 python3.9[65706]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:58 compute-1 sudo[65704]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:59 compute-1 sudo[65827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugcvyyauattctwudejcmdlmmxyosqpfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405318.2668617-1016-233817359502489/AnsiballZ_copy.py'
Oct 02 11:41:59 compute-1 sudo[65827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:59 compute-1 python3.9[65829]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405318.2668617-1016-233817359502489/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:41:59 compute-1 sudo[65827]: pam_unix(sudo:session): session closed for user root
Oct 02 11:41:59 compute-1 sudo[65979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbhpdlgzepawrfkrkpowknbwvswtatqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405319.4759328-1061-147669845365241/AnsiballZ_stat.py'
Oct 02 11:41:59 compute-1 sudo[65979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:41:59 compute-1 python3.9[65981]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:41:59 compute-1 sudo[65979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:00 compute-1 sudo[66102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avugxtifnijxjvzqplpkzrifxtpfnwed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405319.4759328-1061-147669845365241/AnsiballZ_copy.py'
Oct 02 11:42:00 compute-1 sudo[66102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:00 compute-1 python3.9[66104]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405319.4759328-1061-147669845365241/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:00 compute-1 sudo[66102]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:01 compute-1 sudo[66254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goghzcvkpkjpbkqtcpsmptrbmzbvumrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405320.7212932-1106-158706800665332/AnsiballZ_stat.py'
Oct 02 11:42:01 compute-1 sudo[66254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:01 compute-1 python3.9[66256]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:42:01 compute-1 sudo[66254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:01 compute-1 sudo[66377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgjdliwsvwaxzxgandjikryaeftmineg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405320.7212932-1106-158706800665332/AnsiballZ_copy.py'
Oct 02 11:42:01 compute-1 sudo[66377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:01 compute-1 python3.9[66379]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759405320.7212932-1106-158706800665332/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:01 compute-1 sudo[66377]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:02 compute-1 sudo[66529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zukcqfpsubatokyuqdmtktrsnokxtdxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405322.2227504-1151-88485869359663/AnsiballZ_file.py'
Oct 02 11:42:02 compute-1 sudo[66529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:02 compute-1 python3.9[66531]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:02 compute-1 sudo[66529]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:03 compute-1 sudo[66681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuvqerqzmqfeupswskgvcoxuhbhicvsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405322.9560559-1175-121364827932507/AnsiballZ_command.py'
Oct 02 11:42:03 compute-1 sudo[66681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:03 compute-1 python3.9[66683]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:03 compute-1 sudo[66681]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:04 compute-1 sudo[66840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upfzgoxstevexfdxwrfjzrtxbswptslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405323.752168-1199-270206434898234/AnsiballZ_blockinfile.py'
Oct 02 11:42:04 compute-1 sudo[66840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:04 compute-1 python3.9[66842]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:04 compute-1 sudo[66840]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:04 compute-1 sudo[66993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eloozaoojmhtjdfddsywnutyqdjueije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405324.6649919-1226-6762683488430/AnsiballZ_file.py'
Oct 02 11:42:04 compute-1 sudo[66993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:05 compute-1 python3.9[66995]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:05 compute-1 sudo[66993]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:05 compute-1 sudo[67145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuqzspsegizqplpwwtndgqohxbiguxgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405325.2596002-1226-4219678894096/AnsiballZ_file.py'
Oct 02 11:42:05 compute-1 sudo[67145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:05 compute-1 python3.9[67147]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:05 compute-1 sudo[67145]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:06 compute-1 sudo[67297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uedpjcdceibqxoruxmpmmyipezplvnaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405325.8922787-1271-251241464597942/AnsiballZ_mount.py'
Oct 02 11:42:06 compute-1 sudo[67297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:06 compute-1 python3.9[67299]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:42:06 compute-1 sudo[67297]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:06 compute-1 sudo[67450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkapvgdmhdzthbrjiaruylxltwqkfcgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405326.6390676-1271-91464701898763/AnsiballZ_mount.py'
Oct 02 11:42:06 compute-1 sudo[67450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:07 compute-1 python3.9[67452]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:42:07 compute-1 sudo[67450]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:07 compute-1 sshd-session[59621]: Connection closed by 192.168.122.30 port 48708
Oct 02 11:42:07 compute-1 sshd-session[59618]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:42:07 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Oct 02 11:42:07 compute-1 systemd[1]: session-16.scope: Consumed 29.641s CPU time.
Oct 02 11:42:07 compute-1 systemd-logind[795]: Session 16 logged out. Waiting for processes to exit.
Oct 02 11:42:07 compute-1 systemd-logind[795]: Removed session 16.
Oct 02 11:42:09 compute-1 chronyd[54365]: Selected source 216.128.178.20 (pool.ntp.org)
Oct 02 11:42:12 compute-1 sshd-session[67478]: Accepted publickey for zuul from 192.168.122.30 port 52840 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:42:12 compute-1 systemd-logind[795]: New session 17 of user zuul.
Oct 02 11:42:12 compute-1 systemd[1]: Started Session 17 of User zuul.
Oct 02 11:42:12 compute-1 sshd-session[67478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:42:13 compute-1 sudo[67631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szhteekzwlhuotalicymazyahwrzchea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405332.582417-24-228566545125932/AnsiballZ_tempfile.py'
Oct 02 11:42:13 compute-1 sudo[67631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:13 compute-1 python3.9[67633]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 11:42:13 compute-1 sudo[67631]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:13 compute-1 sudo[67783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfslrvmyycasbgxyhttrguaivhkndbir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405333.4477663-60-152490027101442/AnsiballZ_stat.py'
Oct 02 11:42:13 compute-1 sudo[67783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:14 compute-1 python3.9[67785]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:14 compute-1 sudo[67783]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:14 compute-1 sudo[67935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-padniucepskprnamaqhucjvyjwlqusqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405334.3171046-90-210478708113804/AnsiballZ_setup.py'
Oct 02 11:42:14 compute-1 sudo[67935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:15 compute-1 python3.9[67937]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:15 compute-1 sudo[67935]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:15 compute-1 sudo[68087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajntlrassvficjtvcvnoransdwfjnui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405335.4424183-115-127451993607999/AnsiballZ_blockinfile.py'
Oct 02 11:42:15 compute-1 sudo[68087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:16 compute-1 python3.9[68089]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=
                                             create=True mode=0644 path=/tmp/ansible.3qr617h_ state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:16 compute-1 sudo[68087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:16 compute-1 sudo[68239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omsfwbogodhdnkdmwnxjubjeqjoetlem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405336.309848-139-196081991197474/AnsiballZ_command.py'
Oct 02 11:42:16 compute-1 sudo[68239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:16 compute-1 python3.9[68241]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3qr617h_' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:16 compute-1 sudo[68239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:17 compute-1 sudo[68393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwdfcjeepkbslslrhuxxxzenmicjrul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405337.0661478-163-136871483646881/AnsiballZ_file.py'
Oct 02 11:42:17 compute-1 sudo[68393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:17 compute-1 python3.9[68395]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3qr617h_ state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:17 compute-1 sudo[68393]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:18 compute-1 sshd-session[67481]: Connection closed by 192.168.122.30 port 52840
Oct 02 11:42:18 compute-1 sshd-session[67478]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:42:18 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Oct 02 11:42:18 compute-1 systemd[1]: session-17.scope: Consumed 3.391s CPU time.
Oct 02 11:42:18 compute-1 systemd-logind[795]: Session 17 logged out. Waiting for processes to exit.
Oct 02 11:42:18 compute-1 systemd-logind[795]: Removed session 17.
Oct 02 11:42:18 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:42:23 compute-1 sshd-session[68423]: Accepted publickey for zuul from 192.168.122.30 port 41936 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:42:23 compute-1 systemd-logind[795]: New session 18 of user zuul.
Oct 02 11:42:23 compute-1 systemd[1]: Started Session 18 of User zuul.
Oct 02 11:42:23 compute-1 sshd-session[68423]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:42:24 compute-1 python3.9[68576]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:25 compute-1 sudo[68730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhhqbdfdkpzsgagldmaqmscxbnewmfup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405344.6711512-62-70498796676592/AnsiballZ_systemd.py'
Oct 02 11:42:25 compute-1 sudo[68730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:25 compute-1 python3.9[68732]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:42:25 compute-1 sudo[68730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:26 compute-1 sudo[68884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srduglucvnerovnjmvlertjgdmzfuukq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405345.8304808-86-250112769319363/AnsiballZ_systemd.py'
Oct 02 11:42:26 compute-1 sudo[68884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:26 compute-1 python3.9[68886]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:42:26 compute-1 sudo[68884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:27 compute-1 sudo[69037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpkppecsqolddjcaeesauqopoqkfkhhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405346.713158-113-133169065513795/AnsiballZ_command.py'
Oct 02 11:42:27 compute-1 sudo[69037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:27 compute-1 python3.9[69039]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:27 compute-1 sudo[69037]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:27 compute-1 sudo[69190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixkkdrqnzarpudozppwbjhpodmfchtda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405347.535547-137-61631085136311/AnsiballZ_stat.py'
Oct 02 11:42:27 compute-1 sudo[69190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:28 compute-1 python3.9[69192]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:28 compute-1 sudo[69190]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:28 compute-1 sudo[69344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buegvavleqtsgtjgwxlgiulnhhxmrhcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405348.3544273-161-102874623024663/AnsiballZ_command.py'
Oct 02 11:42:28 compute-1 sudo[69344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:28 compute-1 python3.9[69346]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:28 compute-1 sudo[69344]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:29 compute-1 sudo[69499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlqerkgyesffewhwnxhqcywmksgohqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405349.069699-185-6990006528602/AnsiballZ_file.py'
Oct 02 11:42:29 compute-1 sudo[69499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:29 compute-1 python3.9[69501]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:42:29 compute-1 sudo[69499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:30 compute-1 sshd-session[68426]: Connection closed by 192.168.122.30 port 41936
Oct 02 11:42:30 compute-1 sshd-session[68423]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:42:30 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Oct 02 11:42:30 compute-1 systemd[1]: session-18.scope: Consumed 4.130s CPU time.
Oct 02 11:42:30 compute-1 systemd-logind[795]: Session 18 logged out. Waiting for processes to exit.
Oct 02 11:42:30 compute-1 systemd-logind[795]: Removed session 18.
Oct 02 11:42:35 compute-1 sshd-session[69526]: Accepted publickey for zuul from 192.168.122.30 port 51436 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:42:35 compute-1 systemd-logind[795]: New session 19 of user zuul.
Oct 02 11:42:35 compute-1 systemd[1]: Started Session 19 of User zuul.
Oct 02 11:42:35 compute-1 sshd-session[69526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:42:36 compute-1 python3.9[69679]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:37 compute-1 sudo[69833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqnjvjhnyavqcdaprwlvmtgozitgtros ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405356.991969-68-268068422104129/AnsiballZ_setup.py'
Oct 02 11:42:37 compute-1 sudo[69833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:37 compute-1 python3.9[69835]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:42:38 compute-1 sudo[69833]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:38 compute-1 sudo[69917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pusccobhsvssqcizvvcqtppttsarubmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405356.991969-68-268068422104129/AnsiballZ_dnf.py'
Oct 02 11:42:38 compute-1 sudo[69917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:38 compute-1 python3.9[69919]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:42:39 compute-1 sudo[69917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:40 compute-1 python3.9[70070]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:42:42 compute-1 python3.9[70221]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:42:42 compute-1 python3.9[70371]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:42 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:42:43 compute-1 python3.9[70522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:42:43 compute-1 sshd-session[69529]: Connection closed by 192.168.122.30 port 51436
Oct 02 11:42:43 compute-1 sshd-session[69526]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:42:43 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Oct 02 11:42:43 compute-1 systemd[1]: session-19.scope: Consumed 5.555s CPU time.
Oct 02 11:42:43 compute-1 systemd-logind[795]: Session 19 logged out. Waiting for processes to exit.
Oct 02 11:42:43 compute-1 systemd-logind[795]: Removed session 19.
Oct 02 11:42:51 compute-1 sshd-session[70547]: Accepted publickey for zuul from 38.129.56.116 port 36088 ssh2: RSA SHA256:kF187RjowWfVB0Eh8J6+KYVujBZ/IQN67xGI3Wy/+nI
Oct 02 11:42:51 compute-1 systemd-logind[795]: New session 20 of user zuul.
Oct 02 11:42:51 compute-1 systemd[1]: Started Session 20 of User zuul.
Oct 02 11:42:51 compute-1 sshd-session[70547]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:42:51 compute-1 sudo[70623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivsojfvxngbwkrbjfwjugdybksnsadll ; /usr/bin/python3'
Oct 02 11:42:51 compute-1 sudo[70623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-1 useradd[70627]: new group: name=ceph-admin, GID=42478
Oct 02 11:42:52 compute-1 useradd[70627]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 02 11:42:52 compute-1 sudo[70623]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:52 compute-1 sudo[70709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwntcjzfaokojmjolrhpvksrbwvozlk ; /usr/bin/python3'
Oct 02 11:42:52 compute-1 sudo[70709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:52 compute-1 sudo[70709]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-1 sudo[70782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzinynktibavikxeyihhmjojzmlalhhe ; /usr/bin/python3'
Oct 02 11:42:53 compute-1 sudo[70782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:53 compute-1 sudo[70782]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-1 sudo[70832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybozdthpupkbuzlwncnszrtcahurbmim ; /usr/bin/python3'
Oct 02 11:42:53 compute-1 sudo[70832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:53 compute-1 sudo[70832]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:53 compute-1 sudo[70858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgmrdqwaejqizfxkdtphsygxzzmnucro ; /usr/bin/python3'
Oct 02 11:42:53 compute-1 sudo[70858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-1 sudo[70858]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:54 compute-1 sudo[70884]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnpfwcoipaohqihgrmzsxtauzereejnu ; /usr/bin/python3'
Oct 02 11:42:54 compute-1 sudo[70884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-1 sudo[70884]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:54 compute-1 sudo[70910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojfhvobzlowovaptuygrzucrrkfownf ; /usr/bin/python3'
Oct 02 11:42:54 compute-1 sudo[70910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:54 compute-1 sudo[70910]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:55 compute-1 sudo[70988]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjeipocwhngfliefjyajdmlzgdjwwnwo ; /usr/bin/python3'
Oct 02 11:42:55 compute-1 sudo[70988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-1 sudo[70988]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:55 compute-1 sudo[71061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wugchiyiekinbmprkdhvypbiobsytczk ; /usr/bin/python3'
Oct 02 11:42:55 compute-1 sudo[71061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:55 compute-1 sudo[71061]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:56 compute-1 sudo[71163]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmwlsclonxgdivilkvhhrabnnzjhovw ; /usr/bin/python3'
Oct 02 11:42:56 compute-1 sudo[71163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:56 compute-1 sudo[71163]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:56 compute-1 sudo[71236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkjzcmtdyeuezzetvjkunrsmkvmtkrmh ; /usr/bin/python3'
Oct 02 11:42:56 compute-1 sudo[71236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:56 compute-1 sudo[71236]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:57 compute-1 sudo[71286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgxbchysgitcawbgtxamhqunyjathddc ; /usr/bin/python3'
Oct 02 11:42:57 compute-1 sudo[71286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:57 compute-1 python3[71288]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:42:58 compute-1 sudo[71286]: pam_unix(sudo:session): session closed for user root
Oct 02 11:42:58 compute-1 sudo[71381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbdhthshjvkdfkipsrqrybtxrftgpue ; /usr/bin/python3'
Oct 02 11:42:58 compute-1 sudo[71381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:42:59 compute-1 python3[71383]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 02 11:43:00 compute-1 sudo[71381]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:00 compute-1 sudo[71408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzdnmkvudsyxfbpydrkrvmflnbxzoaiv ; /usr/bin/python3'
Oct 02 11:43:00 compute-1 sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:00 compute-1 python3[71410]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:43:00 compute-1 sudo[71408]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:00 compute-1 sudo[71434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbmnhikpqzkrawwtzvzlzwlvdpyblied ; /usr/bin/python3'
Oct 02 11:43:00 compute-1 sudo[71434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:01 compute-1 python3[71436]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:01 compute-1 kernel: loop: module loaded
Oct 02 11:43:01 compute-1 kernel: loop3: detected capacity change from 0 to 14680064
Oct 02 11:43:01 compute-1 sudo[71434]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:01 compute-1 sudo[71469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrmtfoqbewyqzwzmhhuvdxhbdljovphs ; /usr/bin/python3'
Oct 02 11:43:01 compute-1 sudo[71469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:01 compute-1 python3[71471]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:43:01 compute-1 lvm[71474]: PV /dev/loop3 not used.
Oct 02 11:43:01 compute-1 lvm[71483]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:43:01 compute-1 sudo[71469]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:01 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 02 11:43:01 compute-1 lvm[71485]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 02 11:43:01 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 02 11:43:02 compute-1 sudo[71561]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfontzaogjerheaaqycjwuuoleirlqxo ; /usr/bin/python3'
Oct 02 11:43:02 compute-1 sudo[71561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:02 compute-1 python3[71563]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 02 11:43:02 compute-1 sudo[71561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:02 compute-1 sudo[71634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gliytlcjnwtdndlvrpkswdmcqhwfmsxi ; /usr/bin/python3'
Oct 02 11:43:02 compute-1 sudo[71634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:02 compute-1 python3[71636]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759405382.253875-33446-278457987895800/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:43:02 compute-1 sudo[71634]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:03 compute-1 sudo[71684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gypscnjancunjwlysunnwmsuhdaiggzl ; /usr/bin/python3'
Oct 02 11:43:03 compute-1 sudo[71684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:43:03 compute-1 python3[71686]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:43:03 compute-1 systemd[1]: Reloading.
Oct 02 11:43:03 compute-1 systemd-rc-local-generator[71717]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:43:03 compute-1 systemd-sysv-generator[71720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:43:03 compute-1 systemd[1]: Starting Ceph OSD losetup...
Oct 02 11:43:03 compute-1 bash[71727]: /dev/loop3: [64513]:4349018 (/var/lib/ceph-osd-0.img)
Oct 02 11:43:03 compute-1 systemd[1]: Finished Ceph OSD losetup.
Oct 02 11:43:04 compute-1 lvm[71728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:43:04 compute-1 lvm[71728]: VG ceph_vg0 finished
Oct 02 11:43:04 compute-1 sudo[71684]: pam_unix(sudo:session): session closed for user root
Oct 02 11:43:06 compute-1 python3[71752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:43:52 compute-1 PackageKit[31077]: daemon quit
Oct 02 11:43:52 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 11:44:01 compute-1 anacron[1073]: Job `cron.daily' started
Oct 02 11:44:01 compute-1 anacron[1073]: Job `cron.daily' terminated
Oct 02 11:44:57 compute-1 sshd-session[71799]: Accepted publickey for ceph-admin from 192.168.122.100 port 35138 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:57 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct 02 11:44:57 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 02 11:44:57 compute-1 systemd-logind[795]: New session 21 of user ceph-admin.
Oct 02 11:44:57 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 02 11:44:57 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct 02 11:44:57 compute-1 systemd[71803]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:57 compute-1 systemd[71803]: Queued start job for default target Main User Target.
Oct 02 11:44:57 compute-1 systemd[71803]: Created slice User Application Slice.
Oct 02 11:44:57 compute-1 systemd[71803]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 11:44:57 compute-1 systemd[71803]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:44:57 compute-1 systemd[71803]: Reached target Paths.
Oct 02 11:44:57 compute-1 systemd[71803]: Reached target Timers.
Oct 02 11:44:57 compute-1 sshd-session[71816]: Accepted publickey for ceph-admin from 192.168.122.100 port 47834 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:57 compute-1 systemd[71803]: Starting D-Bus User Message Bus Socket...
Oct 02 11:44:57 compute-1 systemd[71803]: Starting Create User's Volatile Files and Directories...
Oct 02 11:44:57 compute-1 systemd[71803]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:44:57 compute-1 systemd[71803]: Reached target Sockets.
Oct 02 11:44:57 compute-1 systemd-logind[795]: New session 23 of user ceph-admin.
Oct 02 11:44:57 compute-1 systemd[71803]: Finished Create User's Volatile Files and Directories.
Oct 02 11:44:57 compute-1 systemd[71803]: Reached target Basic System.
Oct 02 11:44:57 compute-1 systemd[71803]: Reached target Main User Target.
Oct 02 11:44:57 compute-1 systemd[71803]: Startup finished in 122ms.
Oct 02 11:44:57 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct 02 11:44:57 compute-1 systemd[1]: Started Session 21 of User ceph-admin.
Oct 02 11:44:57 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Oct 02 11:44:57 compute-1 sshd-session[71799]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:57 compute-1 sshd-session[71816]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:57 compute-1 sudo[71823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:57 compute-1 sudo[71823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:57 compute-1 sudo[71823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-1 sudo[71848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:44:57 compute-1 sudo[71848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:57 compute-1 sudo[71848]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-1 sshd-session[71873]: Accepted publickey for ceph-admin from 192.168.122.100 port 47838 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:57 compute-1 systemd-logind[795]: New session 24 of user ceph-admin.
Oct 02 11:44:57 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Oct 02 11:44:57 compute-1 sshd-session[71873]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:57 compute-1 sudo[71877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:57 compute-1 sudo[71877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:57 compute-1 sudo[71877]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:57 compute-1 sudo[71902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Oct 02 11:44:57 compute-1 sudo[71902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:57 compute-1 sudo[71902]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sshd-session[71927]: Accepted publickey for ceph-admin from 192.168.122.100 port 47852 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:58 compute-1 systemd-logind[795]: New session 25 of user ceph-admin.
Oct 02 11:44:58 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Oct 02 11:44:58 compute-1 sshd-session[71927]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:58 compute-1 sudo[71931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:58 compute-1 sudo[71931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[71931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sudo[71956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:44:58 compute-1 sudo[71956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[71956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sshd-session[71981]: Accepted publickey for ceph-admin from 192.168.122.100 port 47864 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:58 compute-1 systemd-logind[795]: New session 26 of user ceph-admin.
Oct 02 11:44:58 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Oct 02 11:44:58 compute-1 sshd-session[71981]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:58 compute-1 sudo[71985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:58 compute-1 sudo[71985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[71985]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sudo[72010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:58 compute-1 sudo[72010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[72010]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sshd-session[72035]: Accepted publickey for ceph-admin from 192.168.122.100 port 47868 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:58 compute-1 systemd-logind[795]: New session 27 of user ceph-admin.
Oct 02 11:44:58 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Oct 02 11:44:58 compute-1 sshd-session[72035]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:58 compute-1 sudo[72039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:58 compute-1 sudo[72039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[72039]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:58 compute-1 sudo[72064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:58 compute-1 sudo[72064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:58 compute-1 sudo[72064]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-1 sshd-session[72089]: Accepted publickey for ceph-admin from 192.168.122.100 port 47882 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:59 compute-1 systemd-logind[795]: New session 28 of user ceph-admin.
Oct 02 11:44:59 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Oct 02 11:44:59 compute-1 sshd-session[72089]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:59 compute-1 sudo[72093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:59 compute-1 sudo[72093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:59 compute-1 sudo[72093]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-1 sudo[72118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:44:59 compute-1 sudo[72118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:59 compute-1 sudo[72118]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-1 sshd-session[72143]: Accepted publickey for ceph-admin from 192.168.122.100 port 47896 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:59 compute-1 systemd-logind[795]: New session 29 of user ceph-admin.
Oct 02 11:44:59 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Oct 02 11:44:59 compute-1 sshd-session[72143]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:44:59 compute-1 sudo[72147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:44:59 compute-1 sudo[72147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:59 compute-1 sudo[72147]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-1 sudo[72172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:44:59 compute-1 sudo[72172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:44:59 compute-1 sudo[72172]: pam_unix(sudo:session): session closed for user root
Oct 02 11:44:59 compute-1 sshd-session[72197]: Accepted publickey for ceph-admin from 192.168.122.100 port 47906 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:44:59 compute-1 systemd-logind[795]: New session 30 of user ceph-admin.
Oct 02 11:44:59 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Oct 02 11:44:59 compute-1 sshd-session[72197]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:45:00 compute-1 sudo[72201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:00 compute-1 sudo[72201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-1 sudo[72201]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:00 compute-1 sudo[72226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Oct 02 11:45:00 compute-1 sudo[72226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-1 sudo[72226]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:00 compute-1 sshd-session[72251]: Accepted publickey for ceph-admin from 192.168.122.100 port 47914 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:45:00 compute-1 systemd-logind[795]: New session 31 of user ceph-admin.
Oct 02 11:45:00 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Oct 02 11:45:00 compute-1 sshd-session[72251]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:45:00 compute-1 sshd-session[72278]: Accepted publickey for ceph-admin from 192.168.122.100 port 47926 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:45:00 compute-1 systemd-logind[795]: New session 32 of user ceph-admin.
Oct 02 11:45:00 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Oct 02 11:45:00 compute-1 sshd-session[72278]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:45:00 compute-1 sudo[72282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:00 compute-1 sudo[72282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-1 sudo[72282]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:00 compute-1 sudo[72307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Oct 02 11:45:00 compute-1 sudo[72307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:00 compute-1 sudo[72307]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sshd-session[72332]: Accepted publickey for ceph-admin from 192.168.122.100 port 47934 ssh2: RSA SHA256:kA0TB/Djp/K+F/Rn+QSMI4m5Frd7N6TJlbFUF2u90C4
Oct 02 11:45:01 compute-1 systemd-logind[795]: New session 33 of user ceph-admin.
Oct 02 11:45:01 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Oct 02 11:45:01 compute-1 sshd-session[72332]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 02 11:45:01 compute-1 sudo[72336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:01 compute-1 sudo[72336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 sudo[72336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sudo[72361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-1
Oct 02 11:45:01 compute-1 sudo[72361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:01 compute-1 sudo[72361]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sudo[72405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:01 compute-1 sudo[72405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 sudo[72405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sudo[72430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:01 compute-1 sudo[72430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 sudo[72430]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sudo[72455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:01 compute-1 sudo[72455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 sudo[72455]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:01 compute-1 sudo[72480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:45:01 compute-1 sudo[72480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:02 compute-1 sudo[72480]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:02 compute-1 sudo[72525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72525]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:02 compute-1 sudo[72550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72550]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:02 compute-1 sudo[72575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72575]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:45:02 compute-1 sudo[72600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:02 compute-1 sudo[72600]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:02 compute-1 sudo[72660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72660]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:02 compute-1 sudo[72685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:02 compute-1 sudo[72710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 sudo[72710]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:02 compute-1 sudo[72735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:45:02 compute-1 sudo[72735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:02 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72772 (sysctl)
Oct 02 11:45:02 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 02 11:45:02 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 02 11:45:03 compute-1 sudo[72735]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:03 compute-1 sudo[72795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72795]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:03 compute-1 sudo[72820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72820]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:03 compute-1 sudo[72845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72845]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 11:45:03 compute-1 sudo[72870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:03 compute-1 sudo[72870]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:03 compute-1 sudo[72913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72913]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:03 compute-1 sudo[72938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72938]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:03 compute-1 sudo[72963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 sudo[72963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:03 compute-1 sudo[72988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:45:03 compute-1 sudo[72988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:03 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3952850408-lower\x2dmapped.mount: Deactivated successfully.
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.307471463 +0000 UTC m=+13.294168593 container create 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 11:45:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1968279995-merged.mount: Deactivated successfully.
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.287032555 +0000 UTC m=+13.273729695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:17 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 02 11:45:17 compute-1 systemd[1]: Started libpod-conmon-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope.
Oct 02 11:45:17 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.39767453 +0000 UTC m=+13.384371680 container init 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.405054471 +0000 UTC m=+13.391751601 container start 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.408710696 +0000 UTC m=+13.395407856 container attach 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:17 compute-1 focused_noether[73112]: 167 167
Oct 02 11:45:17 compute-1 systemd[1]: libpod-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope: Deactivated successfully.
Oct 02 11:45:17 compute-1 conmon[73112]: conmon 285edd22b6298345db44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope/container/memory.events
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.413758903 +0000 UTC m=+13.400456043 container died 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:45:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-08bc5421ffcd701119682e76aea2a50ef0664bdbb2706dcb2e32ef694ab4d421-merged.mount: Deactivated successfully.
Oct 02 11:45:17 compute-1 podman[73049]: 2025-10-02 11:45:17.458138189 +0000 UTC m=+13.444835319 container remove 285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 11:45:17 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:17 compute-1 systemd[1]: libpod-conmon-285edd22b6298345db44ec27a37828548e4740d80aa1f1592109fc64dab6ee9f.scope: Deactivated successfully.
Oct 02 11:45:17 compute-1 podman[73135]: 2025-10-02 11:45:17.640635381 +0000 UTC m=+0.050130268 container create c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:17 compute-1 systemd[1]: Started libpod-conmon-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope.
Oct 02 11:45:17 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:17 compute-1 podman[73135]: 2025-10-02 11:45:17.713479645 +0000 UTC m=+0.122974542 container init c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:17 compute-1 podman[73135]: 2025-10-02 11:45:17.620624055 +0000 UTC m=+0.030118972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:17 compute-1 podman[73135]: 2025-10-02 11:45:17.719165423 +0000 UTC m=+0.128660310 container start c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:17 compute-1 podman[73135]: 2025-10-02 11:45:17.722533239 +0000 UTC m=+0.132028126 container attach c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]: [
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:     {
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "available": false,
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "ceph_device": false,
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "lsm_data": {},
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "lvs": [],
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "path": "/dev/sr0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "rejected_reasons": [
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "Insufficient space (<5GB)",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "Has a FileSystem"
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         ],
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         "sys_api": {
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "actuators": null,
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "device_nodes": "sr0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "devname": "sr0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "human_readable_size": "482.00 KB",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "id_bus": "ata",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "model": "QEMU DVD-ROM",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "nr_requests": "2",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "parent": "/dev/sr0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "partitions": {},
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "path": "/dev/sr0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "removable": "1",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "rev": "2.5+",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "ro": "0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "rotational": "0",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "sas_address": "",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "sas_device_handle": "",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "scheduler_mode": "mq-deadline",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "sectors": 0,
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "sectorsize": "2048",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "size": 493568.0,
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "support_discard": "2048",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "type": "disk",
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:             "vendor": "QEMU"
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:         }
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]:     }
Oct 02 11:45:18 compute-1 nostalgic_maxwell[73151]: ]
Oct 02 11:45:18 compute-1 systemd[1]: libpod-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Deactivated successfully.
Oct 02 11:45:18 compute-1 podman[73135]: 2025-10-02 11:45:18.790396855 +0000 UTC m=+1.199891752 container died c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:18 compute-1 systemd[1]: libpod-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Consumed 1.054s CPU time.
Oct 02 11:45:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-a3e017226a99ed4066cdbd2954b2092646828811192a7888b4f874a4b599ca4a-merged.mount: Deactivated successfully.
Oct 02 11:45:18 compute-1 podman[73135]: 2025-10-02 11:45:18.842507003 +0000 UTC m=+1.252001890 container remove c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_maxwell, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:18 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:18 compute-1 systemd[1]: libpod-conmon-c73cad2c3354263479631514787c1c9ec5781838289ec8aa0e4ffcb562d55899.scope: Deactivated successfully.
Oct 02 11:45:18 compute-1 sudo[72988]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:18 compute-1 sudo[74145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:18 compute-1 sudo[74145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:18 compute-1 sudo[74145]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:45:19 compute-1 sudo[74170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74170]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74195]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:45:19 compute-1 sudo[74220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74220]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74245]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:45:19 compute-1 sudo[74270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74270]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:19 compute-1 sudo[74320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74345]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:45:19 compute-1 sudo[74370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74370]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:45:19 compute-1 sudo[74443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74443]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74468]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:45:19 compute-1 sudo[74493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74493]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:45:19 compute-1 sudo[74543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74543]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:19 compute-1 sudo[74568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74568]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:19 compute-1 sudo[74593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:45:19 compute-1 sudo[74593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:19 compute-1 sudo[74593]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74618]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:45:20 compute-1 sudo[74643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74643]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74668]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:45:20 compute-1 sudo[74693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74718]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:20 compute-1 sudo[74743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74743]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74768]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:45:20 compute-1 sudo[74793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74841]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:45:20 compute-1 sudo[74866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74866]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74891]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:45:20 compute-1 sudo[74916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74916]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74941]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:45:20 compute-1 sudo[74966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74966]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[74991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[74991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[74991]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[75016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:45:20 compute-1 sudo[75016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[75016]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[75041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:20 compute-1 sudo[75041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[75041]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:20 compute-1 sudo[75066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:45:20 compute-1 sudo[75066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:20 compute-1 sudo[75066]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75091]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:45:21 compute-1 sudo[75116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75141]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:21 compute-1 sudo[75166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75166]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75191]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:45:21 compute-1 sudo[75216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75216]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:45:21 compute-1 sudo[75289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75289]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75314]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new
Oct 02 11:45:21 compute-1 sudo[75339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75339]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75364]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 02 11:45:21 compute-1 sudo[75389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75389]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:45:21 compute-1 sudo[75439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75439]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75464]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:45:21 compute-1 sudo[75489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75489]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:21 compute-1 sudo[75514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:21 compute-1 sudo[75514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:21 compute-1 sudo[75514]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:45:22 compute-1 sudo[75539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75564]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:22 compute-1 sudo[75589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75589]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75614]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:45:22 compute-1 sudo[75639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75639]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75687]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:45:22 compute-1 sudo[75712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75712]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new
Oct 02 11:45:22 compute-1 sudo[75762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75787]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:45:22 compute-1 sudo[75812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75812]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75837]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:22 compute-1 sudo[75862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75862]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:22 compute-1 sudo[75887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:22 compute-1 sudo[75887]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:22 compute-1 sudo[75912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:22 compute-1 sudo[75912]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:23 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:23 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.224852273 +0000 UTC m=+0.021176783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.355218035 +0000 UTC m=+0.151542525 container create d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:23 compute-1 systemd[1]: Started libpod-conmon-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope.
Oct 02 11:45:23 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.502532476 +0000 UTC m=+0.298856996 container init d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.510394653 +0000 UTC m=+0.306719143 container start d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:45:23 compute-1 silly_tharp[75994]: 167 167
Oct 02 11:45:23 compute-1 systemd[1]: libpod-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope: Deactivated successfully.
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.522073857 +0000 UTC m=+0.318398337 container attach d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.522901773 +0000 UTC m=+0.319226263 container died d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:23 compute-1 podman[75978]: 2025-10-02 11:45:23.775123391 +0000 UTC m=+0.571447891 container remove d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_tharp, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:23 compute-1 systemd[1]: libpod-conmon-d1abca056c994a1776c9ded402e671ea06b8189e073aa1d96eef672d80659588.scope: Deactivated successfully.
Oct 02 11:45:23 compute-1 systemd[1]: Reloading.
Oct 02 11:45:24 compute-1 systemd-rc-local-generator[76040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:24 compute-1 systemd-sysv-generator[76043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:24 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:24 compute-1 systemd[1]: Reloading.
Oct 02 11:45:24 compute-1 systemd-sysv-generator[76081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:24 compute-1 systemd-rc-local-generator[76077]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:24 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Oct 02 11:45:24 compute-1 systemd[1]: Reloading.
Oct 02 11:45:24 compute-1 systemd-rc-local-generator[76115]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:24 compute-1 systemd-sysv-generator[76119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:24 compute-1 systemd[1]: Reached target Ceph cluster 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:45:24 compute-1 systemd[1]: Reloading.
Oct 02 11:45:24 compute-1 systemd-rc-local-generator[76153]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:24 compute-1 systemd-sysv-generator[76157]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:24 compute-1 systemd[1]: Reloading.
Oct 02 11:45:25 compute-1 systemd-sysv-generator[76195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:25 compute-1 systemd-rc-local-generator[76192]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:25 compute-1 systemd[1]: Created slice Slice /system/ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:45:25 compute-1 systemd[1]: Reached target System Time Set.
Oct 02 11:45:25 compute-1 systemd[1]: Reached target System Time Synchronized.
Oct 02 11:45:25 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:45:25 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:25 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 02 11:45:25 compute-1 podman[76248]: 2025-10-02 11:45:25.465871593 +0000 UTC m=+0.026788248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:25 compute-1 podman[76248]: 2025-10-02 11:45:25.793023939 +0000 UTC m=+0.353940614 container create f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct 02 11:45:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d1438bf3de6c240f49b3f8d950266bc2a30e683f3cbad5f0fed80c4238e08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:25 compute-1 podman[76248]: 2025-10-02 11:45:25.870029599 +0000 UTC m=+0.430946284 container init f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct 02 11:45:25 compute-1 podman[76248]: 2025-10-02 11:45:25.875321901 +0000 UTC m=+0.436238556 container start f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:25 compute-1 bash[76248]: f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13
Oct 02 11:45:25 compute-1 systemd[1]: Started Ceph crash.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:45:25 compute-1 sudo[75912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-1 sudo[76269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:26 compute-1 sudo[76269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-1 sudo[76269]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 02 11:45:26 compute-1 sudo[76294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:26 compute-1 sudo[76294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-1 sudo[76294]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-1 sudo[76321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:26 compute-1 sudo[76321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-1 sudo[76321]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:26 compute-1 sudo[76346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 02 11:45:26 compute-1 sudo[76346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.267+0000 7f6ed9021640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.267+0000 7f6ed9021640 -1 AuthRegistry(0x7f6ed4067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.268+0000 7f6ed9021640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.268+0000 7f6ed9021640 -1 AuthRegistry(0x7f6ed9020000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.271+0000 7f6ed2d76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: 2025-10-02T11:45:26.271+0000 7f6ed9021640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 02 11:45:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1[76264]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.45588733 +0000 UTC m=+0.020511817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.555037066 +0000 UTC m=+0.119661523 container create 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:26 compute-1 systemd[1]: Started libpod-conmon-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope.
Oct 02 11:45:26 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.672962525 +0000 UTC m=+0.237587002 container init 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.680611579 +0000 UTC m=+0.245236036 container start 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:26 compute-1 eloquent_merkle[76437]: 167 167
Oct 02 11:45:26 compute-1 systemd[1]: libpod-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope: Deactivated successfully.
Oct 02 11:45:26 compute-1 conmon[76437]: conmon 8e546f436cb90b08eea5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope/container/memory.events
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.693470381 +0000 UTC m=+0.258094838 container attach 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.694750261 +0000 UTC m=+0.259374718 container died 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-920c3e546ac272887279f259713cf828e3e402e51da866ad1f3b0b68923470b6-merged.mount: Deactivated successfully.
Oct 02 11:45:26 compute-1 podman[76420]: 2025-10-02 11:45:26.759169066 +0000 UTC m=+0.323793523 container remove 8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:26 compute-1 systemd[1]: libpod-conmon-8e546f436cb90b08eea55f5c3e77fba43f7fd3a1a894a9755e52196e968ccd5a.scope: Deactivated successfully.
Oct 02 11:45:26 compute-1 podman[76461]: 2025-10-02 11:45:26.898350064 +0000 UTC m=+0.039678822 container create 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:45:26 compute-1 systemd[1]: Started libpod-conmon-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope.
Oct 02 11:45:26 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:26 compute-1 podman[76461]: 2025-10-02 11:45:26.973458616 +0000 UTC m=+0.114787394 container init 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:45:26 compute-1 podman[76461]: 2025-10-02 11:45:26.879428187 +0000 UTC m=+0.020756975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:26 compute-1 podman[76461]: 2025-10-02 11:45:26.983603306 +0000 UTC m=+0.124932064 container start 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct 02 11:45:26 compute-1 podman[76461]: 2025-10-02 11:45:26.986964169 +0000 UTC m=+0.128292927 container attach 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct 02 11:45:27 compute-1 dreamy_nightingale[76477]: --> passed data devices: 0 physical, 1 LVM
Oct 02 11:45:27 compute-1 dreamy_nightingale[76477]: --> relative data size: 1.0
Oct 02 11:45:27 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:45:27 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6e4de194-9f54-490b-9be5-cb1e4c11649b
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 02 11:45:28 compute-1 lvm[76525]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 11:45:28 compute-1 lvm[76525]: VG ceph_vg0 finished
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]:  stderr: got monmap epoch 1
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: --> Creating keyring file for osd.0
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct 02 11:45:28 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6e4de194-9f54-490b-9be5-cb1e4c11649b --setuser ceph --setgroup ceph
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]:  stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]:  stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]:  stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]:  stderr: 2025-10-02T11:45:28.949+0000 7f2652122740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: --> ceph-volume lvm activate successful for osd ID: 0
Oct 02 11:45:31 compute-1 dreamy_nightingale[76477]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 02 11:45:31 compute-1 systemd[1]: libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Deactivated successfully.
Oct 02 11:45:31 compute-1 systemd[1]: libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Consumed 2.411s CPU time.
Oct 02 11:45:31 compute-1 conmon[76477]: conmon 33415ab6b7e7f3a3e038 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope/container/memory.events
Oct 02 11:45:31 compute-1 podman[77444]: 2025-10-02 11:45:31.57368317 +0000 UTC m=+0.029441959 container died 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct 02 11:45:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-bad878e37bdf8c22fbf2f2f2d95ca629fa95aa6a5b677b086ac022b4682b040f-merged.mount: Deactivated successfully.
Oct 02 11:45:32 compute-1 podman[77444]: 2025-10-02 11:45:32.036121284 +0000 UTC m=+0.491880023 container remove 33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nightingale, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:32 compute-1 systemd[1]: libpod-conmon-33415ab6b7e7f3a3e038d50ea279a7c4c7e639c5cbd9cb24615672c794455d38.scope: Deactivated successfully.
Oct 02 11:45:32 compute-1 sudo[76346]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:32 compute-1 sudo[77460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:32 compute-1 sudo[77460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:32 compute-1 sudo[77460]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:32 compute-1 sudo[77485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:32 compute-1 sudo[77485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:32 compute-1 sudo[77485]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:32 compute-1 sudo[77510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:32 compute-1 sudo[77510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:32 compute-1 sudo[77510]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:32 compute-1 sudo[77535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- lvm list --format json
Oct 02 11:45:32 compute-1 sudo[77535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.620659754 +0000 UTC m=+0.066705056 container create fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.579417716 +0000 UTC m=+0.025463108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:32 compute-1 systemd[1]: Started libpod-conmon-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope.
Oct 02 11:45:32 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.745373951 +0000 UTC m=+0.191419283 container init fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.753344784 +0000 UTC m=+0.199390096 container start fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 11:45:32 compute-1 gallant_gagarin[77616]: 167 167
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.757223562 +0000 UTC m=+0.203268894 container attach fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:32 compute-1 systemd[1]: libpod-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope: Deactivated successfully.
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.759709559 +0000 UTC m=+0.205754871 container died fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-d150a0381db1a2c5ad16dd209bc4dbb21a1e4783e6a05e5b0a5c0bc0aa43c120-merged.mount: Deactivated successfully.
Oct 02 11:45:32 compute-1 podman[77599]: 2025-10-02 11:45:32.798256305 +0000 UTC m=+0.244301607 container remove fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:45:32 compute-1 systemd[1]: libpod-conmon-fa98fff69ae2405ea21e2c3833fcd13a0e8e2aba7e14f6f21792fb5c38475101.scope: Deactivated successfully.
Oct 02 11:45:32 compute-1 podman[77640]: 2025-10-02 11:45:32.943412915 +0000 UTC m=+0.038949279 container create a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:32 compute-1 systemd[1]: Started libpod-conmon-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope.
Oct 02 11:45:33 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:32.925031515 +0000 UTC m=+0.020567889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:33.030411881 +0000 UTC m=+0.125948255 container init a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:33.038662272 +0000 UTC m=+0.134198626 container start a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:33.04282887 +0000 UTC m=+0.138365224 container attach a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]: {
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:     "0": [
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:         {
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "devices": [
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "/dev/loop3"
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             ],
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "lv_name": "ceph_lv0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "lv_size": "7511998464",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=20fdc58c-b037-5094-a8ef-d490aa7c36f3,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6e4de194-9f54-490b-9be5-cb1e4c11649b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "lv_uuid": "NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "name": "ceph_lv0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "tags": {
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.block_uuid": "NOlLw0-B6eL-n4qO-Nw8l-35dX-ZVQD-2catnQ",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.cephx_lockbox_secret": "",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.cluster_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.cluster_name": "ceph",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.crush_device_class": "",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.encrypted": "0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.osd_fsid": "6e4de194-9f54-490b-9be5-cb1e4c11649b",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.osd_id": "0",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.type": "block",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:                 "ceph.vdo": "0"
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             },
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "type": "block",
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:             "vg_name": "ceph_vg0"
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:         }
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]:     ]
Oct 02 11:45:33 compute-1 relaxed_shirley[77657]: }
Oct 02 11:45:33 compute-1 systemd[1]: libpod-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope: Deactivated successfully.
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:33.905626873 +0000 UTC m=+1.001163227 container died a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:45:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-374c3f4a1e5facfc8aa879624c7938d7e1be233d93b160fa69748384941d46be-merged.mount: Deactivated successfully.
Oct 02 11:45:33 compute-1 podman[77640]: 2025-10-02 11:45:33.984700696 +0000 UTC m=+1.080237050 container remove a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:33 compute-1 systemd[1]: libpod-conmon-a81680453c8f6dc919b8ce022a0e9f61a5be4b3be847dcdb98788f448db62cf3.scope: Deactivated successfully.
Oct 02 11:45:34 compute-1 sudo[77535]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-1 sudo[77681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:34 compute-1 sudo[77681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-1 sudo[77681]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-1 sudo[77706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:34 compute-1 sudo[77706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-1 sudo[77706]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-1 sudo[77731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:34 compute-1 sudo[77731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-1 sudo[77731]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:34 compute-1 sudo[77756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:45:34 compute-1 sudo[77756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.656082627 +0000 UTC m=+0.037121043 container create b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:34 compute-1 systemd[1]: Started libpod-conmon-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope.
Oct 02 11:45:34 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.733998136 +0000 UTC m=+0.115036592 container init b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.640524482 +0000 UTC m=+0.021562938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.740417402 +0000 UTC m=+0.121455828 container start b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:45:34 compute-1 agitated_neumann[77837]: 167 167
Oct 02 11:45:34 compute-1 systemd[1]: libpod-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope: Deactivated successfully.
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.745887358 +0000 UTC m=+0.126925804 container attach b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.746903259 +0000 UTC m=+0.127941705 container died b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-a3a500e9f629486f9ced20a87e07ee8d1dd82f45325ffabdcbf1eca82f5ffc95-merged.mount: Deactivated successfully.
Oct 02 11:45:34 compute-1 podman[77821]: 2025-10-02 11:45:34.787922982 +0000 UTC m=+0.168961398 container remove b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:34 compute-1 systemd[1]: libpod-conmon-b705583eec38c42611e39f1f69782e9f9e65ccd0d1718cd2a283d5f55ca389f1.scope: Deactivated successfully.
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.041357277 +0000 UTC m=+0.043204070 container create feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:35 compute-1 systemd[1]: Started libpod-conmon-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope.
Oct 02 11:45:35 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.019332954 +0000 UTC m=+0.021179787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.123608027 +0000 UTC m=+0.125454840 container init feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.131934991 +0000 UTC m=+0.133781784 container start feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.137502511 +0000 UTC m=+0.139349324 container attach feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:35 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct 02 11:45:35 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]:                             [--no-systemd] [--no-tmpfs]
Oct 02 11:45:35 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test[77885]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 02 11:45:35 compute-1 systemd[1]: libpod-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope: Deactivated successfully.
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.782560729 +0000 UTC m=+0.784407532 container died feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 02 11:45:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-e6f43ce7d33fb842b5c8c27d52417032294a43a1996733d693c1ed06ae5a856c-merged.mount: Deactivated successfully.
Oct 02 11:45:35 compute-1 podman[77869]: 2025-10-02 11:45:35.848047898 +0000 UTC m=+0.849894691 container remove feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct 02 11:45:35 compute-1 systemd[1]: libpod-conmon-feb88d817d25190298955c1a44bfa59f96f0bd189fd3847b1bb87013c3f245a5.scope: Deactivated successfully.
Oct 02 11:45:36 compute-1 systemd[1]: Reloading.
Oct 02 11:45:36 compute-1 systemd-rc-local-generator[77951]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:36 compute-1 systemd-sysv-generator[77954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:36 compute-1 systemd[1]: Reloading.
Oct 02 11:45:36 compute-1 systemd-rc-local-generator[77988]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:45:36 compute-1 systemd-sysv-generator[77993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:45:36 compute-1 systemd[1]: Starting Ceph osd.0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:45:36 compute-1 podman[78050]: 2025-10-02 11:45:36.909450972 +0000 UTC m=+0.039996211 container create 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:36 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:36 compute-1 podman[78050]: 2025-10-02 11:45:36.978367735 +0000 UTC m=+0.108912994 container init 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 11:45:36 compute-1 podman[78050]: 2025-10-02 11:45:36.890665719 +0000 UTC m=+0.021210988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:36 compute-1 podman[78050]: 2025-10-02 11:45:36.986986519 +0000 UTC m=+0.117531758 container start 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:36 compute-1 podman[78050]: 2025-10-02 11:45:36.989937579 +0000 UTC m=+0.120482818 container attach 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:37 compute-1 bash[78050]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct 02 11:45:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate[78065]: --> ceph-volume raw activate successful for osd ID: 0
Oct 02 11:45:37 compute-1 bash[78050]: --> ceph-volume raw activate successful for osd ID: 0
Oct 02 11:45:37 compute-1 systemd[1]: libpod-7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b.scope: Deactivated successfully.
Oct 02 11:45:37 compute-1 podman[78050]: 2025-10-02 11:45:37.877696354 +0000 UTC m=+1.008241603 container died 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-fc27f0cdee14262d1d6c1463d735dcb8e09bb5a2d16d46ffeb230a5f7e49d95b-merged.mount: Deactivated successfully.
Oct 02 11:45:37 compute-1 podman[78050]: 2025-10-02 11:45:37.924635817 +0000 UTC m=+1.055181056 container remove 7a287feff2035f58456e18e1c6870665b718df3fbabdc79cc3774df40c675e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct 02 11:45:38 compute-1 podman[78242]: 2025-10-02 11:45:38.132340456 +0000 UTC m=+0.037768084 container create 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:45:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e670487e98e9176891ddc35e4011afc8199ee0801be7c59c3d0883412e44ca88/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:38 compute-1 podman[78242]: 2025-10-02 11:45:38.116268016 +0000 UTC m=+0.021695664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:38 compute-1 podman[78242]: 2025-10-02 11:45:38.221329352 +0000 UTC m=+0.126757010 container init 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct 02 11:45:38 compute-1 podman[78242]: 2025-10-02 11:45:38.239384723 +0000 UTC m=+0.144812371 container start 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:38 compute-1 bash[78242]: 284fc2c9a45dca72d4af3b8a44ba1a2846a1ba4489e2b844019e9d39cb93ceef
Oct 02 11:45:38 compute-1 systemd[1]: Started Ceph osd.0 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:45:38 compute-1 ceph-osd[78262]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:45:38 compute-1 ceph-osd[78262]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct 02 11:45:38 compute-1 ceph-osd[78262]: pidfile_write: ignore empty --pid-file
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:38 compute-1 sudo[77756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559434477800 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 11:45:38 compute-1 sudo[78275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:38 compute-1 sudo[78275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-1 sudo[78275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-1 sudo[78300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:38 compute-1 sudo[78300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-1 sudo[78300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-1 sudo[78325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:38 compute-1 sudo[78325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x559433635800 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 11:45:38 compute-1 sudo[78325]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:38 compute-1 sudo[78350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- raw list --format json
Oct 02 11:45:38 compute-1 sudo[78350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:38 compute-1 ceph-osd[78262]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct 02 11:45:38 compute-1 ceph-osd[78262]: load: jerasure load: lrc 
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:38 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 11:45:38 compute-1 podman[78423]: 2025-10-02 11:45:38.940685827 +0000 UTC m=+0.037693031 container create 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct 02 11:45:38 compute-1 systemd[1]: Started libpod-conmon-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope.
Oct 02 11:45:38 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:39.00860471 +0000 UTC m=+0.105611934 container init 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:39.01482408 +0000 UTC m=+0.111831284 container start 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:39.018075779 +0000 UTC m=+0.115083013 container attach 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 02 11:45:39 compute-1 naughty_curran[78440]: 167 167
Oct 02 11:45:39 compute-1 systemd[1]: libpod-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope: Deactivated successfully.
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:39.02103932 +0000 UTC m=+0.118046524 container died 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:38.925446832 +0000 UTC m=+0.022454056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-c79364f72c8a10f58de88b20c2e9c30a25b389ae38d33567e2cd8768cedcdc89-merged.mount: Deactivated successfully.
Oct 02 11:45:39 compute-1 podman[78423]: 2025-10-02 11:45:39.058230085 +0000 UTC m=+0.155237289 container remove 7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct 02 11:45:39 compute-1 systemd[1]: libpod-conmon-7b7ec1e5541ae54095277593ec9e997b9bbf6aeb3c9aae700b5526e96c0c61f6.scope: Deactivated successfully.
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 11:45:39 compute-1 podman[78468]: 2025-10-02 11:45:39.207673526 +0000 UTC m=+0.044129918 container create 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:45:39 compute-1 systemd[1]: Started libpod-conmon-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope.
Oct 02 11:45:39 compute-1 podman[78468]: 2025-10-02 11:45:39.186769268 +0000 UTC m=+0.023225690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:39 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:39 compute-1 podman[78468]: 2025-10-02 11:45:39.307798132 +0000 UTC m=+0.144254534 container init 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 11:45:39 compute-1 podman[78468]: 2025-10-02 11:45:39.315829197 +0000 UTC m=+0.152285589 container start 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:39 compute-1 podman[78468]: 2025-10-02 11:45:39.321057767 +0000 UTC m=+0.157514169 container attach 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f8c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs mount
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs mount shared_bdev_used = 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Git sha 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DB SUMMARY
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DB Session ID:  BRAAO5BX4M9V7H9L0YQQ
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                     Options.env: 0x5594344c9c70
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                Options.info_log: 0x5594336b2ba0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.write_buffer_manager: 0x5594345d2460
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.row_cache: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.wal_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.wal_compression: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Compression algorithms supported:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZSTD supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b2600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b25c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a8430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539370770, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539370955, "job": 1, "event": "recovery_finished"}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: freelist init
Oct 02 11:45:39 compute-1 ceph-osd[78262]: freelist _read_cfg
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs umount
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) close
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bdev(0x5594344f9400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs mount
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluefs mount shared_bdev_used = 4718592
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Git sha 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DB SUMMARY
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DB Session ID:  BRAAO5BX4M9V7H9L0YQR
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                     Options.env: 0x5594336f4690
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                Options.info_log: 0x5594336b38a0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                                 Options.wal_dir: db.wal
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.write_buffer_manager: 0x5594345d2460
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.row_cache: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                              Options.wal_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.wal_compression: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_background_jobs: 4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Compression algorithms supported:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZSTD supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55943368fb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9610
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:           Options.merge_operator: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5594336b3e40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5594336a9770
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.write_buffer_size: 16777216
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.max_write_buffer_number: 64
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.compression: LZ4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.num_levels: 7
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539641203, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539658499, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539662597, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539665419, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405539, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "dc3ffa87-f2ef-46c3-8e65-ad4082adbcb2", "db_session_id": "BRAAO5BX4M9V7H9L0YQR", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405539666993, "job": 1, "event": "recovery_finished"}
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55943443fc00
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: DB pointer 0x5594345bba00
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct 02 11:45:39 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:45:39 compute-1 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 02 11:45:39 compute-1 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 02 11:45:39 compute-1 ceph-osd[78262]: _get_class not permitted to load lua
Oct 02 11:45:39 compute-1 ceph-osd[78262]: _get_class not permitted to load sdk
Oct 02 11:45:39 compute-1 ceph-osd[78262]: _get_class not permitted to load test_remote_reads
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 load_pgs
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 load_pgs opened 0 pgs
Oct 02 11:45:39 compute-1 ceph-osd[78262]: osd.0 0 log_to_monitors true
Oct 02 11:45:39 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:39.737+0000 7f15232fa740 -1 osd.0 0 log_to_monitors true
Oct 02 11:45:40 compute-1 modest_lumiere[78485]: {
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:     "6e4de194-9f54-490b-9be5-cb1e4c11649b": {
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:         "ceph_fsid": "20fdc58c-b037-5094-a8ef-d490aa7c36f3",
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:         "osd_id": 0,
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:         "osd_uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b",
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:         "type": "bluestore"
Oct 02 11:45:40 compute-1 modest_lumiere[78485]:     }
Oct 02 11:45:40 compute-1 modest_lumiere[78485]: }
Oct 02 11:45:40 compute-1 systemd[1]: libpod-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope: Deactivated successfully.
Oct 02 11:45:40 compute-1 podman[78468]: 2025-10-02 11:45:40.202024134 +0000 UTC m=+1.038480526 container died 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:45:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-cdfc13959ba2aaf0b5085ef239856fe0307d23f441e223473a417ec8ec292a2b-merged.mount: Deactivated successfully.
Oct 02 11:45:40 compute-1 podman[78468]: 2025-10-02 11:45:40.25726041 +0000 UTC m=+1.093716802 container remove 4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:40 compute-1 systemd[1]: libpod-conmon-4acef55898d38d2ac4a55f9ad886b8aca274e9a69556ccc77ff1e1064bea5ba1.scope: Deactivated successfully.
Oct 02 11:45:40 compute-1 sudo[78350]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 sudo[78929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-1 sudo[78929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-1 sudo[78929]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 sudo[78954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:45:40 compute-1 sudo[78954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-1 sudo[78954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 sudo[78979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-1 sudo[78979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-1 sudo[78979]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 sudo[79004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:40 compute-1 sudo[79004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-1 sudo[79004]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 02 11:45:40 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 02 11:45:40 compute-1 sudo[79029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:40 compute-1 sudo[79029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:40 compute-1 sudo[79029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:40 compute-1 sudo[79054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:45:40 compute-1 sudo[79054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:41 compute-1 podman[79152]: 2025-10-02 11:45:41.279681535 +0000 UTC m=+0.050921095 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 done with init, starting boot process
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 start_boot
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 02 11:45:41 compute-1 ceph-osd[78262]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct 02 11:45:41 compute-1 podman[79152]: 2025-10-02 11:45:41.466061314 +0000 UTC m=+0.237300854 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 11:45:41 compute-1 sudo[79054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:41 compute-1 sudo[79202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:41 compute-1 sudo[79202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:41 compute-1 sudo[79202]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:41 compute-1 sudo[79227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:45:41 compute-1 sudo[79227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:41 compute-1 sudo[79227]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:41 compute-1 sudo[79252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:45:41 compute-1 sudo[79252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:41 compute-1 sudo[79252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:41 compute-1 sudo[79277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 11:45:41 compute-1 sudo[79277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.157832917 +0000 UTC m=+0.035541925 container create 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:42 compute-1 systemd[1]: Started libpod-conmon-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope.
Oct 02 11:45:42 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.140260821 +0000 UTC m=+0.017969769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.784858755 +0000 UTC m=+0.662567713 container init 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.792081415 +0000 UTC m=+0.669790333 container start 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:42 compute-1 epic_tu[79357]: 167 167
Oct 02 11:45:42 compute-1 systemd[1]: libpod-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope: Deactivated successfully.
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.807862257 +0000 UTC m=+0.685571195 container attach 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.808628041 +0000 UTC m=+0.686336959 container died 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:45:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-5aa1e38a254794e44c4daf7b95e6b6b51b74f7f9a82639d7693c0761288b8aae-merged.mount: Deactivated successfully.
Oct 02 11:45:42 compute-1 podman[79341]: 2025-10-02 11:45:42.891656024 +0000 UTC m=+0.769364942 container remove 49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct 02 11:45:42 compute-1 systemd[1]: libpod-conmon-49ff69093038616a1baac832893049990b306ea54cbec1606917e01aba0f5def.scope: Deactivated successfully.
Oct 02 11:45:43 compute-1 podman[79383]: 2025-10-02 11:45:43.038759854 +0000 UTC m=+0.046315544 container create d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:45:43 compute-1 systemd[1]: Started libpod-conmon-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope.
Oct 02 11:45:43 compute-1 podman[79383]: 2025-10-02 11:45:43.01175138 +0000 UTC m=+0.019307080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:45:43 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:45:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:45:43 compute-1 podman[79383]: 2025-10-02 11:45:43.131707591 +0000 UTC m=+0.139263301 container init d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:45:43 compute-1 podman[79383]: 2025-10-02 11:45:43.13723242 +0000 UTC m=+0.144788110 container start d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:45:43 compute-1 podman[79383]: 2025-10-02 11:45:43.144567973 +0000 UTC m=+0.152123653 container attach d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]: [
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:     {
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "available": false,
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "ceph_device": false,
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "lsm_data": {},
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "lvs": [],
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "path": "/dev/sr0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "rejected_reasons": [
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "Has a FileSystem",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "Insufficient space (<5GB)"
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         ],
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         "sys_api": {
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "actuators": null,
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "device_nodes": "sr0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "devname": "sr0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "human_readable_size": "482.00 KB",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "id_bus": "ata",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "model": "QEMU DVD-ROM",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "nr_requests": "2",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "parent": "/dev/sr0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "partitions": {},
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "path": "/dev/sr0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "removable": "1",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "rev": "2.5+",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "ro": "0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "rotational": "0",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "sas_address": "",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "sas_device_handle": "",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "scheduler_mode": "mq-deadline",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "sectors": 0,
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "sectorsize": "2048",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "size": 493568.0,
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "support_discard": "2048",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "type": "disk",
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:             "vendor": "QEMU"
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:         }
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]:     }
Oct 02 11:45:44 compute-1 practical_heyrovsky[79399]: ]
Oct 02 11:45:44 compute-1 systemd[1]: libpod-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Deactivated successfully.
Oct 02 11:45:44 compute-1 systemd[1]: libpod-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Consumed 1.106s CPU time.
Oct 02 11:45:44 compute-1 podman[80537]: 2025-10-02 11:45:44.281402271 +0000 UTC m=+0.022836288 container died d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:45:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-54b0dc58f0badb2b884074c08e98a8525bf91edfb214c5e14c0ab1b91dc3e4af-merged.mount: Deactivated successfully.
Oct 02 11:45:44 compute-1 podman[80537]: 2025-10-02 11:45:44.433851343 +0000 UTC m=+0.175285310 container remove d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:45:44 compute-1 systemd[1]: libpod-conmon-d81829a1eeb61d2a1d9946b0e895398efe4cf6041e67f91a4e36a89099e9ddfe.scope: Deactivated successfully.
Oct 02 11:45:44 compute-1 sudo[79277]: pam_unix(sudo:session): session closed for user root
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 27.497 iops: 7039.207 elapsed_sec: 0.426
Oct 02 11:45:44 compute-1 ceph-osd[78262]: log_channel(cluster) log [WRN] : OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 0 waiting for initial osdmap
Oct 02 11:45:44 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:44.610+0000 7f151f27a640 -1 osd.0 0 waiting for initial osdmap
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 set_numa_affinity not setting numa affinity
Oct 02 11:45:44 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-osd-0[78258]: 2025-10-02T11:45:44.640+0000 7f151a8a2640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 02 11:45:44 compute-1 ceph-osd[78262]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct 02 11:45:45 compute-1 ceph-osd[78262]: osd.0 9 state: booting -> active
Oct 02 11:45:46 compute-1 ceph-osd[78262]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 11:45:46 compute-1 ceph-osd[78262]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct 02 11:45:46 compute-1 ceph-osd[78262]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 11:45:46 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:45:47 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:08 compute-1 sudo[80553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:08 compute-1 sudo[80553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:08 compute-1 sudo[80553]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:08 compute-1 sudo[80578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:08 compute-1 sudo[80578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:08 compute-1 sudo[80578]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:08 compute-1 sudo[80603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:08 compute-1 sudo[80603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:08 compute-1 sudo[80603]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:08 compute-1 sudo[80628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:08 compute-1 sudo[80628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.19090485 +0000 UTC m=+0.047034176 container create 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 11:46:09 compute-1 systemd[1]: Started libpod-conmon-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope.
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.169487397 +0000 UTC m=+0.025616703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:09 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.280879186 +0000 UTC m=+0.137008502 container init 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.291312914 +0000 UTC m=+0.147442200 container start 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.295131011 +0000 UTC m=+0.151260317 container attach 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 02 11:46:09 compute-1 gracious_ramanujan[80708]: 167 167
Oct 02 11:46:09 compute-1 systemd[1]: libpod-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope: Deactivated successfully.
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.297088941 +0000 UTC m=+0.153218227 container died 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 11:46:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-945293095da3f6cfb9d42e4fce570aec171744e7c8860fa892c0f8d28fdff091-merged.mount: Deactivated successfully.
Oct 02 11:46:09 compute-1 podman[80692]: 2025-10-02 11:46:09.33572056 +0000 UTC m=+0.191849846 container remove 6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_ramanujan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:46:09 compute-1 systemd[1]: libpod-conmon-6a66aa32a3bd5a2a9b268028511595f586f2d9fe61838034e29e1a0f0e444729.scope: Deactivated successfully.
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.414828904 +0000 UTC m=+0.045998735 container create 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:09 compute-1 systemd[1]: Started libpod-conmon-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope.
Oct 02 11:46:09 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:46:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.390764 +0000 UTC m=+0.021933841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.494439904 +0000 UTC m=+0.125609755 container init 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.499388505 +0000 UTC m=+0.130558326 container start 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.502055786 +0000 UTC m=+0.133225607 container attach 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct 02 11:46:09 compute-1 systemd[1]: libpod-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope: Deactivated successfully.
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.565018708 +0000 UTC m=+0.196188519 container died 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct 02 11:46:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-cbe4ecd5bffe35882eb9857d04ae68f2bffc64777bbd412d09a39069c15eb381-merged.mount: Deactivated successfully.
Oct 02 11:46:09 compute-1 podman[80726]: 2025-10-02 11:46:09.597479059 +0000 UTC m=+0.228648880 container remove 8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:46:09 compute-1 systemd[1]: libpod-conmon-8caa789ea916136b85ca87b4eb8dc179954617b508d336cd44df53876f333a60.scope: Deactivated successfully.
Oct 02 11:46:09 compute-1 systemd[1]: Reloading.
Oct 02 11:46:09 compute-1 systemd-sysv-generator[80808]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:46:09 compute-1 systemd-rc-local-generator[80805]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:46:09 compute-1 systemd[1]: Reloading.
Oct 02 11:46:09 compute-1 systemd-sysv-generator[80851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:46:09 compute-1 systemd-rc-local-generator[80848]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:46:10 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:46:10 compute-1 podman[80906]: 2025-10-02 11:46:10.341033043 +0000 UTC m=+0.034941347 container create 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/583e02d67b8547f3673d3bb37f4cbcf8f84e7a85e5135212db4d8f8dd380adf2/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:10 compute-1 podman[80906]: 2025-10-02 11:46:10.395271708 +0000 UTC m=+0.089180012 container init 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 02 11:46:10 compute-1 podman[80906]: 2025-10-02 11:46:10.403561571 +0000 UTC m=+0.097469875 container start 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mon-compute-1, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:46:10 compute-1 bash[80906]: 1e591a1b941303c1fcb965e5b095981d6989e8d3735ab0e0c04e12b0689c7f10
Oct 02 11:46:10 compute-1 podman[80906]: 2025-10-02 11:46:10.325791448 +0000 UTC m=+0.019699772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:10 compute-1 systemd[1]: Started Ceph mon.compute-1 for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:46:10 compute-1 ceph-mon[80926]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pidfile_write: ignore empty --pid-file
Oct 02 11:46:10 compute-1 ceph-mon[80926]: load: jerasure load: lrc 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: RocksDB version: 7.9.2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Git sha 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Compile date 2025-05-06 23:30:25
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: DB SUMMARY
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: DB Session ID:  FDMBSZ550JKCBY0GVN5D
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: CURRENT file:  CURRENT
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: IDENTITY file:  IDENTITY
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                         Options.error_if_exists: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.create_if_missing: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                         Options.paranoid_checks: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                                     Options.env: 0x559103106c40
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                                Options.info_log: 0x5591049fefc0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.max_file_opening_threads: 16
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                              Options.statistics: (nil)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                               Options.use_fsync: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.max_log_file_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                         Options.allow_fallocate: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.use_direct_reads: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.create_missing_column_families: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                              Options.db_log_dir: 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                                 Options.wal_dir: 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.advise_random_on_open: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                    Options.write_buffer_manager: 0x559104a0eb40
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                            Options.rate_limiter: (nil)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.unordered_write: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                               Options.row_cache: None
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                              Options.wal_filter: None
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.allow_ingest_behind: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.two_write_queues: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.manual_wal_flush: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.wal_compression: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.atomic_flush: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.log_readahead_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.allow_data_in_errors: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.db_host_id: __hostname__
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.max_background_jobs: 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.max_background_compactions: -1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.max_subcompactions: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.max_total_wal_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                          Options.max_open_files: -1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                          Options.bytes_per_sync: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:       Options.compaction_readahead_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.max_background_flushes: -1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Compression algorithms supported:
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kZSTD supported: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kXpressCompression supported: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kBZip2Compression supported: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kLZ4Compression supported: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kZlibCompression supported: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kLZ4HCCompression supported: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         kSnappyCompression supported: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:           Options.merge_operator: 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:        Options.compaction_filter: None
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:        Options.compaction_filter_factory: None
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:  Options.sst_partitioner_factory: None
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5591049fec00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5591049f71f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:        Options.write_buffer_size: 33554432
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:  Options.max_write_buffer_number: 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.compression: NoCompression
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:       Options.prefix_extractor: nullptr
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.num_levels: 7
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.compression_opts.level: 32767
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:               Options.compression_opts.strategy: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                  Options.compression_opts.enabled: false
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.arena_block_size: 1048576
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.disable_auto_compactions: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.inplace_update_support: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                           Options.bloom_locality: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                    Options.max_successive_merges: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.paranoid_file_checks: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.force_consistency_checks: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.report_bg_io_stats: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                               Options.ttl: 2592000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                       Options.enable_blob_files: false
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                           Options.min_blob_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                          Options.blob_file_size: 268435456
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb:                Options.blob_file_starting_level: 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f39ba2d7-ed25-4935-a0d2-2c1c33353d32
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570441523, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570443644, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405570443726, "job": 1, "event": "recovery_finished"}
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559104a20e00
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: DB pointer 0x559104b28000
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:46:10 compute-1 sudo[80628]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct 02 11:46:10 compute-1 ceph-mon[80926]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct 02 11:46:10 compute-1 ceph-mon[80926]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(???) e0 preinit fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Deploying daemon crash.compute-1 on compute-1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2994638250' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6e4de194-9f54-490b-9be5-cb1e4c11649b"}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e4: 1 total, 0 up, 1 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1136146219' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3e590da2-9176-4197-8be9-66fc8d360a0c"}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e5: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2339471251' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3789933639' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Deploying daemon osd.1 on compute-0
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Deploying daemon osd.0 on compute-1
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3557356877' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e6: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e7: 2 total, 0 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: purged_snaps scrub starts
Oct 02 11:46:10 compute-1 ceph-mon[80926]: purged_snaps scrub ok
Oct 02 11:46:10 compute-1 ceph-mon[80926]: purged_snaps scrub starts
Oct 02 11:46:10 compute-1 ceph-mon[80926]: purged_snaps scrub ok
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: OSD bench result of 9008.887835 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osd.1 [v2:192.168.122.100:6802/993231012,v1:192.168.122.100:6803/993231012] boot
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e8: 2 total, 1 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Adjusting osd_memory_target on compute-1 to  5247M
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: OSD bench result of 7039.207197 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Unable to set osd_memory_target on compute-0 to 134065766: error parsing value: Value '134065766' is below minimum 939524096
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osd.0 [v2:192.168.122.101:6800/3815319485,v1:192.168.122.101:6801/3815319485] boot
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e9: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e10: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v42: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e11: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mgrmap e9: compute-0.unmtoh(active, since 80s)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v45: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v46: 1 pgs: 1 unknown; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.client.admin.keyring
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:10 compute-1 ceph-mon[80926]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Deploying daemon mon.compute-2 on compute-2
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 02 11:46:10 compute-1 ceph-mon[80926]: Cluster is now healthy
Oct 02 11:46:10 compute-1 ceph-mon[80926]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 02 11:46:16 compute-1 ceph-mon[80926]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct 02 11:46:16 compute-1 ceph-mon[80926]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct 02 11:46:16 compute-1 ceph-mon[80926]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 02 11:46:16 compute-1 ceph-mon[80926]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct 02 11:46:19 compute-1 ceph-mon[80926]: Deploying daemon mon.compute-1 on compute-1
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-0 calling monitor election
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 02 11:46:19 compute-1 ceph-mon[80926]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-1 ceph-mon[80926]: monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:19 compute-1 ceph-mon[80926]: fsmap 
Oct 02 11:46:19 compute-1 ceph-mon[80926]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mgrmap e9: compute-0.unmtoh(active, since 105s)
Oct 02 11:46:19 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2025-10-02T11:46:09.534898Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,os=Linux}
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-0 calling monitor election
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-2 calling monitor election
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-1 calling monitor election
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 02 11:46:19 compute-1 ceph-mon[80926]: monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct 02 11:46:19 compute-1 ceph-mon[80926]: fsmap 
Oct 02 11:46:19 compute-1 ceph-mon[80926]: osdmap e12: 2 total, 2 up, 2 in
Oct 02 11:46:19 compute-1 ceph-mon[80926]: mgrmap e9: compute-0.unmtoh(active, since 111s)
Oct 02 11:46:19 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.kvxdhw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e12 _set_new_cache_sizes cache_size:1019935369 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:20 compute-1 ceph-mon[80926]: Deploying daemon mgr.compute-2.kvxdhw on compute-2
Oct 02 11:46:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:20 compute-1 ceph-mon[80926]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 11:46:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e13 e13: 2 total, 2 up, 2 in
Oct 02 11:46:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:21 compute-1 sudo[80965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:21 compute-1 sudo[80965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:21 compute-1 sudo[80965]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:21 compute-1 sudo[80990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:21 compute-1 sudo[80990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:21 compute-1 sudo[80990]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:21 compute-1 sudo[81015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:21 compute-1 sudo[81015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:21 compute-1 sudo[81015]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:21 compute-1 sudo[81040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:21 compute-1 sudo[81040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e14 e14: 2 total, 2 up, 2 in
Oct 02 11:46:21 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2292528460' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:21 compute-1 ceph-mon[80926]: osdmap e13: 2 total, 2 up, 2 in
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wtokkj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:46:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.038262663 +0000 UTC m=+0.039239849 container create ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:22 compute-1 systemd[1]: Started libpod-conmon-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope.
Oct 02 11:46:22 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.109333042 +0000 UTC m=+0.110310248 container init ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.01947673 +0000 UTC m=+0.020453946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.115777609 +0000 UTC m=+0.116754795 container start ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.119478502 +0000 UTC m=+0.120455718 container attach ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:22 compute-1 blissful_blackwell[81121]: 167 167
Oct 02 11:46:22 compute-1 systemd[1]: libpod-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope: Deactivated successfully.
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.121701989 +0000 UTC m=+0.122679205 container died ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:46:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-99f43a9ed3c76f8cdef597f312971f5b286a21ad11d076905189caa9b47bb68d-merged.mount: Deactivated successfully.
Oct 02 11:46:22 compute-1 podman[81105]: 2025-10-02 11:46:22.152722216 +0000 UTC m=+0.153699402 container remove ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_blackwell, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:46:22 compute-1 systemd[1]: libpod-conmon-ee6b1be3baa8d158f5de816fc02f9775ed1dcd4e783a129b95e4e636ff6a55f6.scope: Deactivated successfully.
Oct 02 11:46:22 compute-1 systemd[1]: Reloading.
Oct 02 11:46:22 compute-1 systemd-rc-local-generator[81168]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:46:22 compute-1 systemd-sysv-generator[81172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:46:22 compute-1 systemd[1]: Reloading.
Oct 02 11:46:22 compute-1 systemd-sysv-generator[81212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:46:22 compute-1 systemd-rc-local-generator[81208]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:46:22 compute-1 systemd[1]: Starting Ceph mgr.compute-1.wtokkj for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:46:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e15 e15: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-1 ceph-mon[80926]: Deploying daemon mgr.compute-1.wtokkj on compute-1
Oct 02 11:46:22 compute-1 ceph-mon[80926]: osdmap e14: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-1 ceph-mon[80926]: pgmap v64: 2 pgs: 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/315550621' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:22 compute-1 ceph-mon[80926]: osdmap e15: 2 total, 2 up, 2 in
Oct 02 11:46:22 compute-1 podman[81263]: 2025-10-02 11:46:22.919375995 +0000 UTC m=+0.038009892 container create 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7bb107e412adf9a357075c04fc0687a240cfdcb951d04cf0b3c1303996b13d/merged/var/lib/ceph/mgr/ceph-compute-1.wtokkj supports timestamps until 2038 (0x7fffffff)
Oct 02 11:46:22 compute-1 podman[81263]: 2025-10-02 11:46:22.967221235 +0000 UTC m=+0.085855132 container init 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct 02 11:46:22 compute-1 podman[81263]: 2025-10-02 11:46:22.974301231 +0000 UTC m=+0.092935128 container start 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:46:22 compute-1 bash[81263]: 7339018a450e0ae84fceb4dca5f91ceea73ff12c304dd445d3a5af88ca9800c1
Oct 02 11:46:22 compute-1 podman[81263]: 2025-10-02 11:46:22.901976724 +0000 UTC m=+0.020610641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:46:22 compute-1 systemd[1]: Started Ceph mgr.compute-1.wtokkj for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:46:23 compute-1 sudo[81040]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: pidfile_write: ignore empty --pid-file
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'alerts'
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'balancer'
Oct 02 11:46:23 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:23.411+0000 7f8f3746f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:46:23 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'cephadm'
Oct 02 11:46:23 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:23.672+0000 7f8f3746f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 02 11:46:23 compute-1 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 02 11:46:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct 02 11:46:24 compute-1 ceph-mon[80926]: Deploying daemon crash.compute-2 on compute-2
Oct 02 11:46:24 compute-1 ceph-mon[80926]: osdmap e16: 2 total, 2 up, 2 in
Oct 02 11:46:24 compute-1 ceph-mon[80926]: pgmap v67: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:24 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct 02 11:46:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e17 _set_new_cache_sizes cache_size:1020053289 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:25 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'crash'
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1583078942' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:25 compute-1 ceph-mon[80926]: osdmap e17: 2 total, 2 up, 2 in
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:25 compute-1 ceph-mon[80926]: pgmap v69: 4 pgs: 2 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct 02 11:46:26 compute-1 ceph-mgr[81282]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:46:26 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'dashboard'
Oct 02 11:46:26 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:26.021+0000 7f8f3746f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 02 11:46:27 compute-1 ceph-mon[80926]: osdmap e18: 2 total, 2 up, 2 in
Oct 02 11:46:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct 02 11:46:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct 02 11:46:27 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'devicehealth'
Oct 02 11:46:27 compute-1 ceph-mgr[81282]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:46:27 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'diskprediction_local'
Oct 02 11:46:27 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:27.848+0000 7f8f3746f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 02 11:46:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2601359451' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:28 compute-1 ceph-mon[80926]: osdmap e19: 2 total, 2 up, 2 in
Oct 02 11:46:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3177347678' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct 02 11:46:28 compute-1 ceph-mon[80926]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]: dispatch
Oct 02 11:46:28 compute-1 ceph-mon[80926]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "7e9b39ac-5928-4949-8bce-29a1be4f628f"}]': finished
Oct 02 11:46:28 compute-1 ceph-mon[80926]: osdmap e20: 3 total, 2 up, 3 in
Oct 02 11:46:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:28 compute-1 ceph-mon[80926]: pgmap v73: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:28 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 02 11:46:28 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 02 11:46:28 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]:   from numpy import show_config as show_numpy_config
Oct 02 11:46:28 compute-1 ceph-mgr[81282]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:46:28 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'influx'
Oct 02 11:46:28 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:28.407+0000 7f8f3746f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 02 11:46:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct 02 11:46:28 compute-1 ceph-mgr[81282]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:46:28 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'insights'
Oct 02 11:46:28 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:28.666+0000 7f8f3746f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 02 11:46:28 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'iostat'
Oct 02 11:46:29 compute-1 ceph-mgr[81282]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:46:29 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'k8sevents'
Oct 02 11:46:29 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:29.175+0000 7f8f3746f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3491711820' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:29 compute-1 ceph-mon[80926]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/454705554' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:29 compute-1 ceph-mon[80926]: osdmap e21: 3 total, 2 up, 3 in
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct 02 11:46:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e22 _set_new_cache_sizes cache_size:1020054714 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:31 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'localpool'
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 02 11:46:31 compute-1 ceph-mon[80926]: pgmap v75: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:31 compute-1 ceph-mon[80926]: osdmap e22: 3 total, 2 up, 3 in
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:31 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'mds_autoscaler'
Oct 02 11:46:32 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'mirroring'
Oct 02 11:46:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct 02 11:46:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=13.599875450s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active pruub 66.031814575s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 23 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=13.599875450s) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown pruub 66.031814575s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:32 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'nfs'
Oct 02 11:46:32 compute-1 ceph-mon[80926]: pgmap v77: 6 pgs: 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:33 compute-1 ceph-mgr[81282]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:46:33 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'orchestrator'
Oct 02 11:46:33 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:33.106+0000 7f8f3746f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 02 11:46:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.9( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.8( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.6( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.7( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.4( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.2( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.5( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.3( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.10( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.12( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.11( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.14( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.15( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.16( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.13( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.18( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.19( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.17( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [0] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1762713421' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: osdmap e23: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:46:33 compute-1 ceph-mon[80926]: osdmap e24: 3 total, 2 up, 3 in
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 02 11:46:33 compute-1 ceph-mgr[81282]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:46:33 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'osd_perf_query'
Oct 02 11:46:33 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:33.820+0000 7f8f3746f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 02 11:46:33 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 02 11:46:34 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'osd_support'
Oct 02 11:46:34 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.113+0000 7f8f3746f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'pg_autoscaler'
Oct 02 11:46:34 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.376+0000 7f8f3746f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mon[80926]: pgmap v80: 69 pgs: 63 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1920783801' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:46:34 compute-1 ceph-mon[80926]: osdmap e25: 3 total, 2 up, 3 in
Oct 02 11:46:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:34 compute-1 ceph-mon[80926]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'progress'
Oct 02 11:46:34 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.678+0000 7f8f3746f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'prometheus'
Oct 02 11:46:34 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:34.955+0000 7f8f3746f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 02 11:46:34 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct 02 11:46:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:35 compute-1 ceph-mon[80926]: 2.1 scrub starts
Oct 02 11:46:35 compute-1 ceph-mon[80926]: 2.1 scrub ok
Oct 02 11:46:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 02 11:46:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 02 11:46:36 compute-1 ceph-mgr[81282]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:46:36 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'rbd_support'
Oct 02 11:46:36 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:36.031+0000 7f8f3746f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 02 11:46:36 compute-1 ceph-mgr[81282]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:46:36 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'restful'
Oct 02 11:46:36 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:36.404+0000 7f8f3746f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 02 11:46:36 compute-1 ceph-mon[80926]: 2.2 scrub starts
Oct 02 11:46:36 compute-1 ceph-mon[80926]: 2.2 scrub ok
Oct 02 11:46:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2059673187' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 02 11:46:36 compute-1 ceph-mon[80926]: osdmap e26: 3 total, 2 up, 3 in
Oct 02 11:46:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:36 compute-1 ceph-mon[80926]: pgmap v83: 131 pgs: 34 peering, 94 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct 02 11:46:37 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct 02 11:46:37 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'rgw'
Oct 02 11:46:37 compute-1 ceph-mon[80926]: 5.1 scrub starts
Oct 02 11:46:37 compute-1 ceph-mon[80926]: 5.1 scrub ok
Oct 02 11:46:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 02 11:46:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 02 11:46:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:37 compute-1 ceph-mon[80926]: Deploying daemon osd.2 on compute-2
Oct 02 11:46:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 02 11:46:37 compute-1 ceph-mgr[81282]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:46:37 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'rook'
Oct 02 11:46:37 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:37.946+0000 7f8f3746f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 02 11:46:38 compute-1 ceph-mon[80926]: 2.3 scrub starts
Oct 02 11:46:38 compute-1 ceph-mon[80926]: 2.3 scrub ok
Oct 02 11:46:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3992653650' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 02 11:46:38 compute-1 ceph-mon[80926]: osdmap e27: 3 total, 2 up, 3 in
Oct 02 11:46:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:38 compute-1 ceph-mon[80926]: pgmap v85: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:38 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 02 11:46:38 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 02 11:46:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 02 11:46:39 compute-1 ceph-mon[80926]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:46:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'selftest'
Oct 02 11:46:40 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.242+0000 7f8f3746f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'snap_schedule'
Oct 02 11:46:40 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.518+0000 7f8f3746f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'stats'
Oct 02 11:46:40 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:40.801+0000 7f8f3746f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 02 11:46:40 compute-1 ceph-mon[80926]: 2.4 scrub starts
Oct 02 11:46:40 compute-1 ceph-mon[80926]: 2.4 scrub ok
Oct 02 11:46:40 compute-1 ceph-mon[80926]: 3.1 scrub starts
Oct 02 11:46:40 compute-1 ceph-mon[80926]: 3.1 scrub ok
Oct 02 11:46:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/779281035' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 02 11:46:40 compute-1 ceph-mon[80926]: osdmap e28: 3 total, 2 up, 3 in
Oct 02 11:46:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:40 compute-1 ceph-mon[80926]: pgmap v87: 131 pgs: 34 peering, 62 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 02 11:46:41 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'status'
Oct 02 11:46:41 compute-1 ceph-mgr[81282]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:46:41 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'telegraf'
Oct 02 11:46:41 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:41.373+0000 7f8f3746f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 02 11:46:41 compute-1 ceph-mgr[81282]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:46:41 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'telemetry'
Oct 02 11:46:41 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:41.630+0000 7f8f3746f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 02 11:46:41 compute-1 ceph-mon[80926]: 5.2 scrub starts
Oct 02 11:46:41 compute-1 ceph-mon[80926]: 5.2 scrub ok
Oct 02 11:46:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 02 11:46:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1918843349' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 02 11:46:41 compute-1 ceph-mon[80926]: osdmap e29: 3 total, 2 up, 3 in
Oct 02 11:46:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 02 11:46:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 02 11:46:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=0/0 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=0/0 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092329025s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469169617s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092301369s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469169617s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092037201s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469055176s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.092014313s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469055176s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091887474s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.469039917s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091865540s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091481209s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468780518s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091461182s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091250420s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468742371s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091231346s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468742371s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091112137s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468704224s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091097832s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468704224s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091105461s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468803406s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.091085434s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468803406s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.090168953s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468727112s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089975357s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468574524s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.090146065s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468727112s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089949608s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468574524s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089907646s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468544006s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089863777s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468544006s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089564323s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468338013s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089577675s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468360901s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089547157s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468338013s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089554787s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468360901s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089651108s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468574524s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089611053s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468574524s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089719772s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468719482s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089684486s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468719482s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089271545s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 77.468376160s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:42 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.089249611s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468376160s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:42 compute-1 ceph-mgr[81282]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:46:42 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'test_orchestrator'
Oct 02 11:46:42 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:42.314+0000 7f8f3746f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 02 11:46:42 compute-1 ceph-mon[80926]: pgmap v89: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: 2.5 scrub starts
Oct 02 11:46:42 compute-1 ceph-mon[80926]: 2.5 scrub ok
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:46:42 compute-1 ceph-mon[80926]: osdmap e30: 3 total, 2 up, 3 in
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 02 11:46:42 compute-1 ceph-mon[80926]: Standby manager daemon compute-2.kvxdhw started
Oct 02 11:46:43 compute-1 ceph-mgr[81282]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:46:43 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'volumes'
Oct 02 11:46:43 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:43.040+0000 7f8f3746f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 02 11:46:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.18( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1c( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.18( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1a( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1b( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1c( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.c( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.f( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.e( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.5( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.3( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.7( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.2( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.5( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.d( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.a( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.c( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.9( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.f( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.16( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.10( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.13( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.15( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.14( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.13( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.11( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.16( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.10( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[5.1f( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[4.1b( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 31 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=24/24 les/c/f=26/26/0 sis=30) [0] r=0 lpr=30 pi=[24,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:46:43 compute-1 sudo[81318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-1 sudo[81318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-1 sudo[81318]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-1 sudo[81343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:46:43 compute-1 sudo[81343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-1 sudo[81343]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-1 ceph-mgr[81282]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:46:43 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:43.786+0000 7f8f3746f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 02 11:46:43 compute-1 ceph-mgr[81282]: mgr[py] Loading python module 'zabbix'
Oct 02 11:46:43 compute-1 sudo[81368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-1 sudo[81368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-1 sudo[81368]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-1 sudo[81393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:43 compute-1 sudo[81393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-1 sudo[81393]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-1 sudo[81418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:43 compute-1 sudo[81418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:43 compute-1 sudo[81418]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 02 11:46:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 02 11:46:43 compute-1 sudo[81443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:46:43 compute-1 sudo[81443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:44 compute-1 ceph-mgr[81282]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:46:44 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mgr-compute-1-wtokkj[81278]: 2025-10-02T11:46:44.068+0000 7f8f3746f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 02 11:46:44 compute-1 ceph-mgr[81282]: ms_deliver_dispatch: unhandled message 0x55bbe42e3600 mon_map magic: 0 v1 from mon.2 v2:192.168.122.101:3300/0
Oct 02 11:46:44 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1049687618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:46:44 compute-1 ceph-mon[80926]: osdmap e31: 3 total, 2 up, 3 in
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 02 11:46:44 compute-1 ceph-mon[80926]: mgrmap e10: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-2.kvxdhw", "id": "compute-2.kvxdhw"}]: dispatch
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:44 compute-1 ceph-mon[80926]: pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:44 compute-1 ceph-mon[80926]: Standby manager daemon compute-1.wtokkj started
Oct 02 11:46:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.894192696s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.469039917s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.894192696s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000248909s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575187683s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000205994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575164795s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000248909s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000205994s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000073433s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575111389s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=15.000073433s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893628120s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468780518s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893628120s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999844551s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575042725s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893508911s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468711853s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999844551s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999811172s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575042725s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999811172s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893508911s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999816895s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.575080872s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999816895s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999673843s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574996948s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999673843s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999487877s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574905396s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999487877s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893162727s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468612671s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893162727s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999329567s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574844360s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999329567s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999258995s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574821472s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999205589s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574790955s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999183655s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574775696s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999205589s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893005371s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468620300s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999258995s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.893005371s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999115944s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574775696s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999115944s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.887790680s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.463493347s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.887790680s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999016762s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active pruub 79.574752808s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999016762s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.892520905s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 77.468353271s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=12.892520905s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=32 pruub=14.999183655s) [] r=-1 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:44 compute-1 podman[81540]: 2025-10-02 11:46:44.387630802 +0000 UTC m=+0.053311662 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:46:44 compute-1 podman[81540]: 2025-10-02 11:46:44.482777623 +0000 UTC m=+0.148458463 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:46:44 compute-1 sudo[81443]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-1 sudo[81625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:44 compute-1 sudo[81625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:44 compute-1 sudo[81625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-1 sudo[81650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:46:44 compute-1 sudo[81650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:44 compute-1 sudo[81650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-1 sudo[81675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:44 compute-1 sudo[81675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:44 compute-1 sudo[81675]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:44 compute-1 sudo[81700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:46:44 compute-1 sudo[81700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:45 compute-1 sudo[81700]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:45 compute-1 ceph-mon[80926]: 2.7 scrub starts
Oct 02 11:46:45 compute-1 ceph-mon[80926]: 2.7 scrub ok
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Oct 02 11:46:45 compute-1 ceph-mon[80926]: osdmap e32: 3 total, 2 up, 3 in
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:45 compute-1 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:46:45 compute-1 ceph-mon[80926]: Cluster is now healthy
Oct 02 11:46:45 compute-1 ceph-mon[80926]: mgrmap e11: compute-0.unmtoh(active, since 2m), standbys: compute-2.kvxdhw, compute-1.wtokkj
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr metadata", "who": "compute-1.wtokkj", "id": "compute-1.wtokkj"}]: dispatch
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:47 compute-1 ceph-mon[80926]: purged_snaps scrub starts
Oct 02 11:46:47 compute-1 ceph-mon[80926]: purged_snaps scrub ok
Oct 02 11:46:47 compute-1 ceph-mon[80926]: 3.2 scrub starts
Oct 02 11:46:47 compute-1 ceph-mon[80926]: 3.2 scrub ok
Oct 02 11:46:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:47 compute-1 ceph-mon[80926]: pgmap v94: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 02 11:46:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3455455273' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 02 11:46:48 compute-1 ceph-mon[80926]: pgmap v95: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Oct 02 11:46:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e33 e33: 3 total, 3 up, 3 in
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664138794s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664088249s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575164795s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664095879s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.664053917s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575187683s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663953781s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663905144s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575111389s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557517052s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557495117s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468780518s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663733482s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663718224s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557359695s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663661003s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557341576s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468711853s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663645744s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575042725s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663664818s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663651466s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.575080872s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557121277s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.557109833s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468620300s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663317680s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556999207s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556984901s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468612671s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663281441s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574905396s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663154602s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663138390s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574844360s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663082123s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.663067818s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574821472s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662989616s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662947655s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662973404s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574790955s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662931442s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662914276s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=30/31 n=0 ec=24/15 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662919044s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574775696s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.551541328s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.551506996s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.463493347s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662742615s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=30/31 n=0 ec=25/19 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662725449s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574752808s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556281090s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556267738s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.468353271s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556897163s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=8.556874275s) [2] r=-1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.469039917s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662742615s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:46:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=30/31 n=0 ec=25/17 lis/c=30/30 les/c/f=31/31/0 sis=33 pruub=10.662691116s) [2] r=-1 lpr=33 pi=[30,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.574996948s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:46:48 compute-1 sudo[81756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:48 compute-1 sudo[81756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-1 sudo[81756]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-1 sudo[81781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 02 11:46:48 compute-1 sudo[81781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-1 sudo[81781]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-1 sudo[81806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:48 compute-1 sudo[81806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-1 sudo[81806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-1 sudo[81831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph
Oct 02 11:46:48 compute-1 sudo[81831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-1 sudo[81831]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-1 sudo[81856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:48 compute-1 sudo[81856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:48 compute-1 sudo[81856]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:48 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 02 11:46:48 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 02 11:46:49 compute-1 sudo[81881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-1 sudo[81881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[81881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[81906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[81906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[81906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[81931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:49 compute-1 sudo[81931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[81931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 ceph-mon[80926]: 5.3 scrub starts
Oct 02 11:46:49 compute-1 ceph-mon[80926]: 5.3 scrub ok
Oct 02 11:46:49 compute-1 ceph-mon[80926]: OSD bench result of 8243.165808 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:49 compute-1 ceph-mon[80926]: osd.2 [v2:192.168.122.102:6800/804192295,v1:192.168.122.102:6801/804192295] boot
Oct 02 11:46:49 compute-1 ceph-mon[80926]: osdmap e33: 3 total, 3 up, 3 in
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:46:49 compute-1 ceph-mon[80926]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct 02 11:46:49 compute-1 ceph-mon[80926]: Unable to set osd_memory_target on compute-2 to 134062899: error parsing value: Value '134062899' is below minimum 939524096
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:46:49 compute-1 ceph-mon[80926]: Updating compute-0:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-1 ceph-mon[80926]: Updating compute-1:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-1 ceph-mon[80926]: Updating compute-2:/etc/ceph/ceph.conf
Oct 02 11:46:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1687359328' entity='client.admin' 
Oct 02 11:46:49 compute-1 sudo[81956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[81956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[81956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[81981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-1 sudo[81981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[81981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82029]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-1 sudo[82054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82079]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new
Oct 02 11:46:49 compute-1 sudo[82104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82104]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82129]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 02 11:46:49 compute-1 sudo[82154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 02 11:46:49 compute-1 sudo[82179]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:46:49 compute-1 sudo[82204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82229]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config
Oct 02 11:46:49 compute-1 sudo[82254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82254]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82279]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 sudo[82304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:49 compute-1 sudo[82304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82304]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 02 11:46:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 02 11:46:49 compute-1 sudo[82329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:49 compute-1 sudo[82329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:49 compute-1 sudo[82329]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:46:50 compute-1 sudo[82354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82354]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-1 sudo[82379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82379]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 ceph-mon[80926]: 5.5 scrub starts
Oct 02 11:46:50 compute-1 ceph-mon[80926]: 5.5 scrub ok
Oct 02 11:46:50 compute-1 ceph-mon[80926]: 2.8 scrub starts
Oct 02 11:46:50 compute-1 ceph-mon[80926]: 2.8 scrub ok
Oct 02 11:46:50 compute-1 ceph-mon[80926]: Updating compute-1:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-1 ceph-mon[80926]: osdmap e34: 3 total, 3 up, 3 in
Oct 02 11:46:50 compute-1 ceph-mon[80926]: pgmap v98: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:50 compute-1 ceph-mon[80926]: Updating compute-0:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-1 ceph-mon[80926]: Updating compute-2:/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-1 sudo[82404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-1 sudo[82404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82404]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-1 sudo[82452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82452]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-1 sudo[82477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82477]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-1 sudo[82502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82502]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:50 compute-1 sudo[82527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new
Oct 02 11:46:50 compute-1 sudo[82527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:46:50 compute-1 sudo[82552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:50 compute-1 sudo[82577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-20fdc58c-b037-5094-a8ef-d490aa7c36f3/var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf.new /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/config/ceph.conf
Oct 02 11:46:50 compute-1 sudo[82577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:46:50 compute-1 sudo[82577]: pam_unix(sudo:session): session closed for user root
Oct 02 11:46:51 compute-1 ceph-mon[80926]: 5.6 scrub starts
Oct 02 11:46:51 compute-1 ceph-mon[80926]: 5.6 scrub ok
Oct 02 11:46:51 compute-1 ceph-mon[80926]: 2.11 scrub starts
Oct 02 11:46:51 compute-1 ceph-mon[80926]: 2.11 scrub ok
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='client.14301 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:51 compute-1 ceph-mon[80926]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: Saving service ingress.rgw.default spec with placement count:2
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:46:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:46:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:52 compute-1 ceph-mon[80926]: pgmap v99: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct 02 11:46:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct 02 11:46:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e2 new map
Oct 02 11:46:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:46:53.022725+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Oct 02 11:46:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 02 11:46:53 compute-1 ceph-mon[80926]: 3.4 scrub starts
Oct 02 11:46:53 compute-1 ceph-mon[80926]: 3.4 scrub ok
Oct 02 11:46:53 compute-1 ceph-mon[80926]: 4.15 scrub starts
Oct 02 11:46:53 compute-1 ceph-mon[80926]: 4.15 scrub ok
Oct 02 11:46:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 02 11:46:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 02 11:46:53 compute-1 ceph-mon[80926]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 02 11:46:53 compute-1 ceph-mon[80926]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 02 11:46:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 02 11:46:53 compute-1 ceph-mon[80926]: osdmap e35: 3 total, 3 up, 3 in
Oct 02 11:46:53 compute-1 ceph-mon[80926]: fsmap cephfs:0
Oct 02 11:46:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:54 compute-1 ceph-mon[80926]: 2.14 deep-scrub starts
Oct 02 11:46:54 compute-1 ceph-mon[80926]: from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:54 compute-1 ceph-mon[80926]: 2.14 deep-scrub ok
Oct 02 11:46:54 compute-1 ceph-mon[80926]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:54 compute-1 ceph-mon[80926]: 2.12 scrub starts
Oct 02 11:46:54 compute-1 ceph-mon[80926]: 2.12 scrub ok
Oct 02 11:46:54 compute-1 ceph-mon[80926]: pgmap v101: 131 pgs: 21 peering, 110 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:46:55 compute-1 ceph-mon[80926]: from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 11:46:55 compute-1 ceph-mon[80926]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 02 11:46:55 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 02 11:46:55 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 02 11:46:56 compute-1 ceph-mon[80926]: pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:57 compute-1 ceph-mon[80926]: 2.16 scrub starts
Oct 02 11:46:57 compute-1 ceph-mon[80926]: 2.16 scrub ok
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3621566955' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-1 ceph-mon[80926]: 3.11 scrub starts
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-1 ceph-mon[80926]: 3.11 scrub ok
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.tsbazp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:46:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:46:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Oct 02 11:46:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Oct 02 11:46:58 compute-1 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-2.tsbazp on compute-2
Oct 02 11:46:58 compute-1 ceph-mon[80926]: pgmap v103: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:46:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2363194234' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:47:00 compute-1 ceph-mon[80926]: 2.17 deep-scrub starts
Oct 02 11:47:00 compute-1 ceph-mon[80926]: 2.17 deep-scrub ok
Oct 02 11:47:00 compute-1 ceph-mon[80926]: 4.4 scrub starts
Oct 02 11:47:00 compute-1 ceph-mon[80926]: 4.4 scrub ok
Oct 02 11:47:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/991034300' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 11:47:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:01 compute-1 ceph-mon[80926]: pgmap v104: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:01 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 02 11:47:01 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 02 11:47:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 02 11:47:02 compute-1 ceph-mon[80926]: 3.6 scrub starts
Oct 02 11:47:02 compute-1 ceph-mon[80926]: 3.6 scrub ok
Oct 02 11:47:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2424414405' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 02 11:47:02 compute-1 ceph-mon[80926]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-1 sudo[82602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:03 compute-1 sudo[82602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:03 compute-1 sudo[82602]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:03 compute-1 sudo[82627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:03 compute-1 sudo[82627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:03 compute-1 sudo[82627]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:03 compute-1 sudo[82652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:03 compute-1 sudo[82652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:03 compute-1 sudo[82652]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:03 compute-1 sudo[82677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:03 compute-1 sudo[82677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:03 compute-1 ceph-mon[80926]: 5.a scrub starts
Oct 02 11:47:03 compute-1 ceph-mon[80926]: 5.a scrub ok
Oct 02 11:47:03 compute-1 ceph-mon[80926]: 2.1a scrub starts
Oct 02 11:47:03 compute-1 ceph-mon[80926]: 2.1a scrub ok
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:03 compute-1 ceph-mon[80926]: osdmap e36: 3 total, 3 up, 3 in
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vuotmz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.654183004 +0000 UTC m=+0.049386418 container create 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-1 systemd[71803]: Starting Mark boot as successful...
Oct 02 11:47:03 compute-1 systemd[71803]: Finished Mark boot as successful.
Oct 02 11:47:03 compute-1 systemd[1]: Started libpod-conmon-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope.
Oct 02 11:47:03 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.631393491 +0000 UTC m=+0.026596925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.755061786 +0000 UTC m=+0.150265210 container init 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.763706016 +0000 UTC m=+0.158909430 container start 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-1 competent_chatterjee[82761]: 167 167
Oct 02 11:47:03 compute-1 systemd[1]: libpod-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope: Deactivated successfully.
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.788560445 +0000 UTC m=+0.183763859 container attach 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.789853185 +0000 UTC m=+0.185056599 container died 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-99dab7f558861f108aa9b345ac9684482f011c3dcbaf049e2dbed9aca0231638-merged.mount: Deactivated successfully.
Oct 02 11:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 02 11:47:03 compute-1 podman[82744]: 2025-10-02 11:47:03.87939214 +0000 UTC m=+0.274595554 container remove 5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:47:03 compute-1 systemd[1]: libpod-conmon-5477e514696c8d5e6c9cb7c965c0a757712ab6687bce7a344664714d1f194e9e.scope: Deactivated successfully.
Oct 02 11:47:03 compute-1 systemd[1]: Reloading.
Oct 02 11:47:04 compute-1 systemd-rc-local-generator[82809]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:04 compute-1 systemd-sysv-generator[82813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:04 compute-1 systemd[1]: Reloading.
Oct 02 11:47:04 compute-1 systemd-rc-local-generator[82850]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:04 compute-1 systemd-sysv-generator[82854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:04 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.vuotmz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:04 compute-1 podman[82903]: 2025-10-02 11:47:04.695474287 +0000 UTC m=+0.051509545 container create c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct 02 11:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/936e291a49c060c70c1c5acf16dc5e215f60dea84411e5acace0a0826ac4aaf0/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.vuotmz supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:04 compute-1 podman[82903]: 2025-10-02 11:47:04.664365232 +0000 UTC m=+0.020400510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:04 compute-1 podman[82903]: 2025-10-02 11:47:04.761711892 +0000 UTC m=+0.117747160 container init c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:47:04 compute-1 podman[82903]: 2025-10-02 11:47:04.7661194 +0000 UTC m=+0.122154658 container start c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:04 compute-1 bash[82903]: c6de875cb4b89403624319953a0a449dcdc8f537a620fc12a16b1394cfc6bf32
Oct 02 11:47:04 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.vuotmz for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:04 compute-1 radosgw[82922]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:04 compute-1 radosgw[82922]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct 02 11:47:04 compute-1 radosgw[82922]: framework: beast
Oct 02 11:47:04 compute-1 radosgw[82922]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct 02 11:47:04 compute-1 radosgw[82922]: init_numa not setting numa affinity
Oct 02 11:47:04 compute-1 sudo[82677]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 02 11:47:04 compute-1 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-1.vuotmz on compute-1
Oct 02 11:47:04 compute-1 ceph-mon[80926]: from='client.14343 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:04 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 02 11:47:04 compute-1 ceph-mon[80926]: pgmap v107: 132 pgs: 1 unknown, 131 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:04 compute-1 ceph-mon[80926]: osdmap e37: 3 total, 3 up, 3 in
Oct 02 11:47:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct 02 11:47:04 compute-1 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 02 11:47:05 compute-1 ceph-mon[80926]: 3.7 deep-scrub starts
Oct 02 11:47:05 compute-1 ceph-mon[80926]: 3.7 deep-scrub ok
Oct 02 11:47:05 compute-1 ceph-mon[80926]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-1 ceph-mon[80926]: osdmap e38: 3 total, 3 up, 3 in
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.hlkvzi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:06 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct 02 11:47:06 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct 02 11:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 02 11:47:07 compute-1 ceph-mon[80926]: Deploying daemon rgw.rgw.compute-0.hlkvzi on compute-0
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:07 compute-1 ceph-mon[80926]: pgmap v110: 133 pgs: 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 1 op/s
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 02 11:47:07 compute-1 ceph-mon[80926]: osdmap e39: 3 total, 3 up, 3 in
Oct 02 11:47:07 compute-1 ceph-mon[80926]: 2.f scrub starts
Oct 02 11:47:07 compute-1 ceph-mon[80926]: 2.f scrub ok
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:07 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 40 pg[10.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct 02 11:47:07 compute-1 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:07 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 02 11:47:07 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 02 11:47:08 compute-1 ceph-mon[80926]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 02 11:47:08 compute-1 ceph-mon[80926]: 5.18 scrub starts
Oct 02 11:47:08 compute-1 ceph-mon[80926]: 5.18 scrub ok
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:08 compute-1 ceph-mon[80926]: Deploying daemon haproxy.rgw.default.compute-0.zhecum on compute-0
Oct 02 11:47:08 compute-1 ceph-mon[80926]: osdmap e40: 3 total, 3 up, 3 in
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4114646185' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1098657432' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: from='client.14364 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:08 compute-1 ceph-mon[80926]: pgmap v113: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 11:47:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 02 11:47:08 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 41 pg[10.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [0] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:08 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Oct 02 11:47:08 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Oct 02 11:47:09 compute-1 ceph-mon[80926]: 4.7 scrub starts
Oct 02 11:47:09 compute-1 ceph-mon[80926]: 4.7 scrub ok
Oct 02 11:47:09 compute-1 ceph-mon[80926]: 4.18 scrub starts
Oct 02 11:47:09 compute-1 ceph-mon[80926]: 4.18 scrub ok
Oct 02 11:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3375865598' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 02 11:47:09 compute-1 ceph-mon[80926]: osdmap e41: 3 total, 3 up, 3 in
Oct 02 11:47:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 02 11:47:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct 02 11:47:09 compute-1 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:10 compute-1 ceph-mon[80926]: 4.9 deep-scrub starts
Oct 02 11:47:10 compute-1 ceph-mon[80926]: 4.9 deep-scrub ok
Oct 02 11:47:10 compute-1 ceph-mon[80926]: 4.1b deep-scrub starts
Oct 02 11:47:10 compute-1 ceph-mon[80926]: 4.1b deep-scrub ok
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.14379 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: pgmap v115: 134 pgs: 1 unknown, 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.6 KiB/s rd, 828 B/s wr, 2 op/s
Oct 02 11:47:10 compute-1 ceph-mon[80926]: osdmap e42: 3 total, 3 up, 3 in
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 02 11:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 02 11:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct 02 11:47:10 compute-1 ceph-mon[80926]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Oct 02 11:47:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1b deep-scrub ok
Oct 02 11:47:11 compute-1 ceph-mon[80926]: 2.b scrub starts
Oct 02 11:47:11 compute-1 ceph-mon[80926]: 2.b scrub ok
Oct 02 11:47:11 compute-1 ceph-mon[80926]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:11 compute-1 ceph-mon[80926]: 4.1f scrub starts
Oct 02 11:47:11 compute-1 ceph-mon[80926]: 4.1f scrub ok
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 02 11:47:11 compute-1 ceph-mon[80926]: osdmap e43: 3 total, 3 up, 3 in
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2318512383' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3234087284' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 02 11:47:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 02 11:47:12 compute-1 radosgw[82922]: LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:47:12 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-rgw-rgw-compute-1-vuotmz[82918]: 2025-10-02T11:47:12.565+0000 7f932dfdc940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 02 11:47:12 compute-1 radosgw[82922]: framework: beast
Oct 02 11:47:12 compute-1 radosgw[82922]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 02 11:47:12 compute-1 radosgw[82922]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: starting handler: beast
Oct 02 11:47:12 compute-1 radosgw[82922]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:12 compute-1 radosgw[82922]: mgrc service_daemon_register rgw.24140 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.vuotmz,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864104,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=16ba9875-e611-4c67-897a-e19079014af6,zone_name=default,zonegroup_id=407d395c-624c-4136-be08-de285eb61d42,zonegroup_name=default}
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 02 11:47:12 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 11:47:13 compute-1 ceph-mon[80926]: 5.c scrub starts
Oct 02 11:47:13 compute-1 ceph-mon[80926]: 5.c scrub ok
Oct 02 11:47:13 compute-1 ceph-mon[80926]: 5.1b deep-scrub starts
Oct 02 11:47:13 compute-1 ceph-mon[80926]: 5.1b deep-scrub ok
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:13 compute-1 ceph-mon[80926]: pgmap v118: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 433 B/s wr, 4 op/s
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1331194290' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1791419250' entity='client.rgw.rgw.compute-0.hlkvzi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-1.vuotmz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-1 ceph-mon[80926]: from='client.? ' entity='client.rgw.rgw.compute-2.tsbazp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 02 11:47:13 compute-1 ceph-mon[80926]: osdmap e44: 3 total, 3 up, 3 in
Oct 02 11:47:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct 02 11:47:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct 02 11:47:14 compute-1 ceph-mon[80926]: Deploying daemon haproxy.rgw.default.compute-2.zptkij on compute-2
Oct 02 11:47:14 compute-1 ceph-mon[80926]: 3.15 scrub starts
Oct 02 11:47:14 compute-1 ceph-mon[80926]: 3.15 scrub ok
Oct 02 11:47:14 compute-1 ceph-mon[80926]: 4.b scrub starts
Oct 02 11:47:14 compute-1 ceph-mon[80926]: 4.b scrub ok
Oct 02 11:47:14 compute-1 ceph-mon[80926]: pgmap v120: 135 pgs: 1 creating+peering, 134 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 374 B/s wr, 3 op/s
Oct 02 11:47:14 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 02 11:47:14 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 02 11:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3169703598' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 02 11:47:15 compute-1 ceph-mon[80926]: 3.1c scrub starts
Oct 02 11:47:15 compute-1 ceph-mon[80926]: 3.1c scrub ok
Oct 02 11:47:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:16.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:16 compute-1 ceph-mon[80926]: 4.1 deep-scrub starts
Oct 02 11:47:16 compute-1 ceph-mon[80926]: 4.1 deep-scrub ok
Oct 02 11:47:16 compute-1 ceph-mon[80926]: pgmap v121: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 210 KiB/s rd, 5.7 KiB/s wr, 388 op/s
Oct 02 11:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1380703263' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 02 11:47:17 compute-1 ceph-mon[80926]: 3.b deep-scrub starts
Oct 02 11:47:17 compute-1 ceph-mon[80926]: 3.b deep-scrub ok
Oct 02 11:47:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:19 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 02 11:47:19 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 02 11:47:20 compute-1 ceph-mon[80926]: pgmap v122: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 159 KiB/s rd, 4.3 KiB/s wr, 294 op/s
Oct 02 11:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3145980468' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 02 11:47:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:20 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Oct 02 11:47:20 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 5.4 deep-scrub starts
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 5.4 deep-scrub ok
Oct 02 11:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:21 compute-1 ceph-mon[80926]: Deploying daemon keepalived.rgw.default.compute-2.emwnjv on compute-2
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 4.1a scrub starts
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 4.1a scrub ok
Oct 02 11:47:21 compute-1 ceph-mon[80926]: pgmap v123: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 140 KiB/s rd, 3.6 KiB/s wr, 259 op/s
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 4.c deep-scrub starts
Oct 02 11:47:21 compute-1 ceph-mon[80926]: 4.c deep-scrub ok
Oct 02 11:47:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:22 compute-1 ceph-mon[80926]: 5.14 scrub starts
Oct 02 11:47:22 compute-1 ceph-mon[80926]: 5.14 scrub ok
Oct 02 11:47:22 compute-1 ceph-mon[80926]: pgmap v124: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.2 KiB/s wr, 233 op/s
Oct 02 11:47:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:22.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:23 compute-1 ceph-mon[80926]: 4.f deep-scrub starts
Oct 02 11:47:23 compute-1 ceph-mon[80926]: 4.f deep-scrub ok
Oct 02 11:47:23 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 02 11:47:23 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 02 11:47:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:24.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:24 compute-1 ceph-mon[80926]: 3.1d scrub starts
Oct 02 11:47:24 compute-1 ceph-mon[80926]: 3.1d scrub ok
Oct 02 11:47:24 compute-1 ceph-mon[80926]: 5.1c scrub starts
Oct 02 11:47:24 compute-1 ceph-mon[80926]: 5.1c scrub ok
Oct 02 11:47:24 compute-1 ceph-mon[80926]: pgmap v125: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.8 KiB/s wr, 202 op/s
Oct 02 11:47:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:47:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:24.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:47:24 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 02 11:47:24 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 02 11:47:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 3.9 scrub starts
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 3.9 scrub ok
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 5.f scrub starts
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 5.f scrub ok
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 3.e scrub starts
Oct 02 11:47:26 compute-1 ceph-mon[80926]: 3.e scrub ok
Oct 02 11:47:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:26 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct 02 11:47:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:47:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:47:26 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 3.1a scrub starts
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 3.1a scrub ok
Oct 02 11:47:27 compute-1 ceph-mon[80926]: pgmap v126: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 105 KiB/s rd, 2.7 KiB/s wr, 194 op/s
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 4.10 scrub starts
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 4.10 scrub ok
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 4.e scrub starts
Oct 02 11:47:27 compute-1 ceph-mon[80926]: 4.e scrub ok
Oct 02 11:47:27 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 02 11:47:27 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 02 11:47:28 compute-1 ceph-mon[80926]: 2.1d scrub starts
Oct 02 11:47:28 compute-1 ceph-mon[80926]: 2.1d scrub ok
Oct 02 11:47:28 compute-1 ceph-mon[80926]: 3.5 scrub starts
Oct 02 11:47:28 compute-1 ceph-mon[80926]: 3.5 scrub ok
Oct 02 11:47:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-1 ceph-mon[80926]: pgmap v127: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct 02 11:47:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct 02 11:47:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:29 compute-1 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 02 11:47:29 compute-1 ceph-mon[80926]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 02 11:47:29 compute-1 ceph-mon[80926]: Deploying daemon keepalived.rgw.default.compute-0.nghmbz on compute-0
Oct 02 11:47:29 compute-1 ceph-mon[80926]: 3.3 scrub starts
Oct 02 11:47:29 compute-1 ceph-mon[80926]: 3.3 scrub ok
Oct 02 11:47:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:30.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:30 compute-1 ceph-mon[80926]: 2.18 deep-scrub starts
Oct 02 11:47:30 compute-1 ceph-mon[80926]: 2.18 deep-scrub ok
Oct 02 11:47:30 compute-1 ceph-mon[80926]: pgmap v128: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:30.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:31 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 02 11:47:31 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 02 11:47:31 compute-1 ceph-mon[80926]: 5.7 scrub starts
Oct 02 11:47:31 compute-1 ceph-mon[80926]: 5.7 scrub ok
Oct 02 11:47:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:32.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:33 compute-1 sudo[83536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:33 compute-1 sudo[83536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 sudo[83536]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-1 ceph-mon[80926]: pgmap v129: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 0 B/s wr, 2 op/s
Oct 02 11:47:33 compute-1 ceph-mon[80926]: 5.17 deep-scrub starts
Oct 02 11:47:33 compute-1 ceph-mon[80926]: 5.17 deep-scrub ok
Oct 02 11:47:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:33 compute-1 sudo[83561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:33 compute-1 sudo[83561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 sudo[83561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-1 sudo[83586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:33 compute-1 sudo[83586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 sudo[83586]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-1 sudo[83611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:33 compute-1 sudo[83611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 sudo[83611]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-1 sudo[83636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:33 compute-1 sudo[83636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 sudo[83636]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:33 compute-1 sudo[83661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:47:33 compute-1 sudo[83661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:33 compute-1 podman[83758]: 2025-10-02 11:47:33.968696081 +0000 UTC m=+0.138069976 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:34 compute-1 podman[83758]: 2025-10-02 11:47:34.09185799 +0000 UTC m=+0.261231885 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:34 compute-1 ceph-mon[80926]: 4.11 scrub starts
Oct 02 11:47:34 compute-1 ceph-mon[80926]: 4.11 scrub ok
Oct 02 11:47:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 02 11:47:34 compute-1 ceph-mon[80926]: pgmap v130: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 02 11:47:34 compute-1 sudo[83661]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:34.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:34 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 02 11:47:34 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 02 11:47:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.8 scrub starts
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.8 scrub ok
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.12 deep-scrub starts
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.12 deep-scrub ok
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 02 11:47:35 compute-1 ceph-mon[80926]: osdmap e45: 3 total, 3 up, 3 in
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.5 scrub starts
Oct 02 11:47:35 compute-1 ceph-mon[80926]: 4.5 scrub ok
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:36.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.836661339s) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 125.470329285s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.836661339s) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 125.470329285s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:36 compute-1 ceph-mon[80926]: 5.19 scrub starts
Oct 02 11:47:36 compute-1 ceph-mon[80926]: 5.19 scrub ok
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:36 compute-1 ceph-mon[80926]: osdmap e46: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:36 compute-1 ceph-mon[80926]: pgmap v133: 135 pgs: 135 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 02 11:47:36 compute-1 ceph-mon[80926]: osdmap e47: 3 total, 3 up, 3 in
Oct 02 11:47:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:36.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [0] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:37 compute-1 ceph-mon[80926]: 5.1d deep-scrub starts
Oct 02 11:47:37 compute-1 ceph-mon[80926]: 5.1d deep-scrub ok
Oct 02 11:47:37 compute-1 ceph-mon[80926]: 5.1a scrub starts
Oct 02 11:47:37 compute-1 ceph-mon[80926]: 5.1a scrub ok
Oct 02 11:47:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:38 compute-1 ceph-mon[80926]: osdmap e48: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-1 ceph-mon[80926]: pgmap v136: 181 pgs: 1 peering, 46 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:38 compute-1 ceph-mon[80926]: 2.1c scrub starts
Oct 02 11:47:38 compute-1 ceph-mon[80926]: 2.1c scrub ok
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:38 compute-1 ceph-mon[80926]: osdmap e49: 3 total, 3 up, 3 in
Oct 02 11:47:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 02 11:47:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 02 11:47:39 compute-1 ceph-mon[80926]: 4.16 scrub starts
Oct 02 11:47:39 compute-1 ceph-mon[80926]: 4.16 scrub ok
Oct 02 11:47:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 02 11:47:39 compute-1 ceph-mon[80926]: osdmap e50: 3 total, 3 up, 3 in
Oct 02 11:47:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:47:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:47:40 compute-1 ceph-mon[80926]: pgmap v139: 243 pgs: 1 peering, 108 unknown, 134 active+clean; 453 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:40 compute-1 ceph-mon[80926]: 3.12 deep-scrub starts
Oct 02 11:47:40 compute-1 ceph-mon[80926]: 3.12 deep-scrub ok
Oct 02 11:47:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 51 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=40/41 n=8 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51 pruub=14.794152260s) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 41'47 active pruub 136.695800781s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 51 pg[10.0( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51 pruub=14.794152260s) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 0'0 unknown pruub 136.695800781s@ mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.11( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.18( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.7( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.9( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.12( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.10( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.19( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.6( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.5( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.4( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.3( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.b( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.8( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.d( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.a( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.c( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.e( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.f( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.2( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.13( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.14( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.15( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.16( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.17( v 41'48 lc 0'0 (0'0,41'48] local-lis/les=40/41 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-mon[80926]: 5.e scrub starts
Oct 02 11:47:41 compute-1 ceph-mon[80926]: 5.e scrub ok
Oct 02 11:47:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 02 11:47:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:41 compute-1 ceph-mon[80926]: osdmap e51: 3 total, 3 up, 3 in
Oct 02 11:47:41 compute-1 ceph-mon[80926]: 5.1e scrub starts
Oct 02 11:47:41 compute-1 ceph-mon[80926]: 5.1e scrub ok
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.9( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1c( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1d( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.1a( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.6( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.3( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.d( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.a( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.c( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.0( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 41'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.14( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.15( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.16( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.7( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 52 pg[10.17( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=40/40 les/c/f=41/41/0 sis=51) [0] r=0 lpr=51 pi=[40,51)/1 crt=41'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:42 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 02 11:47:42 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 02 11:47:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:42 compute-1 ceph-mon[80926]: osdmap e52: 3 total, 3 up, 3 in
Oct 02 11:47:42 compute-1 ceph-mon[80926]: pgmap v142: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:42 compute-1 ceph-mon[80926]: 4.17 scrub starts
Oct 02 11:47:42 compute-1 ceph-mon[80926]: 4.17 scrub ok
Oct 02 11:47:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.dtavud", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:43 compute-1 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-2.dtavud on compute-2
Oct 02 11:47:43 compute-1 ceph-mon[80926]: 4.d scrub starts
Oct 02 11:47:43 compute-1 ceph-mon[80926]: 4.d scrub ok
Oct 02 11:47:43 compute-1 ceph-mon[80926]: 4.14 scrub starts
Oct 02 11:47:43 compute-1 ceph-mon[80926]: 4.14 scrub ok
Oct 02 11:47:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:44.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e3 new map
Oct 02 11:47:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:44.341938+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:47:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:44 compute-1 ceph-mon[80926]: pgmap v143: 305 pgs: 62 unknown, 243 active+clean; 453 KiB data, 102 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:47:44 compute-1 ceph-mon[80926]: 3.17 scrub starts
Oct 02 11:47:44 compute-1 ceph-mon[80926]: 3.17 scrub ok
Oct 02 11:47:44 compute-1 ceph-mon[80926]: daemon mds.cephfs.compute-2.dtavud assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 02 11:47:44 compute-1 ceph-mon[80926]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 02 11:47:44 compute-1 ceph-mon[80926]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 02 11:47:44 compute-1 ceph-mon[80926]: Cluster is now healthy
Oct 02 11:47:44 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:boot
Oct 02 11:47:44 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:creating}
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.dtavud"}]: dispatch
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-1 ceph-mon[80926]: daemon mds.cephfs.compute-2.dtavud is now active in filesystem cephfs as rank 0
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.yqiqns", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 02 11:47:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e4 new map
Oct 02 11:47:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Oct 02 11:47:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 02 11:47:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:45 compute-1 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-0.yqiqns on compute-0
Oct 02 11:47:45 compute-1 ceph-mon[80926]: 3.a scrub starts
Oct 02 11:47:45 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:45 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active}
Oct 02 11:47:45 compute-1 ceph-mon[80926]: 3.a scrub ok
Oct 02 11:47:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:46 compute-1 sudo[83865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:46 compute-1 sudo[83865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:46 compute-1 sudo[83865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-1 sudo[83890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:46 compute-1 sudo[83890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:46 compute-1 sudo[83890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-1 sudo[83915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:46 compute-1 sudo[83915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:46 compute-1 sudo[83915]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:46 compute-1 sudo[83940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:47:46 compute-1 sudo[83940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e5 new map
Oct 02 11:47:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e6 new map
Oct 02 11:47:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:45.438767+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:46 compute-1 ceph-mon[80926]: pgmap v144: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s wr, 6 op/s
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-1 ceph-mon[80926]: 2.15 scrub starts
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-1 ceph-mon[80926]: 2.15 scrub ok
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bhscyq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 02 11:47:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.106432789 +0000 UTC m=+0.038568358 container create a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:47:47 compute-1 systemd[1]: Started libpod-conmon-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope.
Oct 02 11:47:47 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.162633715 +0000 UTC m=+0.094769304 container init a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.169904998 +0000 UTC m=+0.102040577 container start a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.173451816 +0000 UTC m=+0.105587395 container attach a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:47 compute-1 upbeat_vaughan[84021]: 167 167
Oct 02 11:47:47 compute-1 systemd[1]: libpod-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope: Deactivated successfully.
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.174807628 +0000 UTC m=+0.106943227 container died a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.086928324 +0000 UTC m=+0.019063923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-e172b143675e0c87f20cfabedbb62eb2220fea5b6c653b814f1255bf8432ccfa-merged.mount: Deactivated successfully.
Oct 02 11:47:47 compute-1 podman[84005]: 2025-10-02 11:47:47.20992404 +0000 UTC m=+0.142059619 container remove a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_vaughan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:47 compute-1 systemd[1]: libpod-conmon-a614afd1b6468f66cca34b5a72512ecb158c2be12e621197397a9581e20ca9be.scope: Deactivated successfully.
Oct 02 11:47:47 compute-1 systemd[1]: Reloading.
Oct 02 11:47:47 compute-1 systemd-rc-local-generator[84067]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:47 compute-1 systemd-sysv-generator[84071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:47 compute-1 systemd[1]: Reloading.
Oct 02 11:47:47 compute-1 systemd-rc-local-generator[84107]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:47:47 compute-1 systemd-sysv-generator[84112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:47:47 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.bhscyq for 20fdc58c-b037-5094-a8ef-d490aa7c36f3...
Oct 02 11:47:48 compute-1 podman[84164]: 2025-10-02 11:47:48.037961224 +0000 UTC m=+0.038124316 container create 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 11:47:48 compute-1 ceph-mon[80926]: Deploying daemon mds.cephfs.compute-1.bhscyq on compute-1
Oct 02 11:47:48 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:boot
Oct 02 11:47:48 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.yqiqns"}]: dispatch
Oct 02 11:47:48 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 1 up:standby
Oct 02 11:47:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71d9589f15a69b55ea50ee11acdc5a36f32535426422bcc69c50f5b3942c848/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.bhscyq supports timestamps until 2038 (0x7fffffff)
Oct 02 11:47:48 compute-1 podman[84164]: 2025-10-02 11:47:48.113042777 +0000 UTC m=+0.113205879 container init 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct 02 11:47:48 compute-1 podman[84164]: 2025-10-02 11:47:48.020433649 +0000 UTC m=+0.020596761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:47:48 compute-1 podman[84164]: 2025-10-02 11:47:48.118436901 +0000 UTC m=+0.118599993 container start 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:47:48 compute-1 bash[84164]: 43616804aac506f055db4a47f2acfcbab5fac1f11a4a9f7cf32b72d97f3e223c
Oct 02 11:47:48 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.bhscyq for 20fdc58c-b037-5094-a8ef-d490aa7c36f3.
Oct 02 11:47:48 compute-1 ceph-mds[84183]: set uid:gid to 167:167 (ceph:ceph)
Oct 02 11:47:48 compute-1 ceph-mds[84183]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct 02 11:47:48 compute-1 ceph-mds[84183]: main not setting numa affinity
Oct 02 11:47:48 compute-1 ceph-mds[84183]: pidfile_write: ignore empty --pid-file
Oct 02 11:47:48 compute-1 ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-mds-cephfs-compute-1-bhscyq[84179]: starting mds.cephfs.compute-1.bhscyq at 
Oct 02 11:47:48 compute-1 sudo[83940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 02 11:47:48 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 6 from mon.2
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.19( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1d( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.7( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.5( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.8( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.17( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329748154s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848831177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329705238s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848831177s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.15( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548415184s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067703247s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.15( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548357964s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067703247s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.14( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548274994s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067687988s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.14( v 52'51 (0'0,52'51] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.548229218s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067687988s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329261780s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848815918s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.329158783s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848815918s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328984261s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848663330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547948837s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067672729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328922272s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848663330s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.13( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547911644s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067672729s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328830719s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848678589s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547788620s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067672729s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328777313s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848678589s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.2( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547757149s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067672729s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328403473s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848556519s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547414780s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067642212s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328350067s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848556519s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.f( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.547387123s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067642212s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328087807s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848464966s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328059196s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848464966s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327667236s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848220825s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546947479s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067520142s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327629089s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848220825s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.8( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546919823s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067520142s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328060150s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848815918s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.328017235s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848815918s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327173233s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848052979s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.3( v 52'51 (0'0,52'51] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546610832s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 52'50 active pruub 138.067504883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546586037s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067504883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.327144623s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848052979s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.3( v 52'51 (0'0,52'51] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546566963s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=52'49 lcod 52'50 mlcod 0'0 unknown NOTIFY pruub 138.067504883s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.4( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546521187s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067504883s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326650620s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847885132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326511383s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847793579s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546118736s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067413330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326592445s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847885132s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326425552s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847717285s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326475143s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847793579s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.5( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.546095848s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067413330s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326395035s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847717285s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545853615s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067321777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326023102s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847549438s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.19( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545831680s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067321777s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325993538s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847549438s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325842857s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847412109s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326825142s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847946167s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325815201s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847412109s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325885773s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847534180s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325865746s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847534180s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545518875s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067214966s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325510025s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847305298s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545374870s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067184448s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1e( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545417786s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067214966s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325470924s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847305298s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325381279s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847229004s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.10( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545350075s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067184448s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325360298s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847229004s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545221329s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.067184448s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.12( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.545179367s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.067184448s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325210571s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847229004s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325180054s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847229004s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544771194s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.066909790s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326684952s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.848831177s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1( v 41'48 (0'0,41'48] local-lis/les=51/52 n=1 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544737816s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.066909790s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.326654434s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.848831177s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.324884415s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active pruub 141.847106934s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544631004s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.066848755s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.18( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.544609070s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.066848755s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532390594s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.054718018s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.1b( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532342911s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.054718018s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532333374s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 active pruub 138.054794312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.324854851s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847106934s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[10.11( v 41'48 (0'0,41'48] local-lis/les=51/52 n=0 ec=51/40 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=9.532307625s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=41'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.054794312s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=13.325004578s) [1] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 141.847946167s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:47:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:47:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:47:48 compute-1 sudo[84202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:48 compute-1 sudo[84202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-1 sudo[84202]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-1 sudo[84227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:48 compute-1 sudo[84227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-1 sudo[84227]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-1 sudo[84252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:48 compute-1 sudo[84252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-1 sudo[84252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-1 sudo[84277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:48 compute-1 sudo[84277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-1 sudo[84277]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:48 compute-1 sudo[84302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:48 compute-1 sudo[84302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:48 compute-1 sudo[84302]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:49 compute-1 sudo[84327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:47:49 compute-1 sudo[84327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:49 compute-1 podman[84423]: 2025-10-02 11:47:49.468377193 +0000 UTC m=+0.055525617 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 11:47:49 compute-1 ceph-mon[80926]: pgmap v145: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:47:49 compute-1 ceph-mon[80926]: osdmap e53: 3 total, 3 up, 3 in
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e7 new map
Oct 02 11:47:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:49 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 7 from mon.2
Oct 02 11:47:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 02 11:47:49 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Monitors have assigned me to become a standby.
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.12( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.19( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1d( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.18( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.4( v 37'4 (0'0,37'4] local-lis/les=53/54 n=1 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.7( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.5( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.8( v 37'4 lc 0'0 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.17( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/42 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.14( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.10( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 54 pg[8.1b( v 37'4 (0'0,37'4] local-lis/les=53/54 n=0 ec=49/36 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=37'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:49 compute-1 podman[84423]: 2025-10-02 11:47:49.588560192 +0000 UTC m=+0.175708606 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:47:49 compute-1 sudo[84327]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-1 sudo[84548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:50 compute-1 sudo[84548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-1 sudo[84548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-1 sudo[84573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:47:50 compute-1 sudo[84573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-1 sudo[84573]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-1 sudo[84598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:50.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:50 compute-1 sudo[84598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-1 sudo[84598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:50 compute-1 sudo[84623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:47:50 compute-1 sudo[84623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:50 compute-1 ceph-mon[80926]: 5.12 scrub starts
Oct 02 11:47:50 compute-1 ceph-mon[80926]: 5.12 scrub ok
Oct 02 11:47:50 compute-1 ceph-mon[80926]: 5.8 scrub starts
Oct 02 11:47:50 compute-1 ceph-mon[80926]: 5.8 scrub ok
Oct 02 11:47:50 compute-1 ceph-mon[80926]: osdmap e54: 3 total, 3 up, 3 in
Oct 02 11:47:50 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:boot
Oct 02 11:47:50 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] up:active
Oct 02 11:47:50 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bhscyq"}]: dispatch
Oct 02 11:47:50 compute-1 ceph-mon[80926]: pgmap v148: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s
Oct 02 11:47:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:50 compute-1 sudo[84623]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e8 new map
Oct 02 11:47:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] up:standby
Oct 02 11:47:51 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:47:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:52 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 02 11:47:52 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 02 11:47:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:52 compute-1 ceph-mon[80926]: pgmap v149: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s wr, 5 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:52 compute-1 ceph-mon[80926]: 4.1e scrub starts
Oct 02 11:47:52 compute-1 ceph-mon[80926]: 4.1e scrub ok
Oct 02 11:47:52 compute-1 ceph-mon[80926]: 2.10 deep-scrub starts
Oct 02 11:47:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e9 new map
Oct 02 11:47:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-02T11:46:53.022688+0000
                                           modified        2025-10-02T11:47:49.237571+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=24154}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           [mds.cephfs.compute-2.dtavud{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2867674520,v1:192.168.122.102:6805/2867674520] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.yqiqns{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1663007594,v1:192.168.122.100:6807/1663007594] compat {c=[1],r=[1],i=[7ff]}]
                                           [mds.cephfs.compute-1.bhscyq{-1:24155} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] compat {c=[1],r=[1],i=[7ff]}]
Oct 02 11:47:52 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Updating MDS map to version 9 from mon.2
Oct 02 11:47:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.c deep-scrub starts
Oct 02 11:47:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.c deep-scrub ok
Oct 02 11:47:53 compute-1 ceph-mon[80926]: 3.d scrub starts
Oct 02 11:47:53 compute-1 ceph-mon[80926]: 3.d scrub ok
Oct 02 11:47:53 compute-1 ceph-mon[80926]: 2.10 deep-scrub ok
Oct 02 11:47:53 compute-1 ceph-mon[80926]: mds.? [v2:192.168.122.101:6804/3876031050,v1:192.168.122.101:6805/3876031050] up:standby
Oct 02 11:47:53 compute-1 ceph-mon[80926]: fsmap cephfs:1 {0=cephfs.compute-2.dtavud=up:active} 2 up:standby
Oct 02 11:47:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:54 compute-1 ceph-mon[80926]: 3.c deep-scrub starts
Oct 02 11:47:54 compute-1 ceph-mon[80926]: 3.c deep-scrub ok
Oct 02 11:47:54 compute-1 ceph-mon[80926]: pgmap v150: 305 pgs: 46 peering, 259 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 109 B/s, 0 objects/s recovering
Oct 02 11:47:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:47:55 compute-1 ceph-mon[80926]: 2.c scrub starts
Oct 02 11:47:55 compute-1 ceph-mon[80926]: 2.c scrub ok
Oct 02 11:47:55 compute-1 ceph-mon[80926]: 3.18 scrub starts
Oct 02 11:47:55 compute-1 ceph-mon[80926]: 3.18 scrub ok
Oct 02 11:47:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 02 11:47:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:47:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:47:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 02 11:47:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 02 11:47:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:47:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:56.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:47:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 02 11:47:56 compute-1 ceph-mon[80926]: 5.b scrub starts
Oct 02 11:47:56 compute-1 ceph-mon[80926]: 5.b scrub ok
Oct 02 11:47:56 compute-1 ceph-mon[80926]: pgmap v151: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 220 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:56 compute-1 ceph-mon[80926]: 4.a scrub starts
Oct 02 11:47:56 compute-1 ceph-mon[80926]: 4.a scrub ok
Oct 02 11:47:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 02 11:47:56 compute-1 ceph-mon[80926]: osdmap e55: 3 total, 3 up, 3 in
Oct 02 11:47:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts
Oct 02 11:47:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok
Oct 02 11:47:57 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:57 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:57 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:57 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:58 compute-1 ceph-mon[80926]: 5.9 deep-scrub starts
Oct 02 11:47:58 compute-1 ceph-mon[80926]: 5.9 deep-scrub ok
Oct 02 11:47:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 02 11:47:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:47:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:47:58.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:47:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.2( empty local-lis/les=55/56 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.e( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.a( v 52'1 (0'0,52'1] local-lis/les=55/56 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=52'1 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:58 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 56 pg[6.6( v 53'1 lc 0'0 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=55) [0] r=0 lpr=55 pi=[47,55)/1 crt=53'1 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:47:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:47:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:47:58.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:47:58 compute-1 sudo[84680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:47:58 compute-1 sudo[84680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-1 sudo[84680]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:58 compute-1 sudo[84705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:47:58 compute-1 sudo[84705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:47:58 compute-1 sudo[84705]: pam_unix(sudo:session): session closed for user root
Oct 02 11:47:59 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 02 11:47:59 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 02 11:47:59 compute-1 ceph-mon[80926]: pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s; 212 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 02 11:47:59 compute-1 ceph-mon[80926]: osdmap e56: 3 total, 3 up, 3 in
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:47:59 compute-1 ceph-mon[80926]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:47:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:47:59 compute-1 ceph-mon[80926]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 02 11:47:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 02 11:47:59 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.3( v 52'2 lc 0'0 (0'0,52'2] local-lis/les=56/57 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'2 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:59 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.f( v 52'5 lc 52'1 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'5 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:59 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.7( v 52'2 lc 52'1 (0'0,52'2] local-lis/les=56/57 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'2 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:47:59 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 57 pg[6.b( v 52'3 lc 0'0 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=56) [0] r=0 lpr=56 pi=[53,56)/1 crt=52'3 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:00 compute-1 ceph-mon[80926]: 3.f scrub starts
Oct 02 11:48:00 compute-1 ceph-mon[80926]: 3.f scrub ok
Oct 02 11:48:00 compute-1 ceph-mon[80926]: osdmap e57: 3 total, 3 up, 3 in
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-1 ceph-mon[80926]: Reconfiguring mgr.compute-0.unmtoh (monmap changed)...
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.unmtoh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:00 compute-1 ceph-mon[80926]: Reconfiguring daemon mgr.compute-0.unmtoh on compute-0
Oct 02 11:48:00 compute-1 ceph-mon[80926]: pgmap v156: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 148 B/s, 1 keys/s, 2 objects/s recovering
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 02 11:48:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:01 compute-1 ceph-mon[80926]: 3.0 scrub starts
Oct 02 11:48:01 compute-1 ceph-mon[80926]: 3.0 scrub ok
Oct 02 11:48:01 compute-1 ceph-mon[80926]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 02 11:48:01 compute-1 ceph-mon[80926]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 02 11:48:01 compute-1 ceph-mon[80926]: osdmap e58: 3 total, 3 up, 3 in
Oct 02 11:48:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 02 11:48:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-1 sudo[84730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:02 compute-1 sudo[84730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:02 compute-1 sudo[84755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84755]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:02 compute-1 sudo[84780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84780]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:48:02 compute-1 sudo[84805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:02.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.41030056 +0000 UTC m=+0.036072092 container create c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 11:48:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-1 ceph-mon[80926]: Reconfiguring osd.1 (monmap changed)...
Oct 02 11:48:02 compute-1 ceph-mon[80926]: Reconfiguring daemon osd.1 on compute-0
Oct 02 11:48:02 compute-1 ceph-mon[80926]: osdmap e59: 3 total, 3 up, 3 in
Oct 02 11:48:02 compute-1 ceph-mon[80926]: pgmap v159: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 38/213 objects misplaced (17.840%); 186 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:02 compute-1 ceph-mon[80926]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 02 11:48:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 02 11:48:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:02 compute-1 ceph-mon[80926]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 02 11:48:02 compute-1 systemd[1]: Started libpod-conmon-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope.
Oct 02 11:48:02 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.395016473 +0000 UTC m=+0.020788035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.493067918 +0000 UTC m=+0.118839470 container init c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.50003948 +0000 UTC m=+0.125811012 container start c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.503089963 +0000 UTC m=+0.128861525 container attach c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 11:48:02 compute-1 exciting_kirch[84862]: 167 167
Oct 02 11:48:02 compute-1 systemd[1]: libpod-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope: Deactivated successfully.
Oct 02 11:48:02 compute-1 conmon[84862]: conmon c8414b0e69d120cf0e8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope/container/memory.events
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.5059402 +0000 UTC m=+0.131711732 container died c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-dee8b04d7bc5a89e524340770c81017d6918c092cda763fd8ec935dcb90901f3-merged.mount: Deactivated successfully.
Oct 02 11:48:02 compute-1 podman[84846]: 2025-10-02 11:48:02.545030744 +0000 UTC m=+0.170802276 container remove c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:02.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:02 compute-1 systemd[1]: libpod-conmon-c8414b0e69d120cf0e8eb67cd557ef17337e7d535367181b51eb14a5765ff145.scope: Deactivated successfully.
Oct 02 11:48:02 compute-1 sudo[84805]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:02 compute-1 sudo[84881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:02 compute-1 sudo[84906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:02 compute-1 sudo[84931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:02 compute-1 sudo[84931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:02 compute-1 sudo[84956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:48:02 compute-1 sudo[84956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.145382156 +0000 UTC m=+0.036703942 container create ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 11:48:03 compute-1 systemd[1]: Started libpod-conmon-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope.
Oct 02 11:48:03 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.198730325 +0000 UTC m=+0.090052131 container init ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.203431599 +0000 UTC m=+0.094753385 container start ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.20642491 +0000 UTC m=+0.097746696 container attach ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 02 11:48:03 compute-1 dreamy_booth[85015]: 167 167
Oct 02 11:48:03 compute-1 systemd[1]: libpod-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope: Deactivated successfully.
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.207795122 +0000 UTC m=+0.099116908 container died ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.129297215 +0000 UTC m=+0.020619021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-c2513a7702a38d7e0b3e753ba39b3d2ef01fe13587df930adde07c647000c78e-merged.mount: Deactivated successfully.
Oct 02 11:48:03 compute-1 podman[84998]: 2025-10-02 11:48:03.24736865 +0000 UTC m=+0.138690436 container remove ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct 02 11:48:03 compute-1 systemd[1]: libpod-conmon-ac5ba940e996dd87e2132dad09af9afc6ce2953c445e80b0c3237a96ff4dcf96.scope: Deactivated successfully.
Oct 02 11:48:03 compute-1 sudo[84956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-1 ceph-mon[80926]: osdmap e60: 3 total, 3 up, 3 in
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-1 ceph-mon[80926]: Reconfiguring osd.0 (monmap changed)...
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:03 compute-1 ceph-mon[80926]: Reconfiguring daemon osd.0 on compute-1
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:03 compute-1 sudo[85042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:03 compute-1 sudo[85042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-1 sudo[85042]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-1 sudo[85067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:03 compute-1 sudo[85067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-1 sudo[85067]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-1 sudo[85092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:03 compute-1 sudo[85092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-1 sudo[85092]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:03 compute-1 sudo[85117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3
Oct 02 11:48:03 compute-1 sudo[85117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.86301592 +0000 UTC m=+0.034652750 container create 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:48:03 compute-1 systemd[1]: Started libpod-conmon-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope.
Oct 02 11:48:03 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.930470959 +0000 UTC m=+0.102107899 container init 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.937865746 +0000 UTC m=+0.109502576 container start 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct 02 11:48:03 compute-1 jolly_saha[85175]: 167 167
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.942838017 +0000 UTC m=+0.114474877 container attach 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.846237598 +0000 UTC m=+0.017874448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.944121367 +0000 UTC m=+0.115758197 container died 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:48:03 compute-1 systemd[1]: libpod-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope: Deactivated successfully.
Oct 02 11:48:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-391ae3521f57565f9a1007637c7bb2c0736f9f8d294347b04a392e73e9a5daa2-merged.mount: Deactivated successfully.
Oct 02 11:48:03 compute-1 podman[85159]: 2025-10-02 11:48:03.979901249 +0000 UTC m=+0.151538079 container remove 7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_saha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:03 compute-1 systemd[1]: libpod-conmon-7a7c9cce145052aab823a88af6a2cde47765884c00925c922a7b911e0bdbcb16.scope: Deactivated successfully.
Oct 02 11:48:04 compute-1 sudo[85117]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:04 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 02 11:48:04 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 02 11:48:04 compute-1 ceph-mon[80926]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 02 11:48:04 compute-1 ceph-mon[80926]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 02 11:48:04 compute-1 ceph-mon[80926]: pgmap v161: 305 pgs: 1 active+recovering+remapped, 7 active+recovery_wait+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 KiB/s wr, 130 op/s; 38/213 objects misplaced (17.840%); 165 B/s, 2 keys/s, 2 objects/s recovering
Oct 02 11:48:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 02 11:48:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 02 11:48:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:04 compute-1 sudo[85193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:04 compute-1 sudo[85193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-1 sudo[85193]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-1 sudo[85218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:48:04 compute-1 sudo[85218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-1 sudo[85218]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:04 compute-1 sudo[85243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:04 compute-1 sudo[85243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:04 compute-1 sudo[85243]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:05 compute-1 sudo[85268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:48:05 compute-1 sudo[85268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:05 compute-1 podman[85364]: 2025-10-02 11:48:05.443608944 +0000 UTC m=+0.054151805 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 11:48:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:05 compute-1 ceph-mon[80926]: 2.a scrub starts
Oct 02 11:48:05 compute-1 ceph-mon[80926]: 2.a scrub ok
Oct 02 11:48:05 compute-1 ceph-mon[80926]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 02 11:48:05 compute-1 ceph-mon[80926]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 02 11:48:05 compute-1 ceph-mon[80926]: 5.16 scrub starts
Oct 02 11:48:05 compute-1 ceph-mon[80926]: 5.16 scrub ok
Oct 02 11:48:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:05 compute-1 podman[85364]: 2025-10-02 11:48:05.558683647 +0000 UTC m=+0.169226508 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 02 11:48:05 compute-1 sudo[85268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:06 compute-1 sshd-session[70550]: Received disconnect from 38.129.56.116 port 36088:11: disconnected by user
Oct 02 11:48:06 compute-1 sshd-session[70550]: Disconnected from user zuul 38.129.56.116 port 36088
Oct 02 11:48:06 compute-1 sshd-session[70547]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:48:06 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Oct 02 11:48:06 compute-1 systemd[1]: session-20.scope: Consumed 7.997s CPU time.
Oct 02 11:48:06 compute-1 systemd-logind[795]: Session 20 logged out. Waiting for processes to exit.
Oct 02 11:48:06 compute-1 systemd-logind[795]: Removed session 20.
Oct 02 11:48:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:06 compute-1 ceph-mon[80926]: pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 1.7 KiB/s wr, 100 op/s; 307 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-1 ceph-mon[80926]: 5.d deep-scrub starts
Oct 02 11:48:08 compute-1 ceph-mon[80926]: 5.d deep-scrub ok
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 02 11:48:08 compute-1 ceph-mon[80926]: 3.19 scrub starts
Oct 02 11:48:08 compute-1 ceph-mon[80926]: osdmap e61: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-1 ceph-mon[80926]: 3.19 scrub ok
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: pgmap v164: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 317 B/s wr, 2 op/s; 269 B/s, 1 keys/s, 7 objects/s recovering
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 02 11:48:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:08.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 02 11:48:08 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:08 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:09 compute-1 ceph-mon[80926]: 2.13 scrub starts
Oct 02 11:48:09 compute-1 ceph-mon[80926]: 2.13 scrub ok
Oct 02 11:48:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 02 11:48:09 compute-1 ceph-mon[80926]: osdmap e62: 3 total, 3 up, 3 in
Oct 02 11:48:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 02 11:48:09 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 63 pg[6.5( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=62/63 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:09 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 63 pg[6.d( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=62/63 n=2 ec=47/21 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:10 compute-1 ceph-mon[80926]: osdmap e63: 3 total, 3 up, 3 in
Oct 02 11:48:10 compute-1 ceph-mon[80926]: pgmap v167: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 183 B/s, 6 objects/s recovering
Oct 02 11:48:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 02 11:48:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 02 11:48:11 compute-1 ceph-mon[80926]: osdmap e64: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:12.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:12 compute-1 ceph-mon[80926]: osdmap e65: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-1 ceph-mon[80926]: pgmap v170: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 0 objects/s recovering
Oct 02 11:48:12 compute-1 ceph-mon[80926]: 3.1e scrub starts
Oct 02 11:48:12 compute-1 ceph-mon[80926]: 3.1e scrub ok
Oct 02 11:48:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 02 11:48:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:12.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:13 compute-1 ceph-mon[80926]: osdmap e66: 3 total, 3 up, 3 in
Oct 02 11:48:14 compute-1 sudo[85486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:48:14 compute-1 sudo[85486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:14 compute-1 sudo[85486]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-1 sudo[85511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:48:14 compute-1 sudo[85511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:48:14 compute-1 sudo[85511]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:14 compute-1 ceph-mon[80926]: pgmap v172: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 0 objects/s recovering
Oct 02 11:48:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:48:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 02 11:48:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=67) [0] r=0 lpr=67 pi=[49,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278901100s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=53'1 mlcod 53'1 active pruub 170.618331909s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278839111s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=52'3 mlcod 52'3 active pruub 170.618286133s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.e( v 52'3 (0'0,52'3] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278769493s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 170.618286133s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 67 pg[6.6( v 53'1 (0'0,53'1] local-lis/les=55/56 n=1 ec=47/21 lis/c=55/55 les/c/f=56/56/0 sis=67 pruub=14.278729439s) [1] r=-1 lpr=67 pi=[55,67)/1 crt=53'1 mlcod 0'0 unknown NOTIFY pruub 170.618331909s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:16 compute-1 ceph-mon[80926]: 5.0 scrub starts
Oct 02 11:48:16 compute-1 ceph-mon[80926]: 5.0 scrub ok
Oct 02 11:48:16 compute-1 ceph-mon[80926]: pgmap v173: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 682 B/s wr, 48 op/s; 125 B/s, 4 objects/s recovering
Oct 02 11:48:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 02 11:48:16 compute-1 ceph-mon[80926]: osdmap e67: 3 total, 3 up, 3 in
Oct 02 11:48:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:17 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 68 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[49,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 02 11:48:18 compute-1 ceph-mon[80926]: 4.3 scrub starts
Oct 02 11:48:18 compute-1 ceph-mon[80926]: 4.3 scrub ok
Oct 02 11:48:18 compute-1 ceph-mon[80926]: osdmap e68: 3 total, 3 up, 3 in
Oct 02 11:48:18 compute-1 ceph-mon[80926]: pgmap v176: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 0 op/s; 98 B/s, 3 objects/s recovering
Oct 02 11:48:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:19 compute-1 ceph-mon[80926]: osdmap e69: 3 total, 3 up, 3 in
Oct 02 11:48:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 70 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:20 compute-1 ceph-mon[80926]: 4.6 scrub starts
Oct 02 11:48:20 compute-1 ceph-mon[80926]: 4.6 scrub ok
Oct 02 11:48:20 compute-1 ceph-mon[80926]: osdmap e70: 3 total, 3 up, 3 in
Oct 02 11:48:20 compute-1 ceph-mon[80926]: pgmap v179: 305 pgs: 4 unknown, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:48:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:20.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:48:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 02 11:48:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.6( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=6 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 71 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=68/49 les/c/f=69/50/0 sis=70) [0] r=0 lpr=70 pi=[49,70)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:21 compute-1 ceph-mon[80926]: osdmap e71: 3 total, 3 up, 3 in
Oct 02 11:48:21 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 02 11:48:21 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 02 11:48:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 2.1b scrub starts
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 2.1b scrub ok
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 3.1f scrub starts
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 3.1f scrub ok
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 3.13 scrub starts
Oct 02 11:48:22 compute-1 ceph-mon[80926]: 3.13 scrub ok
Oct 02 11:48:22 compute-1 ceph-mon[80926]: pgmap v181: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 848 B/s wr, 60 op/s; 45 B/s, 4 objects/s recovering
Oct 02 11:48:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 02 11:48:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:22.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:23 compute-1 ceph-mon[80926]: 2.19 deep-scrub starts
Oct 02 11:48:23 compute-1 ceph-mon[80926]: 2.19 deep-scrub ok
Oct 02 11:48:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 02 11:48:23 compute-1 ceph-mon[80926]: osdmap e72: 3 total, 3 up, 3 in
Oct 02 11:48:23 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 02 11:48:23 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 02 11:48:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 02 11:48:24 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 73 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73) [0] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:24 compute-1 ceph-mon[80926]: 5.13 scrub starts
Oct 02 11:48:24 compute-1 ceph-mon[80926]: 5.13 scrub ok
Oct 02 11:48:24 compute-1 ceph-mon[80926]: 3.10 scrub starts
Oct 02 11:48:24 compute-1 ceph-mon[80926]: 3.10 scrub ok
Oct 02 11:48:24 compute-1 ceph-mon[80926]: pgmap v183: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 708 B/s wr, 50 op/s; 38 B/s, 3 objects/s recovering
Oct 02 11:48:24 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:24 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 02 11:48:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:24.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:25 compute-1 sshd-session[85536]: Accepted publickey for zuul from 192.168.122.30 port 39100 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:48:25 compute-1 systemd-logind[795]: New session 34 of user zuul.
Oct 02 11:48:25 compute-1 systemd[1]: Started Session 34 of User zuul.
Oct 02 11:48:25 compute-1 sshd-session[85536]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:48:25 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Oct 02 11:48:25 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Oct 02 11:48:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 74 pg[6.8( empty local-lis/les=73/74 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=73) [0] r=0 lpr=73 pi=[47,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 02 11:48:25 compute-1 ceph-mon[80926]: osdmap e73: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-1 ceph-mon[80926]: osdmap e74: 3 total, 3 up, 3 in
Oct 02 11:48:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:26 compute-1 python3.9[85689]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:48:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:26.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 02 11:48:26 compute-1 ceph-mon[80926]: 3.14 deep-scrub starts
Oct 02 11:48:26 compute-1 ceph-mon[80926]: 3.14 deep-scrub ok
Oct 02 11:48:26 compute-1 ceph-mon[80926]: pgmap v186: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 732 B/s wr, 52 op/s; 39 B/s, 3 objects/s recovering
Oct 02 11:48:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 02 11:48:26 compute-1 ceph-mon[80926]: 2.e scrub starts
Oct 02 11:48:26 compute-1 ceph-mon[80926]: 2.e scrub ok
Oct 02 11:48:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:26.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:27 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 02 11:48:27 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 02 11:48:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-1 ceph-mon[80926]: 4.1c scrub starts
Oct 02 11:48:27 compute-1 ceph-mon[80926]: 4.1c scrub ok
Oct 02 11:48:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 02 11:48:27 compute-1 ceph-mon[80926]: osdmap e75: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-1 ceph-mon[80926]: osdmap e76: 3 total, 3 up, 3 in
Oct 02 11:48:27 compute-1 sudo[85901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fecwiybkclmirjephaoskqhvjxmgisnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405707.337313-62-12141933334667/AnsiballZ_command.py'
Oct 02 11:48:27 compute-1 sudo[85901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:27 compute-1 python3.9[85903]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:48:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:28.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:48:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 02 11:48:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 3314933000854323200, adjusting msgr requires
Oct 02 11:48:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e77 crush map has features 432629239337189376, adjusting msgr requires
Oct 02 11:48:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:28.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 77 crush map has features 432629239337189376, adjusting msgr requires for clients
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 77 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 77 crush map has features 3314933000854323200, adjusting msgr requires for osds
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77) [0] r=0 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=77) [0] r=0 lpr=77 pi=[49,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 77 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=77) [0] r=0 lpr=77 pi=[59,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:28 compute-1 ceph-mon[80926]: 5.15 scrub starts
Oct 02 11:48:28 compute-1 ceph-mon[80926]: 5.15 scrub ok
Oct 02 11:48:28 compute-1 ceph-mon[80926]: pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 02 11:48:28 compute-1 ceph-mon[80926]: 2.6 scrub starts
Oct 02 11:48:28 compute-1 ceph-mon[80926]: 2.6 scrub ok
Oct 02 11:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]: dispatch
Oct 02 11:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]: dispatch
Oct 02 11:48:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 02 11:48:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 02 11:48:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=59/59 les/c/f=60/60/0 sis=78) [0]/[2] r=-1 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 78 pg[9.a( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=78) [0]/[1] r=-1 lpr=78 pi=[49,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 02 11:48:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.f", "id": [2, 0]}]': finished
Oct 02 11:48:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1f", "id": [2, 0]}]': finished
Oct 02 11:48:29 compute-1 ceph-mon[80926]: osdmap e77: 3 total, 3 up, 3 in
Oct 02 11:48:29 compute-1 ceph-mon[80926]: osdmap e78: 3 total, 3 up, 3 in
Oct 02 11:48:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:30.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 02 11:48:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:30 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 79 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=8.777896881s) [1] r=-1 lpr=79 pi=[56,79)/1 crt=52'3 mlcod 52'3 active pruub 179.649734497s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:30 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 79 pg[6.b( v 52'3 (0'0,52'3] local-lis/les=56/57 n=1 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=79 pruub=8.777853012s) [1] r=-1 lpr=79 pi=[56,79)/1 crt=52'3 mlcod 0'0 unknown NOTIFY pruub 179.649734497s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:30 compute-1 ceph-mon[80926]: 5.11 scrub starts
Oct 02 11:48:30 compute-1 ceph-mon[80926]: 5.11 scrub ok
Oct 02 11:48:30 compute-1 ceph-mon[80926]: pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 02 11:48:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 02 11:48:30 compute-1 ceph-mon[80926]: osdmap e79: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:31 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 80 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:31 compute-1 ceph-mon[80926]: 4.2 scrub starts
Oct 02 11:48:31 compute-1 ceph-mon[80926]: 2.1e scrub starts
Oct 02 11:48:31 compute-1 ceph-mon[80926]: 4.2 scrub ok
Oct 02 11:48:31 compute-1 ceph-mon[80926]: 2.1e scrub ok
Oct 02 11:48:31 compute-1 ceph-mon[80926]: osdmap e80: 3 total, 3 up, 3 in
Oct 02 11:48:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:48:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:32.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:48:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 02 11:48:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=6 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=78/59 les/c/f=79/60/0 sis=80) [0] r=0 lpr=80 pi=[59,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 81 pg[9.a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=6 ec=49/38 lis/c=78/49 les/c/f=79/50/0 sis=80) [0] r=0 lpr=80 pi=[49,80)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:32.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:32 compute-1 ceph-mon[80926]: pgmap v195: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:32 compute-1 ceph-mon[80926]: osdmap e81: 3 total, 3 up, 3 in
Oct 02 11:48:34 compute-1 ceph-mon[80926]: 3.1b scrub starts
Oct 02 11:48:34 compute-1 ceph-mon[80926]: 3.1b scrub ok
Oct 02 11:48:34 compute-1 ceph-mon[80926]: pgmap v197: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:34.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:34 compute-1 sudo[85901]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:35 compute-1 ceph-mon[80926]: 2.9 scrub starts
Oct 02 11:48:35 compute-1 ceph-mon[80926]: 2.9 scrub ok
Oct 02 11:48:35 compute-1 sshd-session[85539]: Connection closed by 192.168.122.30 port 39100
Oct 02 11:48:35 compute-1 sshd-session[85536]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:48:35 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Oct 02 11:48:35 compute-1 systemd[1]: session-34.scope: Consumed 8.018s CPU time.
Oct 02 11:48:35 compute-1 systemd-logind[795]: Session 34 logged out. Waiting for processes to exit.
Oct 02 11:48:35 compute-1 systemd-logind[795]: Removed session 34.
Oct 02 11:48:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:36.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:36 compute-1 ceph-mon[80926]: pgmap v198: 305 pgs: 2 remapped+peering, 303 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:48:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:36.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 02 11:48:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 02 11:48:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:38.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:38.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:39 compute-1 ceph-mon[80926]: pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 103 B/s, 5 objects/s recovering
Oct 02 11:48:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 02 11:48:39 compute-1 ceph-mon[80926]: osdmap e82: 3 total, 3 up, 3 in
Oct 02 11:48:39 compute-1 ceph-mon[80926]: 4.19 scrub starts
Oct 02 11:48:39 compute-1 ceph-mon[80926]: 4.19 scrub ok
Oct 02 11:48:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 02 11:48:40 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 83 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83) [0] r=0 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:40 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 83 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=83) [0] r=0 lpr=83 pi=[65,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:40 compute-1 ceph-mon[80926]: 3.8 deep-scrub starts
Oct 02 11:48:40 compute-1 ceph-mon[80926]: 3.8 deep-scrub ok
Oct 02 11:48:40 compute-1 ceph-mon[80926]: 2.1f scrub starts
Oct 02 11:48:40 compute-1 ceph-mon[80926]: 2.1f scrub ok
Oct 02 11:48:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 02 11:48:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:48:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:40.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:48:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:41 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 84 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=84) [0]/[2] r=-1 lpr=84 pi=[65,84)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:41 compute-1 ceph-mon[80926]: pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 96 B/s, 4 objects/s recovering
Oct 02 11:48:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 02 11:48:41 compute-1 ceph-mon[80926]: osdmap e83: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-1 ceph-mon[80926]: osdmap e84: 3 total, 3 up, 3 in
Oct 02 11:48:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Oct 02 11:48:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Oct 02 11:48:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 02 11:48:42 compute-1 ceph-mon[80926]: 4.13 deep-scrub starts
Oct 02 11:48:42 compute-1 ceph-mon[80926]: 4.13 deep-scrub ok
Oct 02 11:48:42 compute-1 ceph-mon[80926]: 4.1d scrub starts
Oct 02 11:48:42 compute-1 ceph-mon[80926]: 4.1d scrub ok
Oct 02 11:48:42 compute-1 ceph-mon[80926]: pgmap v204: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 73 B/s, 4 objects/s recovering
Oct 02 11:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 02 11:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 02 11:48:42 compute-1 ceph-mon[80926]: osdmap e85: 3 total, 3 up, 3 in
Oct 02 11:48:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:42.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:42.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Oct 02 11:48:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 85 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85) [0] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:43 compute-1 ceph-mon[80926]: 2.d scrub starts
Oct 02 11:48:43 compute-1 ceph-mon[80926]: 2.d scrub ok
Oct 02 11:48:43 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 86 pg[6.e( v 52'3 lc 52'1 (0'0,52'3] local-lis/les=85/86 n=1 ec=47/21 lis/c=67/67 les/c/f=68/68/0 sis=85) [0] r=0 lpr=85 pi=[67,85)/1 crt=52'3 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:44 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Oct 02 11:48:44 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Oct 02 11:48:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 02 11:48:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 3314933000852226048, adjusting msgr requires
Oct 02 11:48:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e87 crush map has features 288514051259236352, adjusting msgr requires
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 87 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 87 crush map has features 288514051259236352 was 432629239337198081, adjusting msgr requires for mons
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 87 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=10.973770142s) [1] r=-1 lpr=87 pi=[56,87)/1 crt=52'5 mlcod 52'5 active pruub 195.647384644s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[6.f( v 52'5 (0'0,52'5] local-lis/les=56/57 n=3 ec=47/21 lis/c=56/56 les/c/f=57/57/0 sis=87 pruub=10.973541260s) [1] r=-1 lpr=87 pi=[56,87)/1 crt=52'5 mlcod 0'0 unknown NOTIFY pruub 195.647384644s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:44 compute-1 ceph-mon[80926]: 5.10 deep-scrub starts
Oct 02 11:48:44 compute-1 ceph-mon[80926]: 5.10 deep-scrub ok
Oct 02 11:48:44 compute-1 ceph-mon[80926]: osdmap e86: 3 total, 3 up, 3 in
Oct 02 11:48:44 compute-1 ceph-mon[80926]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:44 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 87 pg[9.d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=6 ec=49/38 lis/c=84/65 les/c/f=85/66/0 sis=86) [0] r=0 lpr=86 pi=[65,86)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 02 11:48:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 02 11:48:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-1 ceph-mon[80926]: 5.1f deep-scrub starts
Oct 02 11:48:45 compute-1 ceph-mon[80926]: 5.1f deep-scrub ok
Oct 02 11:48:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 02 11:48:45 compute-1 ceph-mon[80926]: osdmap e87: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-1 ceph-mon[80926]: osdmap e88: 3 total, 3 up, 3 in
Oct 02 11:48:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:46 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 02 11:48:46 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 02 11:48:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 02 11:48:46 compute-1 ceph-mon[80926]: 3.16 deep-scrub starts
Oct 02 11:48:46 compute-1 ceph-mon[80926]: 3.16 deep-scrub ok
Oct 02 11:48:46 compute-1 ceph-mon[80926]: pgmap v210: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 0 B/s, 0 objects/s recovering
Oct 02 11:48:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 02 11:48:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:46.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:47 compute-1 ceph-mon[80926]: 7.1 scrub starts
Oct 02 11:48:47 compute-1 ceph-mon[80926]: 7.1 scrub ok
Oct 02 11:48:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 02 11:48:47 compute-1 ceph-mon[80926]: osdmap e89: 3 total, 3 up, 3 in
Oct 02 11:48:47 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 89 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=89) [0] r=0 lpr=89 pi=[49,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:48 compute-1 ceph-mon[80926]: pgmap v212: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 71 B/s, 3 objects/s recovering
Oct 02 11:48:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 02 11:48:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 02 11:48:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=-1 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.10( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0]/[1] r=-1 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:48 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 90 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=90) [0] r=0 lpr=90 pi=[49,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:48.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[49,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:49 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 91 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[49,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 02 11:48:49 compute-1 ceph-mon[80926]: osdmap e90: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-1 ceph-mon[80926]: osdmap e91: 3 total, 3 up, 3 in
Oct 02 11:48:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.7 deep-scrub starts
Oct 02 11:48:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.7 deep-scrub ok
Oct 02 11:48:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 02 11:48:50 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:50 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:50 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 92 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:50 compute-1 ceph-mon[80926]: pgmap v215: 305 pgs: 1 active+recovering, 304 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 2/215 objects misplaced (0.930%); 72 B/s, 3 objects/s recovering
Oct 02 11:48:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 02 11:48:50 compute-1 ceph-mon[80926]: 6.4 scrub starts
Oct 02 11:48:50 compute-1 ceph-mon[80926]: 6.4 scrub ok
Oct 02 11:48:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 02 11:48:50 compute-1 ceph-mon[80926]: osdmap e92: 3 total, 3 up, 3 in
Oct 02 11:48:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:48:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:50.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:48:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:51 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:51 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[49,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:51 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.12( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=49/49 les/c/f=50/50/0 sis=93) [0]/[1] r=-1 lpr=93 pi=[49,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:48:51 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 93 pg[9.10( v 44'1012 (0'0,44'1012] local-lis/les=92/93 n=6 ec=49/38 lis/c=90/49 les/c/f=91/50/0 sis=92) [0] r=0 lpr=92 pi=[49,92)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:51 compute-1 ceph-mon[80926]: 7.7 deep-scrub starts
Oct 02 11:48:51 compute-1 ceph-mon[80926]: 7.7 deep-scrub ok
Oct 02 11:48:51 compute-1 ceph-mon[80926]: 7.1f scrub starts
Oct 02 11:48:51 compute-1 ceph-mon[80926]: 7.1f scrub ok
Oct 02 11:48:51 compute-1 ceph-mon[80926]: osdmap e93: 3 total, 3 up, 3 in
Oct 02 11:48:51 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 02 11:48:51 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 02 11:48:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 02 11:48:52 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 94 pg[9.11( v 44'1012 (0'0,44'1012] local-lis/les=93/94 n=6 ec=49/38 lis/c=91/49 les/c/f=92/50/0 sis=93) [0] r=0 lpr=93 pi=[49,93)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:52 compute-1 sshd-session[85960]: Accepted publickey for zuul from 192.168.122.30 port 49448 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:48:52 compute-1 systemd-logind[795]: New session 35 of user zuul.
Oct 02 11:48:52 compute-1 ceph-mon[80926]: pgmap v218: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:52 compute-1 ceph-mon[80926]: osdmap e94: 3 total, 3 up, 3 in
Oct 02 11:48:52 compute-1 systemd[1]: Started Session 35 of User zuul.
Oct 02 11:48:52 compute-1 sshd-session[85960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:48:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:52.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:53 compute-1 python3.9[86113]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 02 11:48:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 02 11:48:53 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:48:53 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 95 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:48:53 compute-1 ceph-mon[80926]: 7.c scrub starts
Oct 02 11:48:53 compute-1 ceph-mon[80926]: 7.c scrub ok
Oct 02 11:48:53 compute-1 ceph-mon[80926]: 11.17 scrub starts
Oct 02 11:48:53 compute-1 ceph-mon[80926]: 11.17 scrub ok
Oct 02 11:48:53 compute-1 ceph-mon[80926]: osdmap e95: 3 total, 3 up, 3 in
Oct 02 11:48:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 02 11:48:54 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 96 pg[9.12( v 44'1012 (0'0,44'1012] local-lis/les=95/96 n=5 ec=49/38 lis/c=93/49 les/c/f=94/50/0 sis=95) [0] r=0 lpr=95 pi=[49,95)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:48:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:54.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:54 compute-1 python3.9[86287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:48:54 compute-1 ceph-mon[80926]: pgmap v221: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Oct 02 11:48:54 compute-1 ceph-mon[80926]: osdmap e96: 3 total, 3 up, 3 in
Oct 02 11:48:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:48:55 compute-1 sudo[86441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqkgazrowzkcbzmpkkhquuvvmtlildja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405735.208011-99-22167008831547/AnsiballZ_command.py'
Oct 02 11:48:55 compute-1 sudo[86441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:55 compute-1 python3.9[86443]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:48:55 compute-1 sudo[86441]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:55 compute-1 ceph-mon[80926]: 6.c scrub starts
Oct 02 11:48:55 compute-1 ceph-mon[80926]: 6.c scrub ok
Oct 02 11:48:55 compute-1 ceph-mon[80926]: 8.15 scrub starts
Oct 02 11:48:55 compute-1 ceph-mon[80926]: 8.15 scrub ok
Oct 02 11:48:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999988s ======
Oct 02 11:48:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999988s
Oct 02 11:48:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:56.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:56 compute-1 sudo[86594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzynbmfvlpzisydkgqfpliimiatajqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405736.2318132-135-54016832859242/AnsiballZ_stat.py'
Oct 02 11:48:56 compute-1 sudo[86594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:56 compute-1 python3.9[86596]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:48:56 compute-1 sudo[86594]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 02 11:48:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 02 11:48:57 compute-1 ceph-mon[80926]: pgmap v223: 305 pgs: 1 peering, 1 unknown, 1 remapped+peering, 302 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:48:57 compute-1 sudo[86748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivciqxmsrekafuoyipfrlaqdntuleqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405737.1611063-168-243604266114430/AnsiballZ_file.py'
Oct 02 11:48:57 compute-1 sudo[86748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:48:57 compute-1 python3.9[86750]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:48:57 compute-1 sudo[86748]: pam_unix(sudo:session): session closed for user root
Oct 02 11:48:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 02 11:48:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:48:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:48:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:48:58 compute-1 ceph-mon[80926]: 7.d scrub starts
Oct 02 11:48:58 compute-1 ceph-mon[80926]: 7.d scrub ok
Oct 02 11:48:58 compute-1 ceph-mon[80926]: 11.16 scrub starts
Oct 02 11:48:58 compute-1 ceph-mon[80926]: 11.16 scrub ok
Oct 02 11:48:58 compute-1 ceph-mon[80926]: pgmap v224: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 170 B/s wr, 11 op/s; 36 B/s, 1 objects/s recovering
Oct 02 11:48:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 02 11:48:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:48:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:48:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:48:58.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:48:58 compute-1 python3.9[86900]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:48:58 compute-1 network[86917]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:48:58 compute-1 network[86918]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:48:58 compute-1 network[86919]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:48:58 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 02 11:48:58 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 02 11:48:59 compute-1 ceph-mon[80926]: 8.1 scrub starts
Oct 02 11:48:59 compute-1 ceph-mon[80926]: 8.1 scrub ok
Oct 02 11:48:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 02 11:48:59 compute-1 ceph-mon[80926]: osdmap e97: 3 total, 3 up, 3 in
Oct 02 11:49:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 7.12 scrub starts
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 7.12 scrub ok
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 8.7 scrub starts
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 8.7 scrub ok
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 10.12 scrub starts
Oct 02 11:49:00 compute-1 ceph-mon[80926]: 10.12 scrub ok
Oct 02 11:49:00 compute-1 ceph-mon[80926]: pgmap v226: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 158 B/s wr, 10 op/s; 34 B/s, 1 objects/s recovering
Oct 02 11:49:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 02 11:49:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:01 compute-1 ceph-mon[80926]: 8.e deep-scrub starts
Oct 02 11:49:01 compute-1 ceph-mon[80926]: 8.e deep-scrub ok
Oct 02 11:49:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 02 11:49:01 compute-1 ceph-mon[80926]: osdmap e98: 3 total, 3 up, 3 in
Oct 02 11:49:01 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 02 11:49:01 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 02 11:49:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:02 compute-1 python3.9[87182]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:02 compute-1 ceph-mon[80926]: pgmap v228: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 5.1 KiB/s rd, 0 B/s wr, 9 op/s; 29 B/s, 1 objects/s recovering
Oct 02 11:49:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 02 11:49:02 compute-1 ceph-mon[80926]: 8.13 scrub starts
Oct 02 11:49:02 compute-1 ceph-mon[80926]: 8.13 scrub ok
Oct 02 11:49:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct 02 11:49:02 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=99) [0] r=0 lpr=99 pi=[65,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:03 compute-1 python3.9[87332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:49:03 compute-1 ceph-mon[80926]: 7.15 scrub starts
Oct 02 11:49:03 compute-1 ceph-mon[80926]: 7.15 scrub ok
Oct 02 11:49:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 02 11:49:03 compute-1 ceph-mon[80926]: osdmap e99: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct 02 11:49:03 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[65,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:03 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/38 lis/c=65/65 les/c/f=66/66/0 sis=100) [0]/[2] r=-1 lpr=100 pi=[65,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:04 compute-1 python3.9[87486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:49:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:04 compute-1 ceph-mon[80926]: osdmap e100: 3 total, 3 up, 3 in
Oct 02 11:49:04 compute-1 ceph-mon[80926]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 02 11:49:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct 02 11:49:04 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 101 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=11.498471260s) [2] r=-1 lpr=101 pi=[70,101)/1 crt=44'1012 mlcod 0'0 active pruub 216.587463379s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:04 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 101 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=101 pruub=11.498307228s) [2] r=-1 lpr=101 pi=[70,101)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 216.587463379s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:05 compute-1 sudo[87642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwcohibkendmchuntpzpeccvmhuexozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405745.1036158-312-218002229921898/AnsiballZ_setup.py'
Oct 02 11:49:05 compute-1 sudo[87642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:05 compute-1 python3.9[87644]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:49:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 02 11:49:05 compute-1 ceph-mon[80926]: osdmap e101: 3 total, 3 up, 3 in
Oct 02 11:49:05 compute-1 ceph-mon[80926]: 8.1a scrub starts
Oct 02 11:49:05 compute-1 ceph-mon[80926]: 8.1a scrub ok
Oct 02 11:49:05 compute-1 ceph-mon[80926]: 7.a scrub starts
Oct 02 11:49:05 compute-1 ceph-mon[80926]: 7.a scrub ok
Oct 02 11:49:05 compute-1 sudo[87642]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct 02 11:49:05 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:05 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:05 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 luod=0'0 crt=44'1012 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:05 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 102 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=0/0 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:06 compute-1 sudo[87726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yihhahngiwihjgpkkrsfnjsjzpyhados ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405745.1036158-312-218002229921898/AnsiballZ_dnf.py'
Oct 02 11:49:06 compute-1 sudo[87726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:06 compute-1 python3.9[87728]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:49:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:07 compute-1 ceph-mon[80926]: 8.1d scrub starts
Oct 02 11:49:07 compute-1 ceph-mon[80926]: 8.1d scrub ok
Oct 02 11:49:07 compute-1 ceph-mon[80926]: pgmap v233: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:07 compute-1 ceph-mon[80926]: osdmap e102: 3 total, 3 up, 3 in
Oct 02 11:49:07 compute-1 ceph-mon[80926]: 7.5 deep-scrub starts
Oct 02 11:49:07 compute-1 ceph-mon[80926]: 7.5 deep-scrub ok
Oct 02 11:49:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct 02 11:49:07 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 103 pg[9.15( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=100/65 les/c/f=101/66/0 sis=102) [0] r=0 lpr=102 pi=[65,102)/1 crt=44'1012 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:07 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 103 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[70,102)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:07 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 02 11:49:07 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 02 11:49:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct 02 11:49:08 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.901468277s) [2] async=[2] r=-1 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 44'1012 active pruub 223.467895508s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:08 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 104 pg[9.16( v 44'1012 (0'0,44'1012] local-lis/les=102/103 n=5 ec=49/38 lis/c=102/70 les/c/f=103/71/0 sis=104 pruub=14.901384354s) [2] r=-1 lpr=104 pi=[70,104)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 223.467895508s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:08.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:08 compute-1 ceph-mon[80926]: osdmap e103: 3 total, 3 up, 3 in
Oct 02 11:49:08 compute-1 ceph-mon[80926]: pgmap v236: 305 pgs: 1 remapped+peering, 1 peering, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Oct 02 11:49:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:08.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-1 ceph-mon[80926]: 7.17 scrub starts
Oct 02 11:49:09 compute-1 ceph-mon[80926]: 7.17 scrub ok
Oct 02 11:49:09 compute-1 ceph-mon[80926]: osdmap e104: 3 total, 3 up, 3 in
Oct 02 11:49:09 compute-1 ceph-mon[80926]: osdmap e105: 3 total, 3 up, 3 in
Oct 02 11:49:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:10.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:10 compute-1 ceph-mon[80926]: pgmap v239: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:10.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 02 11:49:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 02 11:49:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct 02 11:49:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:49:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:49:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 02 11:49:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 02 11:49:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 02 11:49:13 compute-1 ceph-mon[80926]: pgmap v240: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 37 B/s, 1 objects/s recovering
Oct 02 11:49:13 compute-1 ceph-mon[80926]: 7.19 scrub starts
Oct 02 11:49:13 compute-1 ceph-mon[80926]: 7.19 scrub ok
Oct 02 11:49:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 02 11:49:13 compute-1 ceph-mon[80926]: osdmap e106: 3 total, 3 up, 3 in
Oct 02 11:49:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct 02 11:49:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:14.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:14 compute-1 sudo[87797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:14 compute-1 sudo[87797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-1 sudo[87797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-1 ceph-mon[80926]: 7.1a scrub starts
Oct 02 11:49:14 compute-1 ceph-mon[80926]: 7.1a scrub ok
Oct 02 11:49:14 compute-1 ceph-mon[80926]: 7.16 scrub starts
Oct 02 11:49:14 compute-1 ceph-mon[80926]: 7.16 scrub ok
Oct 02 11:49:14 compute-1 ceph-mon[80926]: pgmap v242: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Oct 02 11:49:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 02 11:49:14 compute-1 sudo[87822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:14 compute-1 sudo[87822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-1 sudo[87822]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-1 sudo[87847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:14 compute-1 sudo[87847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-1 sudo[87847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:14 compute-1 sudo[87872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:49:14 compute-1 sudo[87872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:14 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 02 11:49:14 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 02 11:49:15 compute-1 podman[87969]: 2025-10-02 11:49:15.011464064 +0000 UTC m=+0.067951652 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 11:49:15 compute-1 podman[87969]: 2025-10-02 11:49:15.105573952 +0000 UTC m=+0.162061520 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 02 11:49:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 02 11:49:15 compute-1 ceph-mon[80926]: osdmap e107: 3 total, 3 up, 3 in
Oct 02 11:49:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:15 compute-1 sudo[87872]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:15 compute-1 sudo[88094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:15 compute-1 sudo[88094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:15 compute-1 sudo[88094]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:15 compute-1 sudo[88119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:49:15 compute-1 sudo[88119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:15 compute-1 sudo[88119]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:15 compute-1 sudo[88144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:15 compute-1 sudo[88144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:15 compute-1 sudo[88144]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:15 compute-1 sudo[88169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:49:15 compute-1 sudo[88169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:16 compute-1 sudo[88169]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:16 compute-1 ceph-mon[80926]: 7.1c scrub starts
Oct 02 11:49:16 compute-1 ceph-mon[80926]: 7.1c scrub ok
Oct 02 11:49:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-1 ceph-mon[80926]: pgmap v244: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Oct 02 11:49:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 02 11:49:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:16 compute-1 ceph-mon[80926]: 8.a scrub starts
Oct 02 11:49:16 compute-1 ceph-mon[80926]: 8.a scrub ok
Oct 02 11:49:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:16.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct 02 11:49:16 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 02 11:49:16 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 02 11:49:17 compute-1 ceph-mon[80926]: osdmap e108: 3 total, 3 up, 3 in
Oct 02 11:49:17 compute-1 ceph-mon[80926]: 10.6 scrub starts
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:49:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:49:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct 02 11:49:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:18.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:18 compute-1 ceph-mon[80926]: 10.6 scrub ok
Oct 02 11:49:18 compute-1 ceph-mon[80926]: osdmap e109: 3 total, 3 up, 3 in
Oct 02 11:49:18 compute-1 ceph-mon[80926]: pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 02 11:49:18 compute-1 ceph-mon[80926]: 11.13 scrub starts
Oct 02 11:49:18 compute-1 ceph-mon[80926]: 11.13 scrub ok
Oct 02 11:49:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 110 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=9.366613388s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=44'1012 mlcod 0'0 active pruub 228.798370361s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 110 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=110 pruub=9.366524696s) [1] r=-1 lpr=110 pi=[80,110)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 228.798370361s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 111 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:19 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 111 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 02 11:49:19 compute-1 ceph-mon[80926]: osdmap e110: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-1 ceph-mon[80926]: osdmap e111: 3 total, 3 up, 3 in
Oct 02 11:49:19 compute-1 ceph-mon[80926]: 11.8 scrub starts
Oct 02 11:49:19 compute-1 ceph-mon[80926]: 11.8 scrub ok
Oct 02 11:49:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct 02 11:49:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:20 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 112 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=111) [1]/[0] async=[1] r=0 lpr=111 pi=[80,111)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:21 compute-1 ceph-mon[80926]: 8.1e scrub starts
Oct 02 11:49:21 compute-1 ceph-mon[80926]: 8.1e scrub ok
Oct 02 11:49:21 compute-1 ceph-mon[80926]: pgmap v250: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Oct 02 11:49:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 02 11:49:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 02 11:49:21 compute-1 ceph-mon[80926]: osdmap e112: 3 total, 3 up, 3 in
Oct 02 11:49:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct 02 11:49:21 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=14.892210007s) [1] async=[1] r=-1 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 44'1012 active pruub 236.792495728s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:21 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 113 pg[9.1a( v 44'1012 (0'0,44'1012] local-lis/les=111/112 n=5 ec=49/38 lis/c=111/80 les/c/f=112/81/0 sis=113 pruub=14.892071724s) [1] r=-1 lpr=113 pi=[80,113)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 236.792495728s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:21 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct 02 11:49:21 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct 02 11:49:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:22.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999988s ======
Oct 02 11:49:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:22.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999988s
Oct 02 11:49:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct 02 11:49:23 compute-1 ceph-mon[80926]: osdmap e113: 3 total, 3 up, 3 in
Oct 02 11:49:23 compute-1 ceph-mon[80926]: 10.7 scrub starts
Oct 02 11:49:23 compute-1 ceph-mon[80926]: 10.7 scrub ok
Oct 02 11:49:23 compute-1 ceph-mon[80926]: 9.1 scrub starts
Oct 02 11:49:23 compute-1 ceph-mon[80926]: 9.1 scrub ok
Oct 02 11:49:23 compute-1 ceph-mon[80926]: pgmap v253: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Oct 02 11:49:24 compute-1 ceph-mon[80926]: osdmap e114: 3 total, 3 up, 3 in
Oct 02 11:49:24 compute-1 ceph-mon[80926]: pgmap v255: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 24 B/s, 0 objects/s recovering
Oct 02 11:49:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct 02 11:49:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:24 compute-1 sudo[88272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:49:24 compute-1 sudo[88272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:24 compute-1 sudo[88272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:24 compute-1 sudo[88297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:49:24 compute-1 sudo[88297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:49:24 compute-1 sudo[88297]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:25 compute-1 ceph-mon[80926]: osdmap e115: 3 total, 3 up, 3 in
Oct 02 11:49:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:49:25 compute-1 ceph-mon[80926]: 11.3 scrub starts
Oct 02 11:49:25 compute-1 ceph-mon[80926]: 11.3 scrub ok
Oct 02 11:49:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.337771) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765337854, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7188, "num_deletes": 255, "total_data_size": 13612807, "memory_usage": 13840080, "flush_reason": "Manual Compaction"}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765399451, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 7980851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 237, "largest_seqno": 7193, "table_properties": {"data_size": 7953161, "index_size": 18103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 79917, "raw_average_key_size": 23, "raw_value_size": 7887090, "raw_average_value_size": 2320, "num_data_blocks": 802, "num_entries": 3399, "num_filter_entries": 3399, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 1759405570, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 61744 microseconds, and 15498 cpu microseconds.
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.399521) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 7980851 bytes OK
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.399540) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404740) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404798) EVENT_LOG_v1 {"time_micros": 1759405765404786, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.404824) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13575364, prev total WAL file size 13575364, number of live WAL files 2.
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.408581) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(7793KB) 8(1648B)]
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765408701, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 7982499, "oldest_snapshot_seqno": -1}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3148 keys, 7977361 bytes, temperature: kUnknown
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765461592, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 7977361, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7950335, "index_size": 18084, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75739, "raw_average_key_size": 24, "raw_value_size": 7887370, "raw_average_value_size": 2505, "num_data_blocks": 802, "num_entries": 3148, "num_filter_entries": 3148, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.461912) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 7977361 bytes
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.464185) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.6 rd, 150.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(7.6, 0.0 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3404, records dropped: 256 output_compression: NoCompression
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.464222) EVENT_LOG_v1 {"time_micros": 1759405765464208, "job": 4, "event": "compaction_finished", "compaction_time_micros": 52993, "compaction_time_cpu_micros": 16173, "output_level": 6, "num_output_files": 1, "total_output_size": 7977361, "num_input_records": 3404, "num_output_records": 3148, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765466068, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405765466148, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 02 11:49:25 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:49:25.408441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:49:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:26 compute-1 ceph-mon[80926]: 9.2 deep-scrub starts
Oct 02 11:49:26 compute-1 ceph-mon[80926]: 9.2 deep-scrub ok
Oct 02 11:49:26 compute-1 ceph-mon[80926]: osdmap e116: 3 total, 3 up, 3 in
Oct 02 11:49:26 compute-1 ceph-mon[80926]: pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Oct 02 11:49:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 02 11:49:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct 02 11:49:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:26 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 02 11:49:26 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 02 11:49:27 compute-1 ceph-mon[80926]: 9.4 scrub starts
Oct 02 11:49:27 compute-1 ceph-mon[80926]: 9.4 scrub ok
Oct 02 11:49:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 02 11:49:27 compute-1 ceph-mon[80926]: osdmap e117: 3 total, 3 up, 3 in
Oct 02 11:49:27 compute-1 ceph-mon[80926]: 10.9 scrub starts
Oct 02 11:49:27 compute-1 ceph-mon[80926]: 10.9 scrub ok
Oct 02 11:49:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct 02 11:49:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 118 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=118 pruub=11.965606689s) [2] r=-1 lpr=118 pi=[86,118)/1 crt=44'1012 mlcod 0'0 active pruub 240.732025146s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:28 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 118 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=118 pruub=11.965219498s) [2] r=-1 lpr=118 pi=[86,118)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 240.732025146s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:28 compute-1 ceph-mon[80926]: 8.b scrub starts
Oct 02 11:49:28 compute-1 ceph-mon[80926]: 8.b scrub ok
Oct 02 11:49:28 compute-1 ceph-mon[80926]: 9.c scrub starts
Oct 02 11:49:28 compute-1 ceph-mon[80926]: 9.c scrub ok
Oct 02 11:49:28 compute-1 ceph-mon[80926]: pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 21 B/s, 0 objects/s recovering
Oct 02 11:49:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 02 11:49:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct 02 11:49:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct 02 11:49:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 119 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:29 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 119 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=86/87 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:29 compute-1 ceph-mon[80926]: 6.1 scrub starts
Oct 02 11:49:29 compute-1 ceph-mon[80926]: 6.1 scrub ok
Oct 02 11:49:29 compute-1 ceph-mon[80926]: 9.14 deep-scrub starts
Oct 02 11:49:29 compute-1 ceph-mon[80926]: 9.14 deep-scrub ok
Oct 02 11:49:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 02 11:49:29 compute-1 ceph-mon[80926]: osdmap e118: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-1 ceph-mon[80926]: osdmap e119: 3 total, 3 up, 3 in
Oct 02 11:49:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Oct 02 11:49:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Oct 02 11:49:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct 02 11:49:30 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 120 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=86/86 les/c/f=87/87/0 sis=119) [2]/[0] async=[2] r=0 lpr=119 pi=[86,119)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:30 compute-1 ceph-mon[80926]: 10.a scrub starts
Oct 02 11:49:30 compute-1 ceph-mon[80926]: 10.a scrub ok
Oct 02 11:49:30 compute-1 ceph-mon[80926]: 10.b deep-scrub starts
Oct 02 11:49:30 compute-1 ceph-mon[80926]: 10.b deep-scrub ok
Oct 02 11:49:30 compute-1 ceph-mon[80926]: pgmap v263: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:30 compute-1 ceph-mon[80926]: osdmap e120: 3 total, 3 up, 3 in
Oct 02 11:49:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:30 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 02 11:49:30 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 02 11:49:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct 02 11:49:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121 pruub=14.563700676s) [2] async=[2] r=-1 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 44'1012 active pruub 246.896209717s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 121 pg[9.1d( v 44'1012 (0'0,44'1012] local-lis/les=119/120 n=5 ec=49/38 lis/c=119/86 les/c/f=120/87/0 sis=121 pruub=14.563613892s) [2] r=-1 lpr=121 pi=[86,121)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 246.896209717s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:32 compute-1 ceph-mon[80926]: 9.1c deep-scrub starts
Oct 02 11:49:32 compute-1 ceph-mon[80926]: 9.1c deep-scrub ok
Oct 02 11:49:32 compute-1 ceph-mon[80926]: 10.c scrub starts
Oct 02 11:49:32 compute-1 ceph-mon[80926]: 10.c scrub ok
Oct 02 11:49:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:32.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct 02 11:49:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 122 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=15.415595055s) [1] r=-1 lpr=122 pi=[70,122)/1 crt=44'1012 mlcod 0'0 active pruub 248.588150024s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:32 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 122 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=122 pruub=15.415527344s) [1] r=-1 lpr=122 pi=[70,122)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 248.588150024s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:33 compute-1 ceph-mon[80926]: osdmap e121: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-1 ceph-mon[80926]: pgmap v266: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Oct 02 11:49:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 02 11:49:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 02 11:49:33 compute-1 ceph-mon[80926]: osdmap e122: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct 02 11:49:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 123 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:33 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 123 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=70/71 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct 02 11:49:35 compute-1 ceph-mon[80926]: 8.2 scrub starts
Oct 02 11:49:35 compute-1 ceph-mon[80926]: 8.2 scrub ok
Oct 02 11:49:35 compute-1 ceph-mon[80926]: 11.2 scrub starts
Oct 02 11:49:35 compute-1 ceph-mon[80926]: 11.2 scrub ok
Oct 02 11:49:35 compute-1 ceph-mon[80926]: osdmap e123: 3 total, 3 up, 3 in
Oct 02 11:49:35 compute-1 ceph-mon[80926]: pgmap v269: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 164 B/s, 3 objects/s recovering
Oct 02 11:49:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 02 11:49:35 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=124 pruub=9.390641212s) [1] r=-1 lpr=124 pi=[80,124)/1 crt=44'1012 mlcod 0'0 active pruub 244.799011230s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:35 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=124 pruub=9.390583992s) [1] r=-1 lpr=124 pi=[80,124)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 244.799011230s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:35 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 124 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=70/70 les/c/f=71/71/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[70,123)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 02 11:49:36 compute-1 ceph-mon[80926]: osdmap e124: 3 total, 3 up, 3 in
Oct 02 11:49:36 compute-1 ceph-mon[80926]: pgmap v271: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct 02 11:49:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=80/81 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 02 11:49:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.259602547s) [1] async=[1] r=-1 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 44'1012 active pruub 251.688507080s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:36 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 125 pg[9.1e( v 44'1012 (0'0,44'1012] local-lis/les=123/124 n=5 ec=49/38 lis/c=123/70 les/c/f=124/71/0 sis=125 pruub=15.259287834s) [1] r=-1 lpr=125 pi=[70,125)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 251.688507080s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct 02 11:49:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct 02 11:49:37 compute-1 ceph-mon[80926]: 8.f scrub starts
Oct 02 11:49:37 compute-1 ceph-mon[80926]: 8.f scrub ok
Oct 02 11:49:37 compute-1 ceph-mon[80926]: osdmap e125: 3 total, 3 up, 3 in
Oct 02 11:49:37 compute-1 ceph-mon[80926]: 10.d scrub starts
Oct 02 11:49:37 compute-1 ceph-mon[80926]: 10.d scrub ok
Oct 02 11:49:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct 02 11:49:37 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 126 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=80/80 les/c/f=81/81/0 sis=125) [1]/[0] async=[1] r=0 lpr=125 pi=[80,125)/1 crt=44'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 02 11:49:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:38.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct 02 11:49:38 compute-1 ceph-mon[80926]: 11.6 scrub starts
Oct 02 11:49:38 compute-1 ceph-mon[80926]: 11.6 scrub ok
Oct 02 11:49:38 compute-1 ceph-mon[80926]: osdmap e126: 3 total, 3 up, 3 in
Oct 02 11:49:38 compute-1 ceph-mon[80926]: pgmap v274: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:38 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127 pruub=15.016470909s) [1] async=[1] r=-1 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 44'1012 active pruub 253.799392700s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct 02 11:49:38 compute-1 ceph-osd[78262]: osd.0 pg_epoch: 127 pg[9.1f( v 44'1012 (0'0,44'1012] local-lis/les=125/126 n=5 ec=49/38 lis/c=125/80 les/c/f=126/81/0 sis=127 pruub=15.016346931s) [1] r=-1 lpr=127 pi=[80,127)/1 crt=44'1012 mlcod 0'0 unknown NOTIFY pruub 253.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct 02 11:49:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct 02 11:49:39 compute-1 ceph-mon[80926]: osdmap e127: 3 total, 3 up, 3 in
Oct 02 11:49:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:40.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999989s ======
Oct 02 11:49:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999989s
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 11.9 scrub starts
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 11.9 scrub ok
Oct 02 11:49:40 compute-1 ceph-mon[80926]: osdmap e128: 3 total, 3 up, 3 in
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 8.5 scrub starts
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 8.5 scrub ok
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 11.b scrub starts
Oct 02 11:49:40 compute-1 ceph-mon[80926]: 11.b scrub ok
Oct 02 11:49:40 compute-1 ceph-mon[80926]: pgmap v277: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 02 11:49:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 02 11:49:41 compute-1 ceph-mon[80926]: 10.e scrub starts
Oct 02 11:49:41 compute-1 ceph-mon[80926]: 10.e scrub ok
Oct 02 11:49:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:42.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:43 compute-1 ceph-mon[80926]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 11 op/s; 41 B/s, 3 objects/s recovering
Oct 02 11:49:44 compute-1 ceph-mon[80926]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 10 op/s; 36 B/s, 2 objects/s recovering
Oct 02 11:49:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:44.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:44.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:45 compute-1 ceph-mon[80926]: 10.f scrub starts
Oct 02 11:49:45 compute-1 ceph-mon[80926]: 10.f scrub ok
Oct 02 11:49:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:46 compute-1 ceph-mon[80926]: 10.3 deep-scrub starts
Oct 02 11:49:46 compute-1 ceph-mon[80926]: 10.3 deep-scrub ok
Oct 02 11:49:46 compute-1 ceph-mon[80926]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 8 op/s; 29 B/s, 2 objects/s recovering
Oct 02 11:49:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:47 compute-1 ceph-mon[80926]: 11.e deep-scrub starts
Oct 02 11:49:47 compute-1 ceph-mon[80926]: 11.e deep-scrub ok
Oct 02 11:49:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:48.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:48.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:49 compute-1 ceph-mon[80926]: 11.c scrub starts
Oct 02 11:49:49 compute-1 ceph-mon[80926]: 11.c scrub ok
Oct 02 11:49:49 compute-1 ceph-mon[80926]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.6 KiB/s rd, 0 B/s wr, 6 op/s; 25 B/s, 1 objects/s recovering
Oct 02 11:49:50 compute-1 ceph-mon[80926]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Oct 02 11:49:50 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Oct 02 11:49:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:50 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Oct 02 11:49:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:49:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:49:51 compute-1 sudo[87726]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:51 compute-1 ceph-mon[80926]: 10.16 deep-scrub starts
Oct 02 11:49:51 compute-1 ceph-mon[80926]: 10.16 deep-scrub ok
Oct 02 11:49:51 compute-1 sudo[88501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umvmmfwaaiioukgvnguvyfqzrdaodlcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405791.4231682-348-254375977224135/AnsiballZ_command.py'
Oct 02 11:49:51 compute-1 sudo[88501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:51 compute-1 python3.9[88503]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:49:52 compute-1 ceph-mon[80926]: 11.d scrub starts
Oct 02 11:49:52 compute-1 ceph-mon[80926]: 11.d scrub ok
Oct 02 11:49:52 compute-1 ceph-mon[80926]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Oct 02 11:49:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:52 compute-1 sudo[88501]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:53 compute-1 sudo[88788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdrbznqduvmpubqlxjkhhfjaaefmqqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405792.846938-372-253173712923928/AnsiballZ_selinux.py'
Oct 02 11:49:53 compute-1 sudo[88788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct 02 11:49:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct 02 11:49:53 compute-1 ceph-mon[80926]: 11.10 scrub starts
Oct 02 11:49:53 compute-1 ceph-mon[80926]: 11.10 scrub ok
Oct 02 11:49:53 compute-1 python3.9[88790]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 02 11:49:53 compute-1 sudo[88788]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-1 sudo[88940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchfthqnwlqiqoikdpgihhswrmcoiggf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.1208208-405-112420691606074/AnsiballZ_command.py'
Oct 02 11:49:54 compute-1 sudo[88940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:54 compute-1 python3.9[88942]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 02 11:49:54 compute-1 sudo[88940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 7.11 deep-scrub starts
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 7.11 deep-scrub ok
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 10.17 scrub starts
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 10.17 scrub ok
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 11.11 scrub starts
Oct 02 11:49:54 compute-1 ceph-mon[80926]: 11.11 scrub ok
Oct 02 11:49:54 compute-1 ceph-mon[80926]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:49:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:49:55 compute-1 sudo[89092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynikgaskxzxeojzemjieyhgjyjabwnwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405794.8360357-429-203505730997163/AnsiballZ_file.py'
Oct 02 11:49:55 compute-1 sudo[89092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:55 compute-1 python3.9[89094]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:55 compute-1 sudo[89092]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:49:55 compute-1 sudo[89244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtwbmuvthoerwowauqkfdknzouqkfqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405795.5426936-453-61506339211825/AnsiballZ_mount.py'
Oct 02 11:49:55 compute-1 sudo[89244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:56 compute-1 ceph-mon[80926]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:56 compute-1 python3.9[89246]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 02 11:49:56 compute-1 sudo[89244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 02 11:49:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 02 11:49:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:56.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:57 compute-1 sudo[89396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwfclpcchgruqldnnasgmxacfwypodv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.0951226-537-41106771612033/AnsiballZ_file.py'
Oct 02 11:49:57 compute-1 sudo[89396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:57 compute-1 python3.9[89398]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:49:57 compute-1 sudo[89396]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:57 compute-1 ceph-mon[80926]: 11.15 scrub starts
Oct 02 11:49:57 compute-1 ceph-mon[80926]: 11.15 scrub ok
Oct 02 11:49:57 compute-1 ceph-mon[80926]: 10.1a scrub starts
Oct 02 11:49:57 compute-1 ceph-mon[80926]: 10.1a scrub ok
Oct 02 11:49:58 compute-1 sudo[89548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-melihhprxzffzsdokctbdnnqfronjnms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.8572671-561-276015960662791/AnsiballZ_stat.py'
Oct 02 11:49:58 compute-1 sudo[89548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-1 python3.9[89550]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:49:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:49:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:49:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:49:58 compute-1 sudo[89548]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:58 compute-1 ceph-mon[80926]: pgmap v286: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:49:58 compute-1 sudo[89626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlnyuvhukbyugmehbvagicgloibozcoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405797.8572671-561-276015960662791/AnsiballZ_file.py'
Oct 02 11:49:58 compute-1 sudo[89626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:49:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:49:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:49:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:49:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:49:58 compute-1 python3.9[89628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:49:58 compute-1 sudo[89626]: pam_unix(sudo:session): session closed for user root
Oct 02 11:49:59 compute-1 ceph-mon[80926]: 8.d scrub starts
Oct 02 11:49:59 compute-1 ceph-mon[80926]: 8.d scrub ok
Oct 02 11:49:59 compute-1 ceph-mon[80926]: 10.11 scrub starts
Oct 02 11:49:59 compute-1 ceph-mon[80926]: 10.11 scrub ok
Oct 02 11:50:00 compute-1 sudo[89778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzohmnfyswqeuspkcevdswizsminomq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405799.8836284-633-193045967694334/AnsiballZ_getent.py'
Oct 02 11:50:00 compute-1 sudo[89778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:00.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:00 compute-1 python3.9[89780]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 02 11:50:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:00 compute-1 sudo[89778]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:00.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:00 compute-1 ceph-mon[80926]: 11.19 scrub starts
Oct 02 11:50:00 compute-1 ceph-mon[80926]: 11.19 scrub ok
Oct 02 11:50:00 compute-1 ceph-mon[80926]: pgmap v287: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 11:50:01 compute-1 sudo[89931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elvscdlmuxbcftphqolcwigjabrdvcau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405800.8329208-663-101351945119425/AnsiballZ_getent.py'
Oct 02 11:50:01 compute-1 sudo[89931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:01 compute-1 python3.9[89933]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 02 11:50:01 compute-1 sudo[89931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:02 compute-1 ceph-mon[80926]: 11.18 scrub starts
Oct 02 11:50:02 compute-1 ceph-mon[80926]: 11.18 scrub ok
Oct 02 11:50:02 compute-1 sudo[90084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkcbhakzyffdgqxpcepapgkkdosqkkas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405801.6614208-687-276712110416860/AnsiballZ_group.py'
Oct 02 11:50:02 compute-1 sudo[90084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:02 compute-1 python3.9[90086]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:50:02 compute-1 sudo[90084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:02.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:02 compute-1 sudo[90236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekehfzcfhmxzornolgxvbcehmzdmohrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405802.578865-714-256833986189373/AnsiballZ_file.py'
Oct 02 11:50:02 compute-1 sudo[90236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:03 compute-1 python3.9[90238]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 02 11:50:03 compute-1 sudo[90236]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:03 compute-1 ceph-mon[80926]: 10.1 scrub starts
Oct 02 11:50:03 compute-1 ceph-mon[80926]: 10.1 scrub ok
Oct 02 11:50:03 compute-1 ceph-mon[80926]: pgmap v288: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:03 compute-1 sudo[90388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aabpqiwnegvvlqkayzgihligoyprfloe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405803.6748621-747-147119594858773/AnsiballZ_dnf.py'
Oct 02 11:50:03 compute-1 sudo[90388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:04 compute-1 python3.9[90390]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:04 compute-1 ceph-mon[80926]: 10.10 scrub starts
Oct 02 11:50:04 compute-1 ceph-mon[80926]: 10.10 scrub ok
Oct 02 11:50:04 compute-1 ceph-mon[80926]: pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:04.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:04.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:05 compute-1 ceph-mon[80926]: 7.1d scrub starts
Oct 02 11:50:05 compute-1 ceph-mon[80926]: 7.1d scrub ok
Oct 02 11:50:05 compute-1 sudo[90388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:06 compute-1 sudo[90541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgortzitykwophckeiytfgvguhezxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405805.8736315-771-101850147994846/AnsiballZ_file.py'
Oct 02 11:50:06 compute-1 sudo[90541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:06 compute-1 ceph-mon[80926]: 8.9 scrub starts
Oct 02 11:50:06 compute-1 ceph-mon[80926]: 8.9 scrub ok
Oct 02 11:50:06 compute-1 ceph-mon[80926]: pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:06 compute-1 python3.9[90543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:06 compute-1 sudo[90541]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:06 compute-1 sudo[90693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxjfwzvkxijflnwjfhmiiweieuxgholl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405806.5494063-795-102202669932534/AnsiballZ_stat.py'
Oct 02 11:50:06 compute-1 sudo[90693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:07 compute-1 python3.9[90695]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:07 compute-1 sudo[90693]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:07 compute-1 sudo[90771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whgaqqfdlrfwryjczjgjpatfxkwhvgss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405806.5494063-795-102202669932534/AnsiballZ_file.py'
Oct 02 11:50:07 compute-1 sudo[90771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:07 compute-1 ceph-mon[80926]: 11.1f scrub starts
Oct 02 11:50:07 compute-1 ceph-mon[80926]: 11.1f scrub ok
Oct 02 11:50:07 compute-1 python3.9[90773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:07 compute-1 sudo[90771]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-1 sudo[90923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgsqsrdkpbwzrjbkvhnfllwjdlnfmlub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405807.7693822-834-141998146211329/AnsiballZ_stat.py'
Oct 02 11:50:08 compute-1 sudo[90923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:08 compute-1 python3.9[90925]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:50:08 compute-1 sudo[90923]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 02 11:50:08 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 02 11:50:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:08 compute-1 ceph-mon[80926]: 10.4 scrub starts
Oct 02 11:50:08 compute-1 ceph-mon[80926]: 10.4 scrub ok
Oct 02 11:50:08 compute-1 ceph-mon[80926]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:08 compute-1 sudo[91001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdikbhdbfwzaoodqpiukoknobzshtbur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405807.7693822-834-141998146211329/AnsiballZ_file.py'
Oct 02 11:50:08 compute-1 sudo[91001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:08 compute-1 python3.9[91003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:50:08 compute-1 sudo[91001]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:08.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:09 compute-1 sudo[91153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxvxjimpveysuqnytizbtmwkgadkwurr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405809.1526458-879-138666208087797/AnsiballZ_dnf.py'
Oct 02 11:50:09 compute-1 sudo[91153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:09 compute-1 ceph-mon[80926]: 7.1b scrub starts
Oct 02 11:50:09 compute-1 ceph-mon[80926]: 7.1b scrub ok
Oct 02 11:50:09 compute-1 ceph-mon[80926]: 10.1c scrub starts
Oct 02 11:50:09 compute-1 ceph-mon[80926]: 10.1c scrub ok
Oct 02 11:50:09 compute-1 python3.9[91155]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:10.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:10 compute-1 ceph-mon[80926]: 7.14 scrub starts
Oct 02 11:50:10 compute-1 ceph-mon[80926]: 7.14 scrub ok
Oct 02 11:50:10 compute-1 ceph-mon[80926]: 10.14 deep-scrub starts
Oct 02 11:50:10 compute-1 ceph-mon[80926]: 10.14 deep-scrub ok
Oct 02 11:50:10 compute-1 ceph-mon[80926]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:10 compute-1 systemd[71803]: Created slice User Background Tasks Slice.
Oct 02 11:50:10 compute-1 systemd[71803]: Starting Cleanup of User's Temporary Files and Directories...
Oct 02 11:50:10 compute-1 systemd[71803]: Finished Cleanup of User's Temporary Files and Directories.
Oct 02 11:50:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:10 compute-1 sudo[91153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 02 11:50:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 02 11:50:11 compute-1 ceph-mon[80926]: 7.18 scrub starts
Oct 02 11:50:11 compute-1 ceph-mon[80926]: 7.18 scrub ok
Oct 02 11:50:12 compute-1 python3.9[91307]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct 02 11:50:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct 02 11:50:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:12.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:12 compute-1 ceph-mon[80926]: 10.1d scrub starts
Oct 02 11:50:12 compute-1 ceph-mon[80926]: 10.1d scrub ok
Oct 02 11:50:12 compute-1 ceph-mon[80926]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:12 compute-1 python3.9[91459]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 02 11:50:13 compute-1 python3.9[91609]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:13 compute-1 ceph-mon[80926]: 10.1f scrub starts
Oct 02 11:50:13 compute-1 ceph-mon[80926]: 10.1f scrub ok
Oct 02 11:50:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:14 compute-1 sudo[91759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pczerdowweveltoqfhypwztelontowou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405814.0645196-1002-128383586089584/AnsiballZ_systemd.py'
Oct 02 11:50:14 compute-1 sudo[91759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:15 compute-1 python3.9[91761]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:15 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 02 11:50:15 compute-1 ceph-mon[80926]: 11.a scrub starts
Oct 02 11:50:15 compute-1 ceph-mon[80926]: 11.a scrub ok
Oct 02 11:50:15 compute-1 ceph-mon[80926]: pgmap v294: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:15 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct 02 11:50:15 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 02 11:50:15 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 02 11:50:15 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 02 11:50:15 compute-1 sudo[91759]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:16 compute-1 python3.9[91922]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 02 11:50:16 compute-1 ceph-mon[80926]: pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:16 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 02 11:50:16 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 02 11:50:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:17 compute-1 ceph-mon[80926]: 7.f scrub starts
Oct 02 11:50:17 compute-1 ceph-mon[80926]: 7.f scrub ok
Oct 02 11:50:17 compute-1 ceph-mon[80926]: 8.19 scrub starts
Oct 02 11:50:17 compute-1 ceph-mon[80926]: 8.19 scrub ok
Oct 02 11:50:18 compute-1 ceph-mon[80926]: 10.13 scrub starts
Oct 02 11:50:18 compute-1 ceph-mon[80926]: 10.13 scrub ok
Oct 02 11:50:18 compute-1 ceph-mon[80926]: pgmap v296: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:18.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:18.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:18 compute-1 sudo[92072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxgsbfrqjsfrprrobjhbqwwhxcwjpizr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405818.6183572-1173-14544708281617/AnsiballZ_systemd.py'
Oct 02 11:50:18 compute-1 sudo[92072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:19 compute-1 python3.9[92074]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:19 compute-1 sudo[92072]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:19 compute-1 ceph-mon[80926]: 10.15 scrub starts
Oct 02 11:50:19 compute-1 ceph-mon[80926]: 10.15 scrub ok
Oct 02 11:50:19 compute-1 sudo[92226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slltumjykziwhpirxipltsowyxnyecoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405819.4042845-1173-182390872097257/AnsiballZ_systemd.py'
Oct 02 11:50:19 compute-1 sudo[92226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:19 compute-1 python3.9[92228]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:50:20 compute-1 sudo[92226]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:20 compute-1 sshd-session[85963]: Connection closed by 192.168.122.30 port 49448
Oct 02 11:50:20 compute-1 sshd-session[85960]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:20 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Oct 02 11:50:20 compute-1 systemd[1]: session-35.scope: Consumed 1min 3.652s CPU time.
Oct 02 11:50:20 compute-1 systemd-logind[795]: Session 35 logged out. Waiting for processes to exit.
Oct 02 11:50:20 compute-1 systemd-logind[795]: Removed session 35.
Oct 02 11:50:20 compute-1 ceph-mon[80926]: pgmap v297: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:22 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 02 11:50:22 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 02 11:50:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:22.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:22.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:23 compute-1 ceph-mon[80926]: 7.2 scrub starts
Oct 02 11:50:23 compute-1 ceph-mon[80926]: pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:23 compute-1 ceph-mon[80926]: 7.2 scrub ok
Oct 02 11:50:23 compute-1 ceph-mon[80926]: 11.1a scrub starts
Oct 02 11:50:23 compute-1 ceph-mon[80926]: 11.1a scrub ok
Oct 02 11:50:24 compute-1 ceph-mon[80926]: 8.16 scrub starts
Oct 02 11:50:24 compute-1 ceph-mon[80926]: 8.16 scrub ok
Oct 02 11:50:24 compute-1 ceph-mon[80926]: pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:24 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.10 deep-scrub starts
Oct 02 11:50:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:24 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.10 deep-scrub ok
Oct 02 11:50:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:24 compute-1 sudo[92256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:24 compute-1 sudo[92256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-1 sudo[92256]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:24 compute-1 sudo[92281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:50:24 compute-1 sudo[92281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:24 compute-1 sudo[92281]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-1 sudo[92306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:25 compute-1 sudo[92306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:25 compute-1 sudo[92306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-1 sudo[92331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:50:25 compute-1 sudo[92331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:25 compute-1 ceph-mon[80926]: 8.10 deep-scrub starts
Oct 02 11:50:25 compute-1 ceph-mon[80926]: 8.10 deep-scrub ok
Oct 02 11:50:25 compute-1 sudo[92331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:25 compute-1 sshd-session[92387]: Accepted publickey for zuul from 192.168.122.30 port 50844 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:50:25 compute-1 systemd-logind[795]: New session 36 of user zuul.
Oct 02 11:50:25 compute-1 systemd[1]: Started Session 36 of User zuul.
Oct 02 11:50:25 compute-1 sshd-session[92387]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:50:26 compute-1 ceph-mon[80926]: pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:26.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:26 compute-1 python3.9[92540]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:26.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:27 compute-1 sudo[92694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parkprriposqlscnfswycvfcsdqqcfbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405827.1673164-74-248377318188675/AnsiballZ_getent.py'
Oct 02 11:50:27 compute-1 sudo[92694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:27 compute-1 python3.9[92696]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 02 11:50:27 compute-1 sudo[92694]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:50:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:50:28 compute-1 ceph-mon[80926]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct 02 11:50:28 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct 02 11:50:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:28 compute-1 sudo[92847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khcavnssursfesvxsxralpjpfjywlocu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405828.252982-110-32035017093883/AnsiballZ_setup.py'
Oct 02 11:50:28 compute-1 sudo[92847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:28 compute-1 python3.9[92849]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:50:29 compute-1 sudo[92847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct 02 11:50:29 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct 02 11:50:29 compute-1 sudo[92931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deppcocbgnchsvhfzgsnhpswwxyvlpzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405828.252982-110-32035017093883/AnsiballZ_dnf.py'
Oct 02 11:50:29 compute-1 sudo[92931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:29 compute-1 ceph-mon[80926]: 7.3 scrub starts
Oct 02 11:50:29 compute-1 ceph-mon[80926]: 7.3 scrub ok
Oct 02 11:50:29 compute-1 ceph-mon[80926]: 8.12 deep-scrub starts
Oct 02 11:50:29 compute-1 ceph-mon[80926]: 8.12 deep-scrub ok
Oct 02 11:50:29 compute-1 python3.9[92933]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:50:30 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 02 11:50:30 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 02 11:50:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:30.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:30 compute-1 ceph-mon[80926]: 8.1f scrub starts
Oct 02 11:50:30 compute-1 ceph-mon[80926]: 8.1f scrub ok
Oct 02 11:50:30 compute-1 ceph-mon[80926]: 11.1b deep-scrub starts
Oct 02 11:50:30 compute-1 ceph-mon[80926]: 11.1b deep-scrub ok
Oct 02 11:50:30 compute-1 ceph-mon[80926]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:30.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:31 compute-1 sudo[92931]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:31 compute-1 ceph-mon[80926]: 11.1d scrub starts
Oct 02 11:50:31 compute-1 ceph-mon[80926]: 11.1d scrub ok
Oct 02 11:50:31 compute-1 sudo[93084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inyqxjbcvlwstsnugpdjceufzpealdrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405831.2818763-152-156757551309843/AnsiballZ_dnf.py'
Oct 02 11:50:31 compute-1 sudo[93084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:31 compute-1 python3.9[93086]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:32 compute-1 ceph-mon[80926]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:32 compute-1 ceph-mon[80926]: 10.2 scrub starts
Oct 02 11:50:32 compute-1 ceph-mon[80926]: 10.2 scrub ok
Oct 02 11:50:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:33 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 02 11:50:33 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 02 11:50:33 compute-1 sudo[93084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-1 sudo[93164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:50:33 compute-1 sudo[93164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:33 compute-1 sudo[93164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:33 compute-1 sudo[93189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:50:33 compute-1 sudo[93189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:50:33 compute-1 sudo[93189]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-1 sudo[93287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvcplgofrmiafpewonnbyodkolewgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405833.5884-176-213016421102873/AnsiballZ_systemd.py'
Oct 02 11:50:34 compute-1 sudo[93287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:34.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:34 compute-1 python3.9[93289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:50:34 compute-1 sudo[93287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:34 compute-1 ceph-mon[80926]: 8.18 scrub starts
Oct 02 11:50:34 compute-1 ceph-mon[80926]: 8.18 scrub ok
Oct 02 11:50:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:50:34 compute-1 ceph-mon[80926]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:34 compute-1 ceph-mon[80926]: 7.e scrub starts
Oct 02 11:50:34 compute-1 ceph-mon[80926]: 7.e scrub ok
Oct 02 11:50:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:34.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:35 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Oct 02 11:50:35 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Oct 02 11:50:35 compute-1 python3.9[93442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:35 compute-1 ceph-mon[80926]: 10.1e scrub starts
Oct 02 11:50:35 compute-1 ceph-mon[80926]: 10.1e scrub ok
Oct 02 11:50:35 compute-1 ceph-mon[80926]: 8.1b deep-scrub starts
Oct 02 11:50:35 compute-1 ceph-mon[80926]: 8.1b deep-scrub ok
Oct 02 11:50:36 compute-1 sudo[93592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efkptranmoedrjmlihgmbijznvssuazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405835.5783901-230-121218040054265/AnsiballZ_sefcontext.py'
Oct 02 11:50:36 compute-1 sudo[93592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 02 11:50:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 02 11:50:36 compute-1 python3.9[93594]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 02 11:50:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:36 compute-1 sudo[93592]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:36.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:37 compute-1 ceph-mon[80926]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:37 compute-1 ceph-mon[80926]: 11.1e scrub starts
Oct 02 11:50:37 compute-1 ceph-mon[80926]: 11.1e scrub ok
Oct 02 11:50:37 compute-1 python3.9[93744]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:38 compute-1 ceph-mon[80926]: 8.11 deep-scrub starts
Oct 02 11:50:38 compute-1 ceph-mon[80926]: 8.11 deep-scrub ok
Oct 02 11:50:38 compute-1 sudo[93900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flcjrmwpplxwfcxxpnlqvlshnmptvabw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405837.859878-284-219727701484302/AnsiballZ_dnf.py'
Oct 02 11:50:38 compute-1 sudo[93900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:38 compute-1 python3.9[93902]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:38.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:39 compute-1 ceph-mon[80926]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:39 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct 02 11:50:39 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct 02 11:50:39 compute-1 sudo[93900]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:40 compute-1 ceph-mon[80926]: 8.4 scrub starts
Oct 02 11:50:40 compute-1 ceph-mon[80926]: 8.4 scrub ok
Oct 02 11:50:40 compute-1 sudo[94053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdeqakjnvdhlccjsuyyuwwvvzejdclbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405839.866826-308-174572090980215/AnsiballZ_command.py'
Oct 02 11:50:40 compute-1 sudo[94053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:40 compute-1 python3.9[94055]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:50:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:40.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:41 compute-1 ceph-mon[80926]: pgmap v307: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:41 compute-1 sudo[94053]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 02 11:50:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 02 11:50:41 compute-1 sudo[94340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smfijtwbfcosqqyffnkwwzjbymfjzfvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405841.358977-332-132989823280404/AnsiballZ_file.py'
Oct 02 11:50:41 compute-1 sudo[94340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:41 compute-1 python3.9[94342]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 11:50:41 compute-1 sudo[94340]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:42 compute-1 ceph-mon[80926]: 11.7 scrub starts
Oct 02 11:50:42 compute-1 ceph-mon[80926]: 11.7 scrub ok
Oct 02 11:50:42 compute-1 ceph-mon[80926]: pgmap v308: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:42 compute-1 python3.9[94492]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:42.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:43 compute-1 ceph-mon[80926]: 8.3 scrub starts
Oct 02 11:50:43 compute-1 ceph-mon[80926]: 8.3 scrub ok
Oct 02 11:50:43 compute-1 sudo[94644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfuutefedukysmlyxkkrtdnffnmkquxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405842.978857-380-257123821753807/AnsiballZ_dnf.py'
Oct 02 11:50:43 compute-1 sudo[94644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:43 compute-1 python3.9[94646]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:44 compute-1 ceph-mon[80926]: pgmap v309: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:44 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 02 11:50:44 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 02 11:50:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:44.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:44 compute-1 sudo[94644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:44.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 8.6 deep-scrub starts
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 8.6 deep-scrub ok
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 7.8 scrub starts
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 7.8 scrub ok
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 11.5 scrub starts
Oct 02 11:50:45 compute-1 ceph-mon[80926]: 11.5 scrub ok
Oct 02 11:50:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct 02 11:50:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct 02 11:50:45 compute-1 sudo[94797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bofmszpopfdstdvzfsqmfcoldjehevpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405845.2468276-407-234238327235547/AnsiballZ_dnf.py'
Oct 02 11:50:45 compute-1 sudo[94797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:45 compute-1 python3.9[94799]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:50:46 compute-1 ceph-mon[80926]: 8.c scrub starts
Oct 02 11:50:46 compute-1 ceph-mon[80926]: 8.c scrub ok
Oct 02 11:50:46 compute-1 ceph-mon[80926]: 11.1c scrub starts
Oct 02 11:50:46 compute-1 ceph-mon[80926]: 11.1c scrub ok
Oct 02 11:50:46 compute-1 ceph-mon[80926]: pgmap v310: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:46.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:46.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:46 compute-1 sudo[94797]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:47 compute-1 ceph-mon[80926]: 10.8 scrub starts
Oct 02 11:50:47 compute-1 ceph-mon[80926]: 10.8 scrub ok
Oct 02 11:50:47 compute-1 sudo[94950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nocnnvlgmguonxvhhbemjgegklrhgghk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405847.4924982-443-112127214053788/AnsiballZ_stat.py'
Oct 02 11:50:47 compute-1 sudo[94950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:47 compute-1 python3.9[94952]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:50:47 compute-1 sudo[94950]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:48.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:48 compute-1 sudo[95104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seafcvfosbpcivauyyethnzgomnemuxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405848.1728551-467-114979504063519/AnsiballZ_slurp.py'
Oct 02 11:50:48 compute-1 ceph-mon[80926]: 8.1c scrub starts
Oct 02 11:50:48 compute-1 sudo[95104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:50:48 compute-1 ceph-mon[80926]: 8.1c scrub ok
Oct 02 11:50:48 compute-1 ceph-mon[80926]: pgmap v311: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:48 compute-1 ceph-mon[80926]: 7.1e scrub starts
Oct 02 11:50:48 compute-1 ceph-mon[80926]: 7.1e scrub ok
Oct 02 11:50:48 compute-1 python3.9[95106]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 02 11:50:48 compute-1 sudo[95104]: pam_unix(sudo:session): session closed for user root
Oct 02 11:50:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct 02 11:50:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct 02 11:50:49 compute-1 sshd-session[92390]: Connection closed by 192.168.122.30 port 50844
Oct 02 11:50:49 compute-1 sshd-session[92387]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:49 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Oct 02 11:50:49 compute-1 systemd[1]: session-36.scope: Consumed 17.674s CPU time.
Oct 02 11:50:49 compute-1 systemd-logind[795]: Session 36 logged out. Waiting for processes to exit.
Oct 02 11:50:49 compute-1 systemd-logind[795]: Removed session 36.
Oct 02 11:50:49 compute-1 ceph-mon[80926]: 8.8 scrub starts
Oct 02 11:50:49 compute-1 ceph-mon[80926]: 8.8 scrub ok
Oct 02 11:50:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:50.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:50.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:50 compute-1 ceph-mon[80926]: pgmap v312: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:52.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:52 compute-1 ceph-mon[80926]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:52 compute-1 ceph-mon[80926]: 10.5 scrub starts
Oct 02 11:50:52 compute-1 ceph-mon[80926]: 10.5 scrub ok
Oct 02 11:50:52 compute-1 ceph-mon[80926]: 9.13 scrub starts
Oct 02 11:50:52 compute-1 ceph-mon[80926]: 9.13 scrub ok
Oct 02 11:50:54 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct 02 11:50:54 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct 02 11:50:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:54.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:50:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:54.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:50:55 compute-1 ceph-mon[80926]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:55 compute-1 ceph-mon[80926]: 11.f scrub starts
Oct 02 11:50:55 compute-1 ceph-mon[80926]: 11.f scrub ok
Oct 02 11:50:55 compute-1 ceph-mon[80926]: 9.b scrub starts
Oct 02 11:50:55 compute-1 ceph-mon[80926]: 9.b scrub ok
Oct 02 11:50:55 compute-1 sshd-session[95132]: Accepted publickey for zuul from 192.168.122.30 port 52644 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:50:55 compute-1 systemd-logind[795]: New session 37 of user zuul.
Oct 02 11:50:55 compute-1 systemd[1]: Started Session 37 of User zuul.
Oct 02 11:50:55 compute-1 sshd-session[95132]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:50:55 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Oct 02 11:50:55 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Oct 02 11:50:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:50:56 compute-1 ceph-mon[80926]: 11.4 deep-scrub starts
Oct 02 11:50:56 compute-1 ceph-mon[80926]: 11.4 deep-scrub ok
Oct 02 11:50:56 compute-1 python3.9[95285]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:50:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:56.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:57 compute-1 ceph-mon[80926]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:57 compute-1 python3.9[95439]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:50:58 compute-1 ceph-mon[80926]: 9.17 scrub starts
Oct 02 11:50:58 compute-1 ceph-mon[80926]: 9.17 scrub ok
Oct 02 11:50:58 compute-1 ceph-mon[80926]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:50:58 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 02 11:50:58 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 02 11:50:58 compute-1 python3.9[95632]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:50:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:50:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:50:58.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:50:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:50:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:50:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:50:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:50:58 compute-1 sshd-session[95135]: Connection closed by 192.168.122.30 port 52644
Oct 02 11:50:58 compute-1 sshd-session[95132]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:50:58 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Oct 02 11:50:58 compute-1 systemd[1]: session-37.scope: Consumed 2.164s CPU time.
Oct 02 11:50:58 compute-1 systemd-logind[795]: Session 37 logged out. Waiting for processes to exit.
Oct 02 11:50:58 compute-1 systemd-logind[795]: Removed session 37.
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 7.6 scrub starts
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 7.6 scrub ok
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 11.1 scrub starts
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 11.1 scrub ok
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 9.3 scrub starts
Oct 02 11:50:59 compute-1 ceph-mon[80926]: 9.3 scrub ok
Oct 02 11:51:00 compute-1 ceph-mon[80926]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:00 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct 02 11:51:00 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct 02 11:51:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:00.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:00.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:01 compute-1 ceph-mon[80926]: 10.19 deep-scrub starts
Oct 02 11:51:01 compute-1 ceph-mon[80926]: 10.19 deep-scrub ok
Oct 02 11:51:01 compute-1 ceph-mon[80926]: 8.17 scrub starts
Oct 02 11:51:01 compute-1 ceph-mon[80926]: 8.17 scrub ok
Oct 02 11:51:02 compute-1 ceph-mon[80926]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:02 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 02 11:51:02 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 02 11:51:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:02.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:02.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:03 compute-1 ceph-mon[80926]: 11.12 scrub starts
Oct 02 11:51:03 compute-1 ceph-mon[80926]: 11.12 scrub ok
Oct 02 11:51:04 compute-1 ceph-mon[80926]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:04 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 02 11:51:04 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 02 11:51:04 compute-1 sshd-session[95659]: Accepted publickey for zuul from 192.168.122.30 port 55668 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:04 compute-1 systemd-logind[795]: New session 38 of user zuul.
Oct 02 11:51:04 compute-1 systemd[1]: Started Session 38 of User zuul.
Oct 02 11:51:04 compute-1 sshd-session[95659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:04.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:04.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 7.10 deep-scrub starts
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 7.10 deep-scrub ok
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 11.14 scrub starts
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 11.14 scrub ok
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 9.7 scrub starts
Oct 02 11:51:05 compute-1 ceph-mon[80926]: 9.7 scrub ok
Oct 02 11:51:05 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct 02 11:51:05 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct 02 11:51:05 compute-1 python3.9[95812]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:06 compute-1 ceph-mon[80926]: 8.14 scrub starts
Oct 02 11:51:06 compute-1 ceph-mon[80926]: 8.14 scrub ok
Oct 02 11:51:06 compute-1 ceph-mon[80926]: 7.4 deep-scrub starts
Oct 02 11:51:06 compute-1 ceph-mon[80926]: 7.4 deep-scrub ok
Oct 02 11:51:06 compute-1 ceph-mon[80926]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:06 compute-1 python3.9[95966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:06.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:06.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:07 compute-1 sudo[96120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yehwvkvtdmuwnjdfwihlxyxmldvpgotx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405866.8137681-86-33417972157158/AnsiballZ_setup.py'
Oct 02 11:51:07 compute-1 sudo[96120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:07 compute-1 ceph-mon[80926]: 7.b scrub starts
Oct 02 11:51:07 compute-1 ceph-mon[80926]: 7.b scrub ok
Oct 02 11:51:07 compute-1 python3.9[96122]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:07 compute-1 sudo[96120]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:08 compute-1 sudo[96204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpolnjdownhxlbewvlayerwqiwlmgbmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405866.8137681-86-33417972157158/AnsiballZ_dnf.py'
Oct 02 11:51:08 compute-1 sudo[96204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:08 compute-1 python3.9[96206]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:08 compute-1 ceph-mon[80926]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:09 compute-1 ceph-mon[80926]: 10.1b scrub starts
Oct 02 11:51:09 compute-1 ceph-mon[80926]: 10.1b scrub ok
Oct 02 11:51:09 compute-1 sudo[96204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:10 compute-1 sudo[96357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbohenlceaotipbjlcogzgdmcdcnyqiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405869.7881396-122-271298310959294/AnsiballZ_setup.py'
Oct 02 11:51:10 compute-1 sudo[96357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:10 compute-1 python3.9[96359]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:10 compute-1 ceph-mon[80926]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:10 compute-1 sudo[96357]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:10.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 02 11:51:11 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 02 11:51:11 compute-1 sudo[96552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycvfmkllsnyowphayxwwnxnqgarthxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405870.990591-155-68273364696290/AnsiballZ_file.py'
Oct 02 11:51:11 compute-1 sudo[96552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:11 compute-1 ceph-mon[80926]: 10.18 deep-scrub starts
Oct 02 11:51:11 compute-1 ceph-mon[80926]: 10.18 deep-scrub ok
Oct 02 11:51:11 compute-1 python3.9[96554]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:11 compute-1 sudo[96552]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:12 compute-1 sudo[96704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyoiucuxwbbcgumuiqnzbjdhvjpqebeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405871.8368156-179-153098961525463/AnsiballZ_command.py'
Oct 02 11:51:12 compute-1 sudo[96704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct 02 11:51:12 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct 02 11:51:12 compute-1 python3.9[96706]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:51:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:12.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:12 compute-1 sudo[96704]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:12 compute-1 ceph-mon[80926]: 6.2 scrub starts
Oct 02 11:51:12 compute-1 ceph-mon[80926]: 6.2 scrub ok
Oct 02 11:51:12 compute-1 ceph-mon[80926]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:12 compute-1 ceph-mon[80926]: 7.13 scrub starts
Oct 02 11:51:12 compute-1 ceph-mon[80926]: 7.13 scrub ok
Oct 02 11:51:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:12.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:13 compute-1 sudo[96868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxuqkffnraktmwdgxayqaxbouxuoueph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405872.8060367-203-147450312013506/AnsiballZ_stat.py'
Oct 02 11:51:13 compute-1 sudo[96868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:13 compute-1 python3.9[96870]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:13 compute-1 sudo[96868]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:13 compute-1 ceph-mon[80926]: 6.a deep-scrub starts
Oct 02 11:51:13 compute-1 ceph-mon[80926]: 6.a deep-scrub ok
Oct 02 11:51:13 compute-1 sudo[96946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkqosboktincrnhsjdvfamybzniqeau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405872.8060367-203-147450312013506/AnsiballZ_file.py'
Oct 02 11:51:13 compute-1 sudo[96946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:13 compute-1 python3.9[96948]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:13 compute-1 sudo[96946]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-1 sudo[97098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdpnlhpiwsgpprmrdiryolbhnxsrqryc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405874.1493309-239-237987927914802/AnsiballZ_stat.py'
Oct 02 11:51:14 compute-1 sudo[97098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:14.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:14 compute-1 ceph-mon[80926]: pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:14 compute-1 python3.9[97100]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:14 compute-1 sudo[97098]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:14.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:14 compute-1 sudo[97176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giqqtlmpqsartxjopcjjbybqaitgbcjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405874.1493309-239-237987927914802/AnsiballZ_file.py'
Oct 02 11:51:14 compute-1 sudo[97176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:15 compute-1 python3.9[97178]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:15 compute-1 sudo[97176]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:15 compute-1 ceph-mon[80926]: 7.9 scrub starts
Oct 02 11:51:15 compute-1 ceph-mon[80926]: 7.9 scrub ok
Oct 02 11:51:15 compute-1 ceph-mon[80926]: 9.5 scrub starts
Oct 02 11:51:15 compute-1 ceph-mon[80926]: 9.5 scrub ok
Oct 02 11:51:15 compute-1 sudo[97328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sewbzzctgdgpwyowmvhgohonffvckduw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405875.3936067-278-104382034050187/AnsiballZ_ini_file.py'
Oct 02 11:51:15 compute-1 sudo[97328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:15 compute-1 python3.9[97330]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:16 compute-1 sudo[97328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:16 compute-1 sudo[97480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylzctpdotybjjrjyugdcmswkiwymazkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405876.142151-278-9112681439336/AnsiballZ_ini_file.py'
Oct 02 11:51:16 compute-1 sudo[97480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:16 compute-1 python3.9[97482]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:16 compute-1 sudo[97480]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:16 compute-1 ceph-mon[80926]: pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:16 compute-1 ceph-mon[80926]: 9.18 scrub starts
Oct 02 11:51:16 compute-1 ceph-mon[80926]: 9.18 scrub ok
Oct 02 11:51:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:16.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:17 compute-1 sudo[97632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vynvgmyiidxcexdouityamnoooykavkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405876.7765243-278-111767451416096/AnsiballZ_ini_file.py'
Oct 02 11:51:17 compute-1 sudo[97632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:17 compute-1 python3.9[97634]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:17 compute-1 sudo[97632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:17 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 02 11:51:17 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 02 11:51:17 compute-1 sudo[97784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wknudtbeblikhvjlnvtjnjjzzqpisrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405877.3484724-278-34146929833316/AnsiballZ_ini_file.py'
Oct 02 11:51:17 compute-1 sudo[97784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:17 compute-1 ceph-mon[80926]: 6.6 scrub starts
Oct 02 11:51:17 compute-1 ceph-mon[80926]: 6.6 scrub ok
Oct 02 11:51:17 compute-1 python3.9[97786]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:17 compute-1 sudo[97784]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:18.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:18 compute-1 sudo[97936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmbmtxcvqxbxbtkbptrwmxvxvownmxeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405878.443579-371-234375836289893/AnsiballZ_dnf.py'
Oct 02 11:51:18 compute-1 sudo[97936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:18 compute-1 ceph-mon[80926]: 6.7 scrub starts
Oct 02 11:51:18 compute-1 ceph-mon[80926]: 6.7 scrub ok
Oct 02 11:51:18 compute-1 ceph-mon[80926]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:18.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:18 compute-1 python3.9[97938]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:20 compute-1 sudo[97936]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:20.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:20.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:21 compute-1 ceph-mon[80926]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:21 compute-1 ceph-mon[80926]: 9.8 scrub starts
Oct 02 11:51:21 compute-1 ceph-mon[80926]: 9.8 scrub ok
Oct 02 11:51:21 compute-1 sudo[98089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbmuxpmzbcnuphizbxklwqqrnqqodqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405880.7779753-404-146250706170477/AnsiballZ_setup.py'
Oct 02 11:51:21 compute-1 sudo[98089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:21 compute-1 python3.9[98091]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:21 compute-1 sudo[98089]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:22 compute-1 sudo[98243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnewitlhlhyavitdekihsaquccrizoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405881.7506623-428-116811572089506/AnsiballZ_stat.py'
Oct 02 11:51:22 compute-1 sudo[98243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:22 compute-1 ceph-mon[80926]: 6.9 scrub starts
Oct 02 11:51:22 compute-1 ceph-mon[80926]: 6.9 scrub ok
Oct 02 11:51:22 compute-1 ceph-mon[80926]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:22 compute-1 python3.9[98245]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:22 compute-1 sudo[98243]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:22.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:22 compute-1 sudo[98395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupujmgptoaqhvwyzxzsgvizovumwxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405882.450495-455-2253029120829/AnsiballZ_stat.py'
Oct 02 11:51:22 compute-1 sudo[98395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:22.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:22 compute-1 python3.9[98397]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:51:22 compute-1 sudo[98395]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:23 compute-1 ceph-mon[80926]: 9.9 scrub starts
Oct 02 11:51:23 compute-1 ceph-mon[80926]: 9.9 scrub ok
Oct 02 11:51:23 compute-1 sudo[98547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqlluohrzlmfnoalwzidfsuzflpahcqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405883.2496207-485-166173319956388/AnsiballZ_service_facts.py'
Oct 02 11:51:23 compute-1 sudo[98547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:23 compute-1 python3.9[98549]: ansible-service_facts Invoked
Oct 02 11:51:23 compute-1 network[98566]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:51:23 compute-1 network[98567]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:51:23 compute-1 network[98568]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:51:24 compute-1 ceph-mon[80926]: 6.b deep-scrub starts
Oct 02 11:51:24 compute-1 ceph-mon[80926]: 6.b deep-scrub ok
Oct 02 11:51:24 compute-1 ceph-mon[80926]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:24.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:25 compute-1 ceph-mon[80926]: 9.16 scrub starts
Oct 02 11:51:25 compute-1 ceph-mon[80926]: 9.16 scrub ok
Oct 02 11:51:25 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Oct 02 11:51:25 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Oct 02 11:51:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:26 compute-1 ceph-mon[80926]: 9.1d scrub starts
Oct 02 11:51:26 compute-1 ceph-mon[80926]: 6.3 deep-scrub starts
Oct 02 11:51:26 compute-1 ceph-mon[80926]: 9.1d scrub ok
Oct 02 11:51:26 compute-1 ceph-mon[80926]: 6.3 deep-scrub ok
Oct 02 11:51:26 compute-1 ceph-mon[80926]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:26 compute-1 sudo[98547]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:26.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:26.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:27 compute-1 ceph-mon[80926]: 6.f scrub starts
Oct 02 11:51:27 compute-1 ceph-mon[80926]: 6.f scrub ok
Oct 02 11:51:28 compute-1 sudo[98855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alugkuzdrywduorkrvlzhkaedckyimpm ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1759405887.7476144-524-145397265556589/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1759405887.7476144-524-145397265556589/args'
Oct 02 11:51:28 compute-1 sudo[98855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:28 compute-1 sudo[98855]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:28 compute-1 ceph-mon[80926]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:28.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:28 compute-1 sudo[99022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dijdnwoynsajdynimwkhcontdjncvxyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405888.4753962-557-48480013588215/AnsiballZ_dnf.py'
Oct 02 11:51:28 compute-1 sudo[99022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:28.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:28 compute-1 python3.9[99024]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:51:30 compute-1 sudo[99022]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:30.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:31 compute-1 ceph-mon[80926]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:31 compute-1 ceph-mon[80926]: 9.19 scrub starts
Oct 02 11:51:31 compute-1 ceph-mon[80926]: 9.19 scrub ok
Oct 02 11:51:31 compute-1 sudo[99175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxmnxineqwjvbhnmqgmdgcviqcevako ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405890.7739484-596-4935265400826/AnsiballZ_package_facts.py'
Oct 02 11:51:31 compute-1 sudo[99175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:31 compute-1 python3.9[99177]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 02 11:51:31 compute-1 sudo[99175]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:32 compute-1 ceph-mon[80926]: 9.1a deep-scrub starts
Oct 02 11:51:32 compute-1 ceph-mon[80926]: 9.1a deep-scrub ok
Oct 02 11:51:32 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 02 11:51:32 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 02 11:51:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:32.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:32.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:32 compute-1 sudo[99327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uswbiunypppnolljtigelgpyrgadtucc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405892.6938145-627-188286134670908/AnsiballZ_stat.py'
Oct 02 11:51:32 compute-1 sudo[99327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:33 compute-1 ceph-mon[80926]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:33 compute-1 ceph-mon[80926]: 6.d scrub starts
Oct 02 11:51:33 compute-1 ceph-mon[80926]: 6.d scrub ok
Oct 02 11:51:33 compute-1 python3.9[99329]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:33 compute-1 sudo[99327]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:33 compute-1 sudo[99405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxwyrxokqlnzlaununqmbxqxwevpayuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405892.6938145-627-188286134670908/AnsiballZ_file.py'
Oct 02 11:51:33 compute-1 sudo[99405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:33 compute-1 python3.9[99407]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:33 compute-1 sudo[99405]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 ceph-mon[80926]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:34 compute-1 sudo[99527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-1 sudo[99527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-1 sudo[99527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 sudo[99586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sklhpvcyuzvjiituhyevvnxwppvsdzmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405893.9252057-663-66483087064392/AnsiballZ_stat.py'
Oct 02 11:51:34 compute-1 sudo[99586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:34 compute-1 sudo[99580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:51:34 compute-1 sudo[99580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-1 sudo[99580]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 sudo[99610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:34 compute-1 sudo[99610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-1 sudo[99610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 sudo[99635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:51:34 compute-1 sudo[99635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:34 compute-1 python3.9[99602]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:34 compute-1 sudo[99586]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:34.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:34 compute-1 sudo[99753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpwilsdlvhjacsnimgpygbnjbhtfapaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405893.9252057-663-66483087064392/AnsiballZ_file.py'
Oct 02 11:51:34 compute-1 sudo[99753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:34 compute-1 sudo[99635]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 python3.9[99755]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:34 compute-1 sudo[99753]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:35 compute-1 ceph-mon[80926]: 9.1b scrub starts
Oct 02 11:51:35 compute-1 ceph-mon[80926]: 9.1b scrub ok
Oct 02 11:51:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:36 compute-1 sudo[99917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiaqgrxekxnjqslrpbadglcxnkfctzso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405895.7686074-717-156996701048983/AnsiballZ_lineinfile.py'
Oct 02 11:51:36 compute-1 sudo[99917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:36 compute-1 python3.9[99919]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:36 compute-1 sudo[99917]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 02 11:51:36 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 02 11:51:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-1 ceph-mon[80926]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:36 compute-1 ceph-mon[80926]: 6.5 scrub starts
Oct 02 11:51:36 compute-1 ceph-mon[80926]: 6.5 scrub ok
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:51:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:51:37 compute-1 sudo[100069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lavtejfcgzdzqxabrbqvrfmhwmgqyrdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405897.4208539-762-87494780695523/AnsiballZ_setup.py'
Oct 02 11:51:37 compute-1 sudo[100069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:37 compute-1 python3.9[100071]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:51:37 compute-1 ceph-mon[80926]: 9.1e scrub starts
Oct 02 11:51:37 compute-1 ceph-mon[80926]: 9.1e scrub ok
Oct 02 11:51:38 compute-1 sudo[100069]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:38 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 02 11:51:38 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 02 11:51:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:38 compute-1 sudo[100153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klbairvqyxqdifjnbmzukomneacbcwqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405897.4208539-762-87494780695523/AnsiballZ_systemd.py'
Oct 02 11:51:38 compute-1 sudo[100153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:38 compute-1 ceph-mon[80926]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:38 compute-1 ceph-mon[80926]: 9.1f scrub starts
Oct 02 11:51:38 compute-1 ceph-mon[80926]: 9.1f scrub ok
Oct 02 11:51:38 compute-1 ceph-mon[80926]: 9.6 scrub starts
Oct 02 11:51:38 compute-1 ceph-mon[80926]: 9.6 scrub ok
Oct 02 11:51:39 compute-1 python3.9[100155]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:51:39 compute-1 sudo[100153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:40 compute-1 sshd-session[95662]: Connection closed by 192.168.122.30 port 55668
Oct 02 11:51:40 compute-1 sshd-session[95659]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:51:40 compute-1 systemd[1]: session-38.scope: Deactivated successfully.
Oct 02 11:51:40 compute-1 systemd[1]: session-38.scope: Consumed 22.503s CPU time.
Oct 02 11:51:40 compute-1 systemd-logind[795]: Session 38 logged out. Waiting for processes to exit.
Oct 02 11:51:40 compute-1 systemd-logind[795]: Removed session 38.
Oct 02 11:51:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:40 compute-1 ceph-mon[80926]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 02 11:51:41 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 02 11:51:42 compute-1 ceph-mon[80926]: 9.e scrub starts
Oct 02 11:51:42 compute-1 ceph-mon[80926]: 9.e scrub ok
Oct 02 11:51:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:42.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:43 compute-1 ceph-mon[80926]: pgmap v338: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 02 11:51:43 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 02 11:51:43 compute-1 sudo[100183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:51:43 compute-1 sudo[100183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:43 compute-1 sudo[100183]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:43 compute-1 sudo[100208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:51:43 compute-1 sudo[100208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:51:43 compute-1 sudo[100208]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:44 compute-1 ceph-mon[80926]: 6.8 scrub starts
Oct 02 11:51:44 compute-1 ceph-mon[80926]: 6.8 scrub ok
Oct 02 11:51:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:51:44 compute-1 ceph-mon[80926]: pgmap v339: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:45 compute-1 sshd-session[100233]: Accepted publickey for zuul from 192.168.122.30 port 50464 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:45 compute-1 systemd-logind[795]: New session 39 of user zuul.
Oct 02 11:51:45 compute-1 systemd[1]: Started Session 39 of User zuul.
Oct 02 11:51:45 compute-1 sshd-session[100233]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 02 11:51:45 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 02 11:51:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:45 compute-1 ceph-mon[80926]: 9.a scrub starts
Oct 02 11:51:45 compute-1 ceph-mon[80926]: 9.a scrub ok
Oct 02 11:51:46 compute-1 sudo[100386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjnakexqzrkklqwzapbdqbgefqnltxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405905.5426786-31-52798842993130/AnsiballZ_file.py'
Oct 02 11:51:46 compute-1 sudo[100386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:46 compute-1 python3.9[100388]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:46 compute-1 sudo[100386]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:46 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 02 11:51:46 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 02 11:51:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:46.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:46 compute-1 sudo[100538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqdkhduvnaoallpyowzssgrpvcuknen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405906.4936414-67-27213169469410/AnsiballZ_stat.py'
Oct 02 11:51:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:46.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:46 compute-1 sudo[100538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:47 compute-1 ceph-mon[80926]: pgmap v340: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:47 compute-1 ceph-mon[80926]: 9.f scrub starts
Oct 02 11:51:47 compute-1 ceph-mon[80926]: 9.f scrub ok
Oct 02 11:51:47 compute-1 python3.9[100540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:47 compute-1 sudo[100538]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-1 sudo[100616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqmyuuynuedndivhmpeihxjkooqqxqre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405906.4936414-67-27213169469410/AnsiballZ_file.py'
Oct 02 11:51:47 compute-1 sudo[100616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:47 compute-1 python3.9[100618]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:47 compute-1 sudo[100616]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:47 compute-1 sshd-session[100236]: Connection closed by 192.168.122.30 port 50464
Oct 02 11:51:47 compute-1 sshd-session[100233]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:51:48 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Oct 02 11:51:48 compute-1 systemd[1]: session-39.scope: Consumed 1.447s CPU time.
Oct 02 11:51:48 compute-1 systemd-logind[795]: Session 39 logged out. Waiting for processes to exit.
Oct 02 11:51:48 compute-1 systemd-logind[795]: Removed session 39.
Oct 02 11:51:48 compute-1 ceph-mon[80926]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:48.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 02 11:51:49 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 02 11:51:49 compute-1 ceph-mon[80926]: 6.e scrub starts
Oct 02 11:51:49 compute-1 ceph-mon[80926]: 6.e scrub ok
Oct 02 11:51:50 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct 02 11:51:50 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct 02 11:51:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:50.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:50.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:50 compute-1 ceph-mon[80926]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:50 compute-1 ceph-mon[80926]: 9.d scrub starts
Oct 02 11:51:50 compute-1 ceph-mon[80926]: 9.d scrub ok
Oct 02 11:51:51 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 02 11:51:51 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 02 11:51:51 compute-1 ceph-mon[80926]: 9.10 scrub starts
Oct 02 11:51:51 compute-1 ceph-mon[80926]: 9.10 scrub ok
Oct 02 11:51:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:52.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:51:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:52.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:51:52 compute-1 ceph-mon[80926]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Oct 02 11:51:53 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Oct 02 11:51:53 compute-1 sshd-session[100643]: Accepted publickey for zuul from 192.168.122.30 port 36278 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:51:53 compute-1 systemd-logind[795]: New session 40 of user zuul.
Oct 02 11:51:53 compute-1 systemd[1]: Started Session 40 of User zuul.
Oct 02 11:51:53 compute-1 sshd-session[100643]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:51:54 compute-1 ceph-mon[80926]: 9.11 deep-scrub starts
Oct 02 11:51:54 compute-1 ceph-mon[80926]: 9.11 deep-scrub ok
Oct 02 11:51:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:54.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:54 compute-1 python3.9[100796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:51:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:54.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:55 compute-1 ceph-mon[80926]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:51:55 compute-1 sudo[100950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueqqvhztqffgauvejvanntddavtvigdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405915.2882626-65-117742190591551/AnsiballZ_file.py'
Oct 02 11:51:55 compute-1 sudo[100950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:55 compute-1 python3.9[100952]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:55 compute-1 sudo[100950]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 02 11:51:56 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 02 11:51:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:56.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:56 compute-1 sudo[101125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvfjwgefdlkhxfqkirxfrfxscdezbnfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405916.1533442-89-268441459849102/AnsiballZ_stat.py'
Oct 02 11:51:56 compute-1 sudo[101125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:56 compute-1 python3.9[101127]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:56 compute-1 sudo[101125]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:56.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:57 compute-1 ceph-mon[80926]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:57 compute-1 ceph-mon[80926]: 9.12 scrub starts
Oct 02 11:51:57 compute-1 ceph-mon[80926]: 9.12 scrub ok
Oct 02 11:51:57 compute-1 sudo[101203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exgleufzrotwfllfscvkpdpfvtvqmfwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405916.1533442-89-268441459849102/AnsiballZ_file.py'
Oct 02 11:51:57 compute-1 sudo[101203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:57 compute-1 python3.9[101205]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.j843oalw recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:57 compute-1 sudo[101203]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 02 11:51:57 compute-1 ceph-osd[78262]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 02 11:51:58 compute-1 ceph-mon[80926]: 9.15 scrub starts
Oct 02 11:51:58 compute-1 ceph-mon[80926]: 9.15 scrub ok
Oct 02 11:51:58 compute-1 sudo[101355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqafsfatuhruigkssnkarmvayvvmeswj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405918.0228035-149-169661369428706/AnsiballZ_stat.py'
Oct 02 11:51:58 compute-1 sudo[101355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:58 compute-1 python3.9[101357]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:51:58 compute-1 sudo[101355]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:51:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:51:58.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:51:58 compute-1 sudo[101433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqcelhbelqqthodogglkswmellaeuyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405918.0228035-149-169661369428706/AnsiballZ_file.py'
Oct 02 11:51:58 compute-1 sudo[101433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:51:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:51:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:51:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:51:58 compute-1 python3.9[101435]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.yqpyhefk recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:51:58 compute-1 sudo[101433]: pam_unix(sudo:session): session closed for user root
Oct 02 11:51:59 compute-1 ceph-mon[80926]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:51:59 compute-1 sudo[101585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezebdzqvbbxypkkixpfwcelhwquoiqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405919.2636735-188-56391664442892/AnsiballZ_file.py'
Oct 02 11:51:59 compute-1 sudo[101585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:51:59 compute-1 python3.9[101587]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:51:59 compute-1 sudo[101585]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:00 compute-1 ceph-mon[80926]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:00 compute-1 sudo[101737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjipkkazpzzytwwxnaiuyvmejgqtlzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405920.0091634-212-47352060447116/AnsiballZ_stat.py'
Oct 02 11:52:00 compute-1 sudo[101737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:00 compute-1 python3.9[101739]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:00 compute-1 sudo[101737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:00.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:00 compute-1 sudo[101815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsvdydxdzqwibasiagefeyckrpspzjtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405920.0091634-212-47352060447116/AnsiballZ_file.py'
Oct 02 11:52:00 compute-1 sudo[101815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:00 compute-1 python3.9[101817]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:00 compute-1 sudo[101815]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:01 compute-1 sudo[101967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qegyuhcvzumyezodpfwzfrghkaalopwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.0637598-212-254762121343198/AnsiballZ_stat.py'
Oct 02 11:52:01 compute-1 sudo[101967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-1 python3.9[101969]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:01 compute-1 sudo[101967]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:01 compute-1 sudo[102045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnhykjlhnfjtbduoxklhrdbrrdzaonta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405921.0637598-212-254762121343198/AnsiballZ_file.py'
Oct 02 11:52:01 compute-1 sudo[102045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:01 compute-1 python3.9[102047]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:52:01 compute-1 sudo[102045]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:02.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:02 compute-1 sudo[102197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyccxpbujtcippzthfzogzyoxsobddmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405922.3816228-281-76273369026482/AnsiballZ_file.py'
Oct 02 11:52:02 compute-1 sudo[102197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:02 compute-1 python3.9[102199]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:02 compute-1 sudo[102197]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:03 compute-1 ceph-mon[80926]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:03 compute-1 sudo[102349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nshhdluvsxtzezcqgmnpjukcfjehoxov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.0651138-305-144155510257195/AnsiballZ_stat.py'
Oct 02 11:52:03 compute-1 sudo[102349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-1 python3.9[102351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:03 compute-1 sudo[102349]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:03 compute-1 sudo[102427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfpzfzqqmbdqcgrzblogkqfsszvmngdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405923.0651138-305-144155510257195/AnsiballZ_file.py'
Oct 02 11:52:03 compute-1 sudo[102427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:03 compute-1 python3.9[102429]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:03 compute-1 sudo[102427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-1 ceph-mon[80926]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:04 compute-1 sudo[102579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tezwckylhzjtikxobjrixpxwbsvkvsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405924.18413-341-199625796650032/AnsiballZ_stat.py'
Oct 02 11:52:04 compute-1 sudo[102579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:04.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:04 compute-1 python3.9[102581]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:04 compute-1 sudo[102579]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:04 compute-1 sudo[102657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-belrblpvvegfpumegqdflxunkmvsvdxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405924.18413-341-199625796650032/AnsiballZ_file.py'
Oct 02 11:52:04 compute-1 sudo[102657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:05 compute-1 python3.9[102659]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:05 compute-1 sudo[102657]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:05 compute-1 sudo[102809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzrjihhtxpdovsfylchzgsyzlhydjbad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405925.3333387-377-216029830331610/AnsiballZ_systemd.py'
Oct 02 11:52:05 compute-1 sudo[102809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:06 compute-1 python3.9[102811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:52:06 compute-1 systemd[1]: Reloading.
Oct 02 11:52:06 compute-1 systemd-sysv-generator[102843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:52:06 compute-1 systemd-rc-local-generator[102840]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:52:06 compute-1 sudo[102809]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:52:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:06.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:52:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:07 compute-1 ceph-mon[80926]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:07 compute-1 sudo[102999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhubxwdxuffrcujhymlphgqyysdrweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405926.85935-401-254939076601248/AnsiballZ_stat.py'
Oct 02 11:52:07 compute-1 sudo[102999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:07 compute-1 python3.9[103001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:07 compute-1 sudo[102999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:07 compute-1 sudo[103077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzjofsrfonbwicthleidtvqugqjewfeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405926.85935-401-254939076601248/AnsiballZ_file.py'
Oct 02 11:52:07 compute-1 sudo[103077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:07 compute-1 python3.9[103079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:07 compute-1 sudo[103077]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:08 compute-1 ceph-mon[80926]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:08 compute-1 sudo[103229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeminfacutgqomhbhufcfkfyugjzxchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405928.0824409-437-240927867068545/AnsiballZ_stat.py'
Oct 02 11:52:08 compute-1 sudo[103229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:08 compute-1 python3.9[103231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:08 compute-1 sudo[103229]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:08.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:08 compute-1 sudo[103307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohefsvrlcqzluxhyxbiffkcyfwyghnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405928.0824409-437-240927867068545/AnsiballZ_file.py'
Oct 02 11:52:08 compute-1 sudo[103307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:08 compute-1 python3.9[103309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:08.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:08 compute-1 sudo[103307]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:09 compute-1 sudo[103459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywqxqpqvyjziwivvxvkkekdfznfosvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405929.3163807-473-20508203046121/AnsiballZ_systemd.py'
Oct 02 11:52:09 compute-1 sudo[103459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:09 compute-1 python3.9[103461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:52:09 compute-1 systemd[1]: Reloading.
Oct 02 11:52:09 compute-1 systemd-sysv-generator[103491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:52:09 compute-1 systemd-rc-local-generator[103488]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:52:10 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 11:52:10 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:52:10 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:52:10 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 11:52:10 compute-1 sudo[103459]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:10.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:11 compute-1 ceph-mon[80926]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:11 compute-1 python3.9[103652]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:52:11 compute-1 network[103669]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:52:11 compute-1 network[103670]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:52:11 compute-1 network[103671]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:52:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:12.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:13 compute-1 ceph-mon[80926]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:14 compute-1 ceph-mon[80926]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:14.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:16.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:17 compute-1 ceph-mon[80926]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-1 sudo[103934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifntkcimnsvnbgpeqnauzkatfwcxyzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405937.7786775-551-166660495617278/AnsiballZ_stat.py'
Oct 02 11:52:18 compute-1 sudo[103934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:18 compute-1 ceph-mon[80926]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:18 compute-1 python3.9[103936]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:18 compute-1 sudo[103934]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:18 compute-1 sudo[104012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkciyaejoatpfnyucnjvsptpxreitjmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405937.7786775-551-166660495617278/AnsiballZ_file.py'
Oct 02 11:52:18 compute-1 sudo[104012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:18 compute-1 python3.9[104014]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:18 compute-1 sudo[104012]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:19 compute-1 sudo[104164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fawajbvwfzdfwjryhqogqvfxyygliuow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.0111406-590-245129355724977/AnsiballZ_file.py'
Oct 02 11:52:19 compute-1 sudo[104164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:19 compute-1 python3.9[104166]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:19 compute-1 sudo[104164]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:20 compute-1 sudo[104316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcowhvqavujgnvgwljezopxpqswbbsqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.812626-614-208235621429864/AnsiballZ_stat.py'
Oct 02 11:52:20 compute-1 sudo[104316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:20 compute-1 ceph-mon[80926]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:20 compute-1 python3.9[104318]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:20 compute-1 sudo[104316]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:20 compute-1 sudo[104394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnuinkhhllgrdwsdscizzzusgelbxcoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405939.812626-614-208235621429864/AnsiballZ_file.py'
Oct 02 11:52:20 compute-1 sudo[104394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:20.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:20 compute-1 python3.9[104396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:20 compute-1 sudo[104394]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:20.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:21 compute-1 sudo[104546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjobxschqfxlyefimjzajvmioxltjwsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405941.2087593-659-229811796884951/AnsiballZ_timezone.py'
Oct 02 11:52:21 compute-1 sudo[104546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:21 compute-1 python3.9[104548]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 02 11:52:21 compute-1 systemd[1]: Starting Time & Date Service...
Oct 02 11:52:22 compute-1 systemd[1]: Started Time & Date Service.
Oct 02 11:52:22 compute-1 sudo[104546]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:22 compute-1 ceph-mon[80926]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:22 compute-1 sudo[104702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzgjvhdikacqbxcmegwtspvgxqhbuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405942.363466-686-66146359508514/AnsiballZ_file.py'
Oct 02 11:52:22 compute-1 sudo[104702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:22 compute-1 python3.9[104704]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:22 compute-1 sudo[104702]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:23 compute-1 sudo[104854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsvxtqzgpjkmpcpqxkoqerbxhzmsqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.1713324-710-90217601364512/AnsiballZ_stat.py'
Oct 02 11:52:23 compute-1 sudo[104854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:23 compute-1 python3.9[104856]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:23 compute-1 sudo[104854]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:23 compute-1 sudo[104932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luhlhiiiufsgjlixnvcuzrimrvejfbgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405943.1713324-710-90217601364512/AnsiballZ_file.py'
Oct 02 11:52:23 compute-1 sudo[104932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:24 compute-1 python3.9[104934]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.094693) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944094760, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2588, "num_deletes": 251, "total_data_size": 5243595, "memory_usage": 5317840, "flush_reason": "Manual Compaction"}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 02 11:52:24 compute-1 sudo[104932]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944140308, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3433301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7198, "largest_seqno": 9781, "table_properties": {"data_size": 3423416, "index_size": 5803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25468, "raw_average_key_size": 21, "raw_value_size": 3401260, "raw_average_value_size": 2853, "num_data_blocks": 259, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405766, "oldest_key_time": 1759405766, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 45714 microseconds, and 7685 cpu microseconds.
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.140407) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3433301 bytes OK
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.140446) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143621) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143642) EVENT_LOG_v1 {"time_micros": 1759405944143636, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.143667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5231612, prev total WAL file size 5232248, number of live WAL files 2.
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.145200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3352KB)], [15(7790KB)]
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944145269, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 11410662, "oldest_snapshot_seqno": -1}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3819 keys, 9730545 bytes, temperature: kUnknown
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944235455, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 9730545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9699212, "index_size": 20663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92031, "raw_average_key_size": 24, "raw_value_size": 9624526, "raw_average_value_size": 2520, "num_data_blocks": 903, "num_entries": 3819, "num_filter_entries": 3819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759405944, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.235727) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9730545 bytes
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.241689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.5 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 4340, records dropped: 521 output_compression: NoCompression
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.241728) EVENT_LOG_v1 {"time_micros": 1759405944241712, "job": 6, "event": "compaction_finished", "compaction_time_micros": 90231, "compaction_time_cpu_micros": 22485, "output_level": 6, "num_output_files": 1, "total_output_size": 9730545, "num_input_records": 4340, "num_output_records": 3819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944242491, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759405944243925, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.145114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:52:24.244129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:52:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:24 compute-1 sudo[105084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqsspwmgzazquhayodvxyjtuadclxbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405944.376411-746-195049162048340/AnsiballZ_stat.py'
Oct 02 11:52:24 compute-1 sudo[105084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:24 compute-1 python3.9[105086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:24 compute-1 sudo[105084]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:25 compute-1 sudo[105162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsnqsxnzxycdrqrqsenjcfdpuvyoeluy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405944.376411-746-195049162048340/AnsiballZ_file.py'
Oct 02 11:52:25 compute-1 sudo[105162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:25 compute-1 python3.9[105164]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ignqgc_8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:25 compute-1 sudo[105162]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:25 compute-1 sudo[105314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aivpytfsebtdinohysbqdyrtzcosyltx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405945.5629013-782-181541386213267/AnsiballZ_stat.py'
Oct 02 11:52:25 compute-1 sudo[105314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:26 compute-1 python3.9[105316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:26 compute-1 sudo[105314]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:26 compute-1 sudo[105392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyuocohoocfbnnlcgigcfciurljhlvva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405945.5629013-782-181541386213267/AnsiballZ_file.py'
Oct 02 11:52:26 compute-1 sudo[105392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:26 compute-1 python3.9[105394]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:26 compute-1 sudo[105392]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:52:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:52:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:27 compute-1 ceph-mon[80926]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:27 compute-1 sudo[105544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndfwzwvxveimpichipypimnmrrkwuwvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405946.8126266-821-167006572446949/AnsiballZ_command.py'
Oct 02 11:52:27 compute-1 sudo[105544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:27 compute-1 python3.9[105546]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:27 compute-1 sudo[105544]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:28 compute-1 ceph-mon[80926]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:28 compute-1 sudo[105697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbfjdnukabtjwdgcveqfqxeyshyyxeeo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759405947.7644908-845-277460441898437/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:52:28 compute-1 sudo[105697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:28 compute-1 python3[105699]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:52:28 compute-1 sudo[105697]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:28.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:28.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:29 compute-1 sudo[105849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjpqkuxxnlkpvqkpmmgqrzxoruqxgbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405948.798276-869-42034236082820/AnsiballZ_stat.py'
Oct 02 11:52:29 compute-1 sudo[105849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:29 compute-1 python3.9[105851]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:29 compute-1 sudo[105849]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:29 compute-1 sudo[105927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksykusovdkoxuqmujwrdvuofglbmtorx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405948.798276-869-42034236082820/AnsiballZ_file.py'
Oct 02 11:52:29 compute-1 sudo[105927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:29 compute-1 python3.9[105929]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:29 compute-1 sudo[105927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:30 compute-1 ceph-mon[80926]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:30.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:30 compute-1 sudo[106079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmvhftleoapeuuvfjpjirxzeuuxwmte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405950.3438513-905-26112816890264/AnsiballZ_stat.py'
Oct 02 11:52:30 compute-1 sudo[106079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:30 compute-1 python3.9[106081]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:30 compute-1 sudo[106079]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:30.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:31 compute-1 sudo[106157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bazaykwautguzekishrxcqnaipdbhlda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405950.3438513-905-26112816890264/AnsiballZ_file.py'
Oct 02 11:52:31 compute-1 sudo[106157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:31 compute-1 python3.9[106159]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:31 compute-1 sudo[106157]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:31 compute-1 sudo[106309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxiibistvdalcmhmtocyfbefblpvvuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405951.5719469-941-220338370916471/AnsiballZ_stat.py'
Oct 02 11:52:31 compute-1 sudo[106309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:32 compute-1 python3.9[106311]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:32 compute-1 sudo[106309]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:32 compute-1 sudo[106387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwnkfxpvledhrhpxyyzjhtfvhwprwzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405951.5719469-941-220338370916471/AnsiballZ_file.py'
Oct 02 11:52:32 compute-1 sudo[106387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:32 compute-1 python3.9[106389]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:32 compute-1 sudo[106387]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:32.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:33 compute-1 ceph-mon[80926]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:33 compute-1 sudo[106539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuonkqbaeenenzckpjzxoqpijpebtwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405952.802756-977-195448184726669/AnsiballZ_stat.py'
Oct 02 11:52:33 compute-1 sudo[106539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:33 compute-1 python3.9[106541]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:33 compute-1 sudo[106539]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:33 compute-1 sudo[106617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effiegqzcrpxopugaxiqehfsxsasomhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405952.802756-977-195448184726669/AnsiballZ_file.py'
Oct 02 11:52:33 compute-1 sudo[106617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:33 compute-1 python3.9[106619]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:33 compute-1 sudo[106617]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:34 compute-1 ceph-mon[80926]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:34 compute-1 sudo[106769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjpnnxmssxkjkdyosgbyyfxyxvmpjwkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405954.0796895-1013-164701663814307/AnsiballZ_stat.py'
Oct 02 11:52:34 compute-1 sudo[106769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:34 compute-1 python3.9[106771]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:34 compute-1 sudo[106769]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:34 compute-1 sudo[106847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxiimoerzztrdsylterfcxfpxcdtzwcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405954.0796895-1013-164701663814307/AnsiballZ_file.py'
Oct 02 11:52:34 compute-1 sudo[106847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:35 compute-1 python3.9[106849]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:35 compute-1 sudo[106847]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:35 compute-1 sudo[106999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvrkpbocxzthflompyohyetodvasdxcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405955.4530246-1052-277742845251417/AnsiballZ_command.py'
Oct 02 11:52:35 compute-1 sudo[106999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:35 compute-1 python3.9[107001]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:35 compute-1 sudo[106999]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:36.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:36 compute-1 sudo[107154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mncfluktctctzgzngbkwdrhhhrqwwipf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405956.2242346-1076-179444784919734/AnsiballZ_blockinfile.py'
Oct 02 11:52:36 compute-1 sudo[107154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:36 compute-1 python3.9[107156]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:36 compute-1 sudo[107154]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:37 compute-1 ceph-mon[80926]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:37 compute-1 sudo[107306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eggriytrrxjdufirlxpuyjlmmyqcvfmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405957.1996107-1103-196501425411273/AnsiballZ_file.py'
Oct 02 11:52:37 compute-1 sudo[107306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:37 compute-1 python3.9[107308]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:37 compute-1 sudo[107306]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:38 compute-1 sudo[107458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnvphrflqohzsvnsbiaqinmuodrvaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405957.824957-1103-271151606014727/AnsiballZ_file.py'
Oct 02 11:52:38 compute-1 sudo[107458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:38 compute-1 ceph-mon[80926]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:38 compute-1 python3.9[107460]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:38 compute-1 sudo[107458]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:38.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:39 compute-1 sudo[107610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgzopojfjfwwsiwjrvmczqpdibtqmmpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405958.6986744-1148-123921124721833/AnsiballZ_mount.py'
Oct 02 11:52:39 compute-1 sudo[107610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:39 compute-1 python3.9[107612]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:52:39 compute-1 sudo[107610]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:39 compute-1 sudo[107762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmskprshzrvepeuudmtvxrmghrnzgwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405959.449535-1148-220661092933620/AnsiballZ_mount.py'
Oct 02 11:52:39 compute-1 sudo[107762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:39 compute-1 python3.9[107764]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 02 11:52:39 compute-1 sudo[107762]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:40 compute-1 sshd-session[100646]: Connection closed by 192.168.122.30 port 36278
Oct 02 11:52:40 compute-1 sshd-session[100643]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:52:40 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Oct 02 11:52:40 compute-1 systemd[1]: session-40.scope: Consumed 27.867s CPU time.
Oct 02 11:52:40 compute-1 systemd-logind[795]: Session 40 logged out. Waiting for processes to exit.
Oct 02 11:52:40 compute-1 systemd-logind[795]: Removed session 40.
Oct 02 11:52:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:41 compute-1 ceph-mon[80926]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:42 compute-1 ceph-mon[80926]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:44 compute-1 sudo[107790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:44 compute-1 sudo[107790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-1 sudo[107790]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-1 sudo[107815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:52:44 compute-1 sudo[107815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-1 ceph-mon[80926]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:44 compute-1 sudo[107815]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-1 sudo[107840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:44 compute-1 sudo[107840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-1 sudo[107840]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-1 sudo[107865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:52:44 compute-1 sudo[107865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:44.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:44 compute-1 sudo[107865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:45 compute-1 sshd-session[107920]: Accepted publickey for zuul from 192.168.122.30 port 46250 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:52:45 compute-1 systemd-logind[795]: New session 41 of user zuul.
Oct 02 11:52:45 compute-1 systemd[1]: Started Session 41 of User zuul.
Oct 02 11:52:45 compute-1 sshd-session[107920]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:52:46 compute-1 sudo[108073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzgvnhvioszyvjukxpqdrlljdruotmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405966.0573485-24-18408354851807/AnsiballZ_tempfile.py'
Oct 02 11:52:46 compute-1 sudo[108073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:46.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:46 compute-1 python3.9[108075]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 02 11:52:46 compute-1 sudo[108073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-1 ceph-mon[80926]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:52:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:52:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:47 compute-1 sudo[108225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofcqatgviphlczsvtgmyfygdvfpmopyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405967.0457833-60-236176683210711/AnsiballZ_stat.py'
Oct 02 11:52:47 compute-1 sudo[108225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:47 compute-1 python3.9[108227]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:52:47 compute-1 sudo[108225]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:48 compute-1 sudo[108379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sttqamammtjqozubrwabxllazqhxhckp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405967.932243-84-69258756163241/AnsiballZ_slurp.py'
Oct 02 11:52:48 compute-1 sudo[108379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:48 compute-1 python3.9[108381]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 02 11:52:48 compute-1 sudo[108379]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:48.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:48.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:49 compute-1 sudo[108531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxoluahawgyvvztuywzqzhaqvkrjwefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405968.7988791-108-28436385808654/AnsiballZ_stat.py'
Oct 02 11:52:49 compute-1 sudo[108531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:49 compute-1 ceph-mon[80926]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:49 compute-1 python3.9[108533]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.7z2ld17y follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:52:49 compute-1 sudo[108531]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:49 compute-1 sudo[108656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omgywprxkbaezzpfngzoypijwljehtwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405968.7988791-108-28436385808654/AnsiballZ_copy.py'
Oct 02 11:52:49 compute-1 sudo[108656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:49 compute-1 python3.9[108658]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.7z2ld17y mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759405968.7988791-108-28436385808654/.source.7z2ld17y _original_basename=.04ehrds_ follow=False checksum=d30a0a751a32c4c6fb89a77f8bd3d66e091396ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:49 compute-1 sudo[108656]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:50 compute-1 ceph-mon[80926]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:50.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:50 compute-1 sudo[108808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjtlefmsquexrujdgpjpqozcrsiyutp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405970.2660105-153-145456685188750/AnsiballZ_setup.py'
Oct 02 11:52:50 compute-1 sudo[108808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:52:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:52:51 compute-1 python3.9[108810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:52:51 compute-1 sudo[108808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:51 compute-1 sudo[108960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqsphurmnhjernrmnujvrjmvortutayw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405971.5433002-178-132463517885089/AnsiballZ_blockinfile.py'
Oct 02 11:52:51 compute-1 sudo[108960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:52 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 11:52:52 compute-1 python3.9[108962]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfikJfuUE7Xs2lF9Qh9l0WUdl+Tct7ff0gJQZVpPwLHlAwFnY1lIlqF2IQ3J7LtFcsjYF5RcofKcj+ARkMTobXFoygI/H3Yl5EGDehZbaNONLkDXT20bcYtosTZBjJTZWMJaDGUobRPnKWEbt7P8G/CVwj+LKBYxYcl65Bs0m8Ii2JZObV/41E/44oNBbTT6VnLqrH1BjRfNgToFyoYZToIU6gJw+lDGgt/afrHnDeR8fo6fgHkoHZKHxctrFraqhPOEX+SW/RD5ra4/WxZTBDAcOelVyZhpZ0V6HTQuS0IuD/sy9RD9W59TrF0oFH8kP6H1F3EbhrMfM/wkGJqxcBEMPIlGjUgoOCOY4tgCsAuyKcqelTUJIoL5uTuk06fd+1+B0t8j//vY7eWDCGwHAYrOCbL954GsjqhEOd/SL8vW6cT4Eh+DaWzKpvnl+bEN+G7wkI9etJ4B8NugtDyE25Ikfn9nsBLIcPcuepnlcBQkTN4sC+w0I1AEm3Uo8MFOM=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxo/cGygmGP55Hjd3RI5yFpLqrtrtdd2PGw/FbMnxJJ
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLbUwjRfNWPOWmPM9kXykw3bNz7sYSt7DYbalJhzh+E3yGMACUO+HxFuSQ4lHBBXquZltdOcmR202cRP+4s05oI=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2+zJSXp4XBwGccVvswqz0/27MxV0mWhHJ9EKngmPOQ2Et2f+QArNFJsEaUEJankaYSrISVt8m0QscyZhZUgrxp07g0OV9pVQ2pkqF/CSC7RnN96odOHOeQjRmSOj9vF8Q3EeyRZ7MS1CWH6TT+jYOD77TFol6cQhi7o5bzgAdL6yB/ili/PG3bBxtbYtNwSqCSpiGaN8z8j/REszkW2GM6wvDGXk9NgNfBZT4goP4O3qz/wVeMM/OQFGQa/34tMNX3QEE/XOdAUIRXXLw0vmVj7oRDzGVMc12TDalGOqphS+LkUS4PB+ns/IaplTUzc8zlwhycQQPxnzEcm+z3QP8Bo+iBGw+aKpc5UTMMtZocXrjHCv0Q6irXug6N6b7aaANiHMmveZua/Gjp6Ef//Q/+thKtkvcvvhUDZknHLDrHGT5QbVQYjN23MyFdWCu6MgpBw8NNyeI5sO605lOrxk2oXwX19ah7Qt7iAU7KRijLzQBjnMjNb6bcSOCFXVzpl0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxmfzZIbNhcux/tJpdvzaDW/iX/PRMqNcEGpeyKOTEV
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANBfiBul8lZFa5T9kjEYk719DZo4CtW2bTDn+SPcbu/2U71Ms3Qc1tvqiM9B/ciT9t/uzxk25klpGuFqieJFkk=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDb8D90laelhslbtmfz72Mp6Q7iCMu+KiPRuBFH59nBtb1LmjrIFjvU1qZnJ+wipHW+bRcdDzNWNM8KJ4IImBqFxbrg17RhHeunE84nnR8leX3OYiMZumpygvXYCykppXcKbe6pfxYUtyTc8Tz3bNoayi7uGoKgN/iaUeADLuyJUDDVyusj2q7uIj7gZ6PbtorR5cUUn0wBZTo3Jx84NmdiJr/xDGrtfawsV6ATz+Rpx3vzz4EE4dq4wN3eTUJiPCpc4jbTvHpp0GdJTK1BkZ4IANgw3a+loOO2MHq2JgMRjKJrH7sqrw7s9XgzHSh/ufOmEKAtgw75tWExEcy/05QGGbR2jnIKde4vVIS5JheT1z4gYASjKEEidjisDxig5nigPddxe3nSxKRQczKXPV+KUOB14AljRbnyqgbw4Dv9wtnkFL/QLMXFA0/NaOAZxhI+fOoAcg+No2ZsB95IgQ49ay/LN011x9o1vfwVPfReOtkjpVxQB8oCXhA53BfrG3M=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAtzqd+HKKUdtdjsFK/O61rbaIfH2/ANnbsFBvd1WLXA
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOyw0g2rIQxTWmEkqBGUUvYwuDopCg/ppyBGUh5LatbQKlwO7AkEzPUhEeFZv2/qzobLbOH4kVCTAQVjiQm//WM=
                                              create=True mode=0644 path=/tmp/ansible.7z2ld17y state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:52 compute-1 sudo[108960]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:52 compute-1 ceph-mon[80926]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:52:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:52:52 compute-1 sudo[109114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcqmqcmqqfgrzrevcmxoechfyacbpyjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405972.3987815-202-239693719765667/AnsiballZ_command.py'
Oct 02 11:52:52 compute-1 sudo[109114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:53.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:53 compute-1 python3.9[109116]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.7z2ld17y' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:52:53 compute-1 sudo[109114]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:53 compute-1 sudo[109268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aluruwhzkxcjzafhkrcbnlrucfbtiozn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405973.4531128-226-20694387150380/AnsiballZ_file.py'
Oct 02 11:52:53 compute-1 sudo[109268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:52:54 compute-1 python3.9[109270]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.7z2ld17y state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:52:54 compute-1 sudo[109268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-1 ceph-mon[80926]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:54 compute-1 sshd-session[107923]: Connection closed by 192.168.122.30 port 46250
Oct 02 11:52:54 compute-1 sshd-session[107920]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:52:54 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Oct 02 11:52:54 compute-1 systemd[1]: session-41.scope: Consumed 4.853s CPU time.
Oct 02 11:52:54 compute-1 systemd-logind[795]: Session 41 logged out. Waiting for processes to exit.
Oct 02 11:52:54 compute-1 systemd-logind[795]: Removed session 41.
Oct 02 11:52:54 compute-1 sudo[109295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:52:54 compute-1 sudo[109295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-1 sudo[109295]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:54 compute-1 sudo[109320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:52:54 compute-1 sudo[109320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:52:54 compute-1 sudo[109320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:52:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:55.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:52:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:52:56 compute-1 ceph-mon[80926]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:56.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:58 compute-1 ceph-mon[80926]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:52:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:52:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:52:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:52:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:52:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:52:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:00 compute-1 sshd-session[109345]: Accepted publickey for zuul from 192.168.122.30 port 42264 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:00 compute-1 systemd-logind[795]: New session 42 of user zuul.
Oct 02 11:53:00 compute-1 systemd[1]: Started Session 42 of User zuul.
Oct 02 11:53:00 compute-1 sshd-session[109345]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:00 compute-1 ceph-mon[80926]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:01 compute-1 python3.9[109498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:02 compute-1 sudo[109652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkhvthxbyuulfpekzxmxiljbgoxrsgda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405981.7105832-62-151570791736941/AnsiballZ_systemd.py'
Oct 02 11:53:02 compute-1 sudo[109652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:02 compute-1 python3.9[109654]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 11:53:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:02 compute-1 sudo[109652]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:03 compute-1 ceph-mon[80926]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:03 compute-1 sudo[109806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sozvowrkuxyugwiqbprrnkxhxtqqytuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405982.8528264-86-33021498628844/AnsiballZ_systemd.py'
Oct 02 11:53:03 compute-1 sudo[109806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:03 compute-1 python3.9[109808]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 11:53:03 compute-1 sudo[109806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:04 compute-1 ceph-mon[80926]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:04 compute-1 sudo[109959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzcwvsbjgwurtfqlaulbtptfihxcutmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405983.8873708-113-98666576352468/AnsiballZ_command.py'
Oct 02 11:53:04 compute-1 sudo[109959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:04 compute-1 python3.9[109961]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:53:04 compute-1 sudo[109959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:04.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:05 compute-1 sudo[110112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqllqrrswdizodlybwnamjribfnlaeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405984.7178102-137-279304560391208/AnsiballZ_stat.py'
Oct 02 11:53:05 compute-1 sudo[110112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:05 compute-1 python3.9[110114]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:05 compute-1 sudo[110112]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:05 compute-1 sudo[110264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxcnfsgbhmaozfkjwbgdnfxvafcnwopq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405985.5135353-164-185499868838217/AnsiballZ_file.py'
Oct 02 11:53:05 compute-1 sudo[110264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:06 compute-1 python3.9[110266]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:06 compute-1 sudo[110264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:06 compute-1 sshd-session[109348]: Connection closed by 192.168.122.30 port 42264
Oct 02 11:53:06 compute-1 sshd-session[109345]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:06 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Oct 02 11:53:06 compute-1 systemd[1]: session-42.scope: Consumed 3.676s CPU time.
Oct 02 11:53:06 compute-1 systemd-logind[795]: Session 42 logged out. Waiting for processes to exit.
Oct 02 11:53:06 compute-1 systemd-logind[795]: Removed session 42.
Oct 02 11:53:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:07 compute-1 ceph-mon[80926]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-1 ceph-mon[80926]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:08.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:11.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:11 compute-1 ceph-mon[80926]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:12 compute-1 sshd-session[110291]: Accepted publickey for zuul from 192.168.122.30 port 59690 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:12 compute-1 systemd-logind[795]: New session 43 of user zuul.
Oct 02 11:53:12 compute-1 systemd[1]: Started Session 43 of User zuul.
Oct 02 11:53:12 compute-1 sshd-session[110291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:12 compute-1 ceph-mon[80926]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:12.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:13 compute-1 python3.9[110444]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:13.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:13 compute-1 sudo[110598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbpeppmtyyjmonnvpvflrmvwppqtgrof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405993.5415282-68-172515982983165/AnsiballZ_setup.py'
Oct 02 11:53:13 compute-1 sudo[110598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:14 compute-1 python3.9[110600]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:53:14 compute-1 sudo[110598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:14 compute-1 sudo[110682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrgnjrdirawvqlvaeusjtdqjbaybiqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759405993.5415282-68-172515982983165/AnsiballZ_dnf.py'
Oct 02 11:53:14 compute-1 sudo[110682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:14 compute-1 python3.9[110684]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 02 11:53:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:15 compute-1 ceph-mon[80926]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:16 compute-1 ceph-mon[80926]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:16 compute-1 sudo[110682]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:17 compute-1 python3.9[110835]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:53:18 compute-1 python3.9[110986]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:53:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:19 compute-1 ceph-mon[80926]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:19 compute-1 python3.9[111136]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:19 compute-1 python3.9[111286]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:53:20 compute-1 ceph-mon[80926]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:20 compute-1 sshd-session[110294]: Connection closed by 192.168.122.30 port 59690
Oct 02 11:53:20 compute-1 sshd-session[110291]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:20 compute-1 systemd-logind[795]: Session 43 logged out. Waiting for processes to exit.
Oct 02 11:53:20 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Oct 02 11:53:20 compute-1 systemd[1]: session-43.scope: Consumed 5.639s CPU time.
Oct 02 11:53:20 compute-1 systemd-logind[795]: Removed session 43.
Oct 02 11:53:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:20.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:23 compute-1 ceph-mon[80926]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:24 compute-1 ceph-mon[80926]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.511505) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004511533, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 250, "total_data_size": 1616341, "memory_usage": 1639912, "flush_reason": "Manual Compaction"}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004516817, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 701185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9786, "largest_seqno": 10590, "table_properties": {"data_size": 697888, "index_size": 1141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8288, "raw_average_key_size": 19, "raw_value_size": 691034, "raw_average_value_size": 1653, "num_data_blocks": 50, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405944, "oldest_key_time": 1759405944, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5345 microseconds, and 2554 cpu microseconds.
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.516850) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 701185 bytes OK
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.516867) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518173) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518190) EVENT_LOG_v1 {"time_micros": 1759406004518185, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1612132, prev total WAL file size 1612132, number of live WAL files 2.
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.519050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(684KB)], [18(9502KB)]
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004519121, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10431730, "oldest_snapshot_seqno": -1}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3746 keys, 7707753 bytes, temperature: kUnknown
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004549414, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7707753, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7679948, "index_size": 17327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90955, "raw_average_key_size": 24, "raw_value_size": 7609481, "raw_average_value_size": 2031, "num_data_blocks": 757, "num_entries": 3746, "num_filter_entries": 3746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406004, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.549650) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7707753 bytes
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.550675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 343.6 rd, 253.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(25.9) write-amplify(11.0) OK, records in: 4237, records dropped: 491 output_compression: NoCompression
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.550696) EVENT_LOG_v1 {"time_micros": 1759406004550685, "job": 8, "event": "compaction_finished", "compaction_time_micros": 30360, "compaction_time_cpu_micros": 17985, "output_level": 6, "num_output_files": 1, "total_output_size": 7707753, "num_input_records": 4237, "num_output_records": 3746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004550908, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406004552251, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.518937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:53:24.552330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:53:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:25 compute-1 sshd-session[111311]: Accepted publickey for zuul from 192.168.122.30 port 41244 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:53:25 compute-1 systemd-logind[795]: New session 44 of user zuul.
Oct 02 11:53:25 compute-1 systemd[1]: Started Session 44 of User zuul.
Oct 02 11:53:25 compute-1 sshd-session[111311]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:53:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:26 compute-1 python3.9[111464]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:53:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:27 compute-1 ceph-mon[80926]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:27 compute-1 sudo[111618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aapcxjreusefmsqybgaxatovpdyaladb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406007.127742-115-263145341497923/AnsiballZ_file.py'
Oct 02 11:53:27 compute-1 sudo[111618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:27 compute-1 python3.9[111620]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:27 compute-1 sudo[111618]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:28 compute-1 ceph-mon[80926]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:28 compute-1 sudo[111770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxknkctyawlbmbwvqpkomlkzvxwlddgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406007.886045-115-15328622164942/AnsiballZ_file.py'
Oct 02 11:53:28 compute-1 sudo[111770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:28 compute-1 python3.9[111772]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:28 compute-1 sudo[111770]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:28.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:29.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:29 compute-1 sudo[111922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usyuxpgzhquavtetbefdadwjhcjcnabw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406008.5954194-158-106706083065266/AnsiballZ_stat.py'
Oct 02 11:53:29 compute-1 sudo[111922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:29 compute-1 python3.9[111924]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:29 compute-1 sudo[111922]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:30 compute-1 sudo[112045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqxbubokfobvahaqtmxcapzxtttvymoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406008.5954194-158-106706083065266/AnsiballZ_copy.py'
Oct 02 11:53:30 compute-1 sudo[112045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:30 compute-1 python3.9[112047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406008.5954194-158-106706083065266/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=5701859e6f99bebb728ba839c69b6b2a9ec878f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:30 compute-1 sudo[112045]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:30 compute-1 sudo[112197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilirdwhingheektkphytpbenaespmels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406010.3883438-158-133724666753749/AnsiballZ_stat.py'
Oct 02 11:53:30 compute-1 sudo[112197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:30 compute-1 python3.9[112199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:30 compute-1 sudo[112197]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:31.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:31 compute-1 ceph-mon[80926]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:31 compute-1 sudo[112320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldcbxiuwsyjxipnkhmcpptghxuzmxhfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406010.3883438-158-133724666753749/AnsiballZ_copy.py'
Oct 02 11:53:31 compute-1 sudo[112320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:31 compute-1 python3.9[112322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406010.3883438-158-133724666753749/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=3ecf5e4f77066a77590b5118d192b2b931dec8bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:31 compute-1 sudo[112320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:31 compute-1 sudo[112472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwggpivimnplmtugnjagpnfrrxyfbutc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406011.485713-158-242066637855815/AnsiballZ_stat.py'
Oct 02 11:53:31 compute-1 sudo[112472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:31 compute-1 python3.9[112474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:31 compute-1 sudo[112472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:32 compute-1 ceph-mon[80926]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:32 compute-1 sudo[112595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuakvujdcrxbbvtjdsslyegaitwhvaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406011.485713-158-242066637855815/AnsiballZ_copy.py'
Oct 02 11:53:32 compute-1 sudo[112595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:32 compute-1 python3.9[112597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406011.485713-158-242066637855815/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=4aa0139080ac0ab9e64ae577109c46bec3764980 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:32 compute-1 sudo[112595]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:32 compute-1 sudo[112747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kttsczlcgojzggpdxoudmrftzoennmas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406012.655119-288-276957752323355/AnsiballZ_file.py'
Oct 02 11:53:32 compute-1 sudo[112747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:33.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:33 compute-1 python3.9[112749]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:33 compute-1 sudo[112747]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:33 compute-1 sudo[112899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lndzrzdyrcekpkilkhtrhvizpskegsse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406013.2580767-288-59696250464675/AnsiballZ_file.py'
Oct 02 11:53:33 compute-1 sudo[112899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:33 compute-1 python3.9[112901]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:33 compute-1 sudo[112899]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:34 compute-1 sudo[113051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfxegkfgcmrpitvrvdbsvvxavvgsidxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406013.9352722-335-249154459684089/AnsiballZ_stat.py'
Oct 02 11:53:34 compute-1 sudo[113051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:34 compute-1 python3.9[113053]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:34 compute-1 sudo[113051]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:34 compute-1 sudo[113174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmxlklkwxctzawjphatvikqczoktdpww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406013.9352722-335-249154459684089/AnsiballZ_copy.py'
Oct 02 11:53:34 compute-1 sudo[113174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:34 compute-1 python3.9[113176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406013.9352722-335-249154459684089/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=66f0ccf261dd8839c4c8f774a0a21a880477d530 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:35 compute-1 sudo[113174]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:35.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:35 compute-1 ceph-mon[80926]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:35 compute-1 sudo[113326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsdgotwzwdyyieapelabeqpxqncywbny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406015.1311266-335-101122478434008/AnsiballZ_stat.py'
Oct 02 11:53:35 compute-1 sudo[113326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:35 compute-1 python3.9[113328]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:35 compute-1 sudo[113326]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:35 compute-1 sudo[113449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgofacrvkytvwoltfuezizwjqcvihfxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406015.1311266-335-101122478434008/AnsiballZ_copy.py'
Oct 02 11:53:35 compute-1 sudo[113449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:36 compute-1 ceph-mon[80926]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:36 compute-1 python3.9[113451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406015.1311266-335-101122478434008/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:36 compute-1 sudo[113449]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:36 compute-1 sudo[113601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwbdwznfknptmbfpzrycujwugzxrzdzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406016.3632164-335-244037381185802/AnsiballZ_stat.py'
Oct 02 11:53:36 compute-1 sudo[113601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:36 compute-1 python3.9[113603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:36 compute-1 sudo[113601]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:37.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:37 compute-1 sudo[113724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlwlxxyjcwkbyfasycmyzokwkecctol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406016.3632164-335-244037381185802/AnsiballZ_copy.py'
Oct 02 11:53:37 compute-1 sudo[113724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:37 compute-1 python3.9[113726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406016.3632164-335-244037381185802/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=0d941fc9451c5d9a7a910488cd06b09db9bdf3b8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:37 compute-1 sudo[113724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:37 compute-1 sudo[113876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcasxzufcnstmiitnuiaawevpuinlwxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406017.5302413-465-84134013616648/AnsiballZ_file.py'
Oct 02 11:53:37 compute-1 sudo[113876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:38 compute-1 python3.9[113878]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:38 compute-1 sudo[113876]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:38 compute-1 sudo[114028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzbvttofkvijeyestqtezriqqvhduah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406018.1519055-465-148583342729225/AnsiballZ_file.py'
Oct 02 11:53:38 compute-1 sudo[114028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:38 compute-1 python3.9[114030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:38 compute-1 sudo[114028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:39 compute-1 sudo[114180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmfphqhsnuroeqjqslrvxjufcijgwqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406018.7772686-511-189949252078132/AnsiballZ_stat.py'
Oct 02 11:53:39 compute-1 sudo[114180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:39.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:39 compute-1 ceph-mon[80926]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:39 compute-1 python3.9[114182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:39 compute-1 sudo[114180]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:39 compute-1 sudo[114303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxzkxusvmajxlaialcahtepzafxzzxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406018.7772686-511-189949252078132/AnsiballZ_copy.py'
Oct 02 11:53:39 compute-1 sudo[114303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:39 compute-1 python3.9[114305]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406018.7772686-511-189949252078132/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=d14b1899eaab79c298e4cf2f0edf593be3adc735 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:39 compute-1 sudo[114303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:40 compute-1 ceph-mon[80926]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:40 compute-1 sudo[114455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaubjkhmqbxujunfotcpqhpdvgtexbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406019.9655387-511-172897595749014/AnsiballZ_stat.py'
Oct 02 11:53:40 compute-1 sudo[114455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:40 compute-1 python3.9[114457]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:40 compute-1 sudo[114455]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:40 compute-1 sudo[114578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixmnyudmuxqzzpuwydbkylictozjqujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406019.9655387-511-172897595749014/AnsiballZ_copy.py'
Oct 02 11:53:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:40 compute-1 sudo[114578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:40 compute-1 python3.9[114580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406019.9655387-511-172897595749014/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=900083829e6d3cf8d122351d6d42abd08dd175ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:40 compute-1 sudo[114578]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:41.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:41 compute-1 sudo[114730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdulvdbdsqwmnjgprzycerqjphibnup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406021.0884225-511-206665211907291/AnsiballZ_stat.py'
Oct 02 11:53:41 compute-1 sudo[114730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:41 compute-1 python3.9[114732]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:41 compute-1 sudo[114730]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:41 compute-1 sudo[114853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfzikcjezvyomrxhyqfsrxhyzlceyyom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406021.0884225-511-206665211907291/AnsiballZ_copy.py'
Oct 02 11:53:41 compute-1 sudo[114853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:42 compute-1 python3.9[114855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406021.0884225-511-206665211907291/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=3ac8674b28901153dbb19b53e5670329e2a5dc76 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:42 compute-1 sudo[114853]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:42 compute-1 sudo[115005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkizpucfxrctysmexjanechswkkmhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406022.7317169-672-258779095973255/AnsiballZ_file.py'
Oct 02 11:53:42 compute-1 sudo[115005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:43.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:43 compute-1 ceph-mon[80926]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:43 compute-1 python3.9[115007]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:43 compute-1 sudo[115005]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:43 compute-1 sudo[115157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjayczefmrpqbpbxnxcgaogqrobsigqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406023.3546689-708-31863176175071/AnsiballZ_stat.py'
Oct 02 11:53:43 compute-1 sudo[115157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:43 compute-1 python3.9[115159]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:43 compute-1 sudo[115157]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:44 compute-1 sudo[115280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koktquevetqcvyyzyczkgfejmytznltz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406023.3546689-708-31863176175071/AnsiballZ_copy.py'
Oct 02 11:53:44 compute-1 sudo[115280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:44 compute-1 ceph-mon[80926]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:44 compute-1 python3.9[115282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406023.3546689-708-31863176175071/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:44 compute-1 sudo[115280]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:44 compute-1 sudo[115432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzycbejviiiikmukparsgwgaytnlijkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406024.6660068-755-102044133517807/AnsiballZ_file.py'
Oct 02 11:53:44 compute-1 sudo[115432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:45.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:45 compute-1 python3.9[115434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:45 compute-1 sudo[115432]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:45 compute-1 sudo[115584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzovpvszcmbgfmydkaoilxwzjuhaaqts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406025.3305156-780-80847351519556/AnsiballZ_stat.py'
Oct 02 11:53:45 compute-1 sudo[115584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:45 compute-1 python3.9[115586]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:45 compute-1 sudo[115584]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:46 compute-1 sudo[115707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grtxnmhilkwoqpvxecyyytohhqrppuch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406025.3305156-780-80847351519556/AnsiballZ_copy.py'
Oct 02 11:53:46 compute-1 sudo[115707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:46 compute-1 python3.9[115709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406025.3305156-780-80847351519556/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:46 compute-1 sudo[115707]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:46 compute-1 sudo[115859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxuntqqmsiniemldvznkntednchbtcsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406026.5265093-827-200122456536869/AnsiballZ_file.py'
Oct 02 11:53:46 compute-1 sudo[115859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:46 compute-1 python3.9[115861]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:46 compute-1 sudo[115859]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:47.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:47 compute-1 ceph-mon[80926]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:47 compute-1 sudo[116011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyyhjyhpkhjbekjdiipwzbmyjjjwjgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406027.1147988-851-91023928664388/AnsiballZ_stat.py'
Oct 02 11:53:47 compute-1 sudo[116011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:47 compute-1 python3.9[116013]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:47 compute-1 sudo[116011]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:47 compute-1 sudo[116134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rougkngtlabslgqnfuyghxkqlsfxtgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406027.1147988-851-91023928664388/AnsiballZ_copy.py'
Oct 02 11:53:47 compute-1 sudo[116134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:48 compute-1 python3.9[116136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406027.1147988-851-91023928664388/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:48 compute-1 sudo[116134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:48 compute-1 ceph-mon[80926]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:48 compute-1 sudo[116286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqmyjhvqbwuefnspfqikgfhhefefkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406028.319719-899-236197130377231/AnsiballZ_file.py'
Oct 02 11:53:48 compute-1 sudo[116286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:48 compute-1 python3.9[116288]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:48 compute-1 sudo[116286]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:49.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:49 compute-1 sudo[116438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxcztwswbajmaznxixegcnzycfmiytcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406028.9254313-921-86414246127607/AnsiballZ_stat.py'
Oct 02 11:53:49 compute-1 sudo[116438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:49 compute-1 python3.9[116440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:49 compute-1 sudo[116438]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:49 compute-1 sudo[116561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpovzjlydqantboemovhcxgynufbueol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406028.9254313-921-86414246127607/AnsiballZ_copy.py'
Oct 02 11:53:49 compute-1 sudo[116561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:49 compute-1 python3.9[116563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406028.9254313-921-86414246127607/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:49 compute-1 sudo[116561]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:50 compute-1 sudo[116713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmpasfbihjydohueiyaqaicotyomifmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406030.084173-962-4070781562155/AnsiballZ_file.py'
Oct 02 11:53:50 compute-1 sudo[116713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:50 compute-1 python3.9[116715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:50 compute-1 sudo[116713]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:51 compute-1 sudo[116865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlrmsdssmhfawpybuaqidlrrmhzucbyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406030.7528882-986-86803612188948/AnsiballZ_stat.py'
Oct 02 11:53:51 compute-1 sudo[116865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:51 compute-1 ceph-mon[80926]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:51 compute-1 python3.9[116867]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:51 compute-1 sudo[116865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:51 compute-1 sudo[116988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abrpqacosdlbrgxyhpkeckkyvcohzrbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406030.7528882-986-86803612188948/AnsiballZ_copy.py'
Oct 02 11:53:51 compute-1 sudo[116988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:51 compute-1 python3.9[116990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406030.7528882-986-86803612188948/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:51 compute-1 sudo[116988]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:52 compute-1 ceph-mon[80926]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:52 compute-1 sudo[117140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcfctztkqatzavotkgzziqpsrlnqlyrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406031.9122353-1030-185393811756887/AnsiballZ_file.py'
Oct 02 11:53:52 compute-1 sudo[117140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:52 compute-1 python3.9[117142]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:53:52 compute-1 sudo[117140]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:52 compute-1 sudo[117292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btedqnjxdsqmaunffdqakrbpgrlerton ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406032.5420167-1054-179838425653125/AnsiballZ_stat.py'
Oct 02 11:53:52 compute-1 sudo[117292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:53 compute-1 python3.9[117294]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:53:53 compute-1 sudo[117292]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:53:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:53:53 compute-1 sudo[117415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvvvycxtcczjgqcoyvyvaxcdrsljtavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406032.5420167-1054-179838425653125/AnsiballZ_copy.py'
Oct 02 11:53:53 compute-1 sudo[117415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:53:53 compute-1 python3.9[117417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406032.5420167-1054-179838425653125/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=fcdb52e49c4d8b9ffc79ce29410702893676d42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:53:53 compute-1 sudo[117415]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:54.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:55 compute-1 sudo[117442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-1 sudo[117442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-1 sudo[117442]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-1 sudo[117467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:53:55 compute-1 sudo[117467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-1 sudo[117467]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:55 compute-1 ceph-mon[80926]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:55 compute-1 sudo[117492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:53:55 compute-1 sudo[117492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-1 sudo[117492]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:55 compute-1 sshd-session[111314]: Connection closed by 192.168.122.30 port 41244
Oct 02 11:53:55 compute-1 sudo[117517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:53:55 compute-1 sudo[117517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:53:55 compute-1 sshd-session[111311]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:53:55 compute-1 systemd-logind[795]: Session 44 logged out. Waiting for processes to exit.
Oct 02 11:53:55 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Oct 02 11:53:55 compute-1 systemd[1]: session-44.scope: Consumed 22.236s CPU time.
Oct 02 11:53:55 compute-1 systemd-logind[795]: Removed session 44.
Oct 02 11:53:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:53:55 compute-1 sudo[117517]: pam_unix(sudo:session): session closed for user root
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:53:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:53:56 compute-1 ceph-mon[80926]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:58 compute-1 ceph-mon[80926]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:53:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:53:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:53:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:53:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:53:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:53:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:00 compute-1 sshd-session[117574]: Accepted publickey for zuul from 192.168.122.30 port 56986 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:54:00 compute-1 systemd-logind[795]: New session 45 of user zuul.
Oct 02 11:54:00 compute-1 systemd[1]: Started Session 45 of User zuul.
Oct 02 11:54:00 compute-1 sshd-session[117574]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:54:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:00.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:01 compute-1 sudo[117727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkoplhcflxxppxegxxiviyrwcfluglwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406040.5491664-32-110114614450651/AnsiballZ_file.py'
Oct 02 11:54:01 compute-1 sudo[117727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:01 compute-1 ceph-mon[80926]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:01 compute-1 python3.9[117729]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:01 compute-1 sudo[117727]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:01 compute-1 sudo[117879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cginckryqfnbcdblxnxictldtvbtyfsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406041.464536-68-212475890197705/AnsiballZ_stat.py'
Oct 02 11:54:01 compute-1 sudo[117879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:02 compute-1 python3.9[117881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:02 compute-1 sudo[117879]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-1 sudo[117882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:54:02 compute-1 sudo[117882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:02 compute-1 sudo[117882]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-1 sudo[117907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:54:02 compute-1 sudo[117907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:54:02 compute-1 sudo[117907]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-1 sudo[118052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbczdauxuxwosobcjuljhgbuibejomwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406041.464536-68-212475890197705/AnsiballZ_copy.py'
Oct 02 11:54:02 compute-1 sudo[118052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:02 compute-1 python3.9[118054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406041.464536-68-212475890197705/.source.conf _original_basename=ceph.conf follow=False checksum=bc6368cedc2ad3c8a4bd89508113374e22439583 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:02 compute-1 sudo[118052]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:54:02 compute-1 ceph-mon[80926]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:03 compute-1 sudo[118204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafxaxptgdzhtlznwrietfzbhrltfnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406042.8659356-68-89280977594074/AnsiballZ_stat.py'
Oct 02 11:54:03 compute-1 sudo[118204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:03 compute-1 python3.9[118206]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:03 compute-1 sudo[118204]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:03 compute-1 sudo[118327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbytyzvazuttgxgewuknysqgiqczpkdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406042.8659356-68-89280977594074/AnsiballZ_copy.py'
Oct 02 11:54:03 compute-1 sudo[118327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:03 compute-1 python3.9[118329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406042.8659356-68-89280977594074/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=75f34a13e5eafe465b3328865c9fc53d2eab5578 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:03 compute-1 sudo[118327]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:04 compute-1 sshd-session[117577]: Connection closed by 192.168.122.30 port 56986
Oct 02 11:54:04 compute-1 sshd-session[117574]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:54:04 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Oct 02 11:54:04 compute-1 systemd[1]: session-45.scope: Consumed 2.519s CPU time.
Oct 02 11:54:04 compute-1 systemd-logind[795]: Session 45 logged out. Waiting for processes to exit.
Oct 02 11:54:04 compute-1 systemd-logind[795]: Removed session 45.
Oct 02 11:54:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:04.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:05 compute-1 ceph-mon[80926]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:06 compute-1 ceph-mon[80926]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:07.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:08 compute-1 ceph-mon[80926]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:09.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:09 compute-1 sshd-session[118354]: Accepted publickey for zuul from 192.168.122.30 port 49766 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:54:09 compute-1 systemd-logind[795]: New session 46 of user zuul.
Oct 02 11:54:09 compute-1 systemd[1]: Started Session 46 of User zuul.
Oct 02 11:54:09 compute-1 sshd-session[118354]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:54:10 compute-1 ceph-mon[80926]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:11 compute-1 python3.9[118507]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:11.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:12 compute-1 sudo[118661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpxhdqtqtitnytrakmxgmevsatqysqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406051.5692601-68-245041387079275/AnsiballZ_file.py'
Oct 02 11:54:12 compute-1 sudo[118661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:12 compute-1 python3.9[118663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:12 compute-1 sudo[118661]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:12 compute-1 sudo[118813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firmzfqucsrckircuijjmmnjkrcnwfsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406052.3882086-68-20931345549763/AnsiballZ_file.py'
Oct 02 11:54:12 compute-1 sudo[118813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:12.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:12 compute-1 python3.9[118815]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:12 compute-1 sudo[118813]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:13 compute-1 ceph-mon[80926]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:13.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:13 compute-1 python3.9[118965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:14 compute-1 ceph-mon[80926]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:14 compute-1 sudo[119115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewfmlvsupqbsjlzrsisvfcywqvkohigs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406053.855695-137-41913737553372/AnsiballZ_seboolean.py'
Oct 02 11:54:14 compute-1 sudo[119115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:14 compute-1 python3.9[119117]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:54:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:54:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:15.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:54:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:15 compute-1 sudo[119115]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:16 compute-1 ceph-mon[80926]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:16 compute-1 sudo[119271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxsgvsjzfbnpbcfivkpdbkgruexvlgkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406056.044188-167-162315029838724/AnsiballZ_setup.py'
Oct 02 11:54:16 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 02 11:54:16 compute-1 sudo[119271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:16 compute-1 python3.9[119273]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:54:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:16.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:16 compute-1 sudo[119271]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:17.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:17 compute-1 sudo[119355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuqzzxjlwhyzlrdqersiuuabmdygpvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406056.044188-167-162315029838724/AnsiballZ_dnf.py'
Oct 02 11:54:17 compute-1 sudo[119355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:17 compute-1 python3.9[119357]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:54:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:18 compute-1 sudo[119355]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:19 compute-1 ceph-mon[80926]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:19.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:19 compute-1 sudo[119508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfklhvvnevcjetzpgebvacrbzdclrylc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406059.0542233-203-82323080251261/AnsiballZ_systemd.py'
Oct 02 11:54:19 compute-1 sudo[119508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:19 compute-1 python3.9[119510]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:54:19 compute-1 sudo[119508]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:20 compute-1 ceph-mon[80926]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:20 compute-1 sudo[119663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefabgcpaopeamyvexzjxbnpupyipcsj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406060.345494-227-36406052003393/AnsiballZ_edpm_nftables_snippet.py'
Oct 02 11:54:20 compute-1 sudo[119663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:20.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:20 compute-1 python3[119665]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 02 11:54:20 compute-1 sudo[119663]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:21.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:21 compute-1 sudo[119815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlavecqckftqonrstildwlofrynzscda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.2588603-254-151322050818005/AnsiballZ_file.py'
Oct 02 11:54:21 compute-1 sudo[119815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:21 compute-1 python3.9[119817]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:21 compute-1 sudo[119815]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:22 compute-1 sudo[119967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlrhexiomoitioinuqtgxlgvayszdyil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.9038231-278-189346455168429/AnsiballZ_stat.py'
Oct 02 11:54:22 compute-1 sudo[119967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:22 compute-1 python3.9[119969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:22 compute-1 sudo[119967]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:22 compute-1 sudo[120045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punhhbivmvyavlvonlrviljazukcrwji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406061.9038231-278-189346455168429/AnsiballZ_file.py'
Oct 02 11:54:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:22.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:22 compute-1 sudo[120045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:23 compute-1 python3.9[120047]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:23 compute-1 sudo[120045]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:23 compute-1 ceph-mon[80926]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:23 compute-1 sudo[120197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsogsbekumwvdwtlgzhjqvzcpaaxvyim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406063.2146122-314-177455878934539/AnsiballZ_stat.py'
Oct 02 11:54:23 compute-1 sudo[120197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:23 compute-1 python3.9[120199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:23 compute-1 sudo[120197]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:23 compute-1 sudo[120275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srsyghrylujvcihupwobiicvisffrwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406063.2146122-314-177455878934539/AnsiballZ_file.py'
Oct 02 11:54:23 compute-1 sudo[120275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:24 compute-1 python3.9[120277]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4v8kc0tg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:24 compute-1 sudo[120275]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:24 compute-1 ceph-mon[80926]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:24 compute-1 sudo[120427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ancgmsntwmdspbnzsiwwoxzlctogitzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406064.33582-350-242118844231871/AnsiballZ_stat.py'
Oct 02 11:54:24 compute-1 sudo[120427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:24.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:24 compute-1 python3.9[120429]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:24 compute-1 sudo[120427]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:25 compute-1 sudo[120505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iecdcnfkvodnnsxdjznasmukqltgqknw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406064.33582-350-242118844231871/AnsiballZ_file.py'
Oct 02 11:54:25 compute-1 sudo[120505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:25.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:25 compute-1 python3.9[120507]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:25 compute-1 sudo[120505]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:25 compute-1 sudo[120657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smbqqqwvozcsfoemivxezvkkgngyucce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406065.483633-389-32570930320459/AnsiballZ_command.py'
Oct 02 11:54:25 compute-1 sudo[120657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:26 compute-1 python3.9[120659]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:26 compute-1 sudo[120657]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:26 compute-1 ceph-mon[80926]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:26 compute-1 sudo[120810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baaycyvpzdiubpepntpqraadphgmeyeg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406066.3322217-413-275235272913711/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 11:54:26 compute-1 sudo[120810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:26.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:26 compute-1 python3[120812]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 11:54:26 compute-1 sudo[120810]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:54:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:27.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:54:27 compute-1 sudo[120962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxztotqwbktmjcdtwlamuejsokhpenas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406067.1561906-437-59711288727108/AnsiballZ_stat.py'
Oct 02 11:54:27 compute-1 sudo[120962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:27 compute-1 python3.9[120964]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:27 compute-1 sudo[120962]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:28 compute-1 ceph-mon[80926]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:28 compute-1 sudo[121087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arauvisvcnofyvfbpduqzmaaqghzfqdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406067.1561906-437-59711288727108/AnsiballZ_copy.py'
Oct 02 11:54:28 compute-1 sudo[121087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:28 compute-1 python3.9[121089]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406067.1561906-437-59711288727108/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:28 compute-1 sudo[121087]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:28.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:28 compute-1 sudo[121239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdqhsembllkdgqywmuwgcwxgvsudxqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406068.677015-482-188917764686674/AnsiballZ_stat.py'
Oct 02 11:54:28 compute-1 sudo[121239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:29 compute-1 python3.9[121241]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:29.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:29 compute-1 sudo[121239]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:29 compute-1 sudo[121364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmhlwxctdrdzwvruagwhedewfmzqvpyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406068.677015-482-188917764686674/AnsiballZ_copy.py'
Oct 02 11:54:29 compute-1 sudo[121364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:29 compute-1 python3.9[121366]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406068.677015-482-188917764686674/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:29 compute-1 sudo[121364]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:30 compute-1 sudo[121516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cseatzzxqmutfpvufbqficixeeijxfja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406069.8822687-527-49518981086114/AnsiballZ_stat.py'
Oct 02 11:54:30 compute-1 sudo[121516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:30 compute-1 python3.9[121518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:30 compute-1 sudo[121516]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:30 compute-1 ceph-mon[80926]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:30 compute-1 sudo[121641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajoyzvolbbpoaavivhrztytlalemlesu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406069.8822687-527-49518981086114/AnsiballZ_copy.py'
Oct 02 11:54:30 compute-1 sudo[121641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:30.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:30 compute-1 python3.9[121643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406069.8822687-527-49518981086114/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:30 compute-1 sudo[121641]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:31.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:31 compute-1 sudo[121793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjhhqvyovlcgyafprdbsyzrjnjcmbxjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406071.1252136-572-86068142432920/AnsiballZ_stat.py'
Oct 02 11:54:31 compute-1 sudo[121793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:31 compute-1 python3.9[121795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:31 compute-1 sudo[121793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:31 compute-1 sudo[121918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzkicqflvdjolnbblevfyrkhecrznlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406071.1252136-572-86068142432920/AnsiballZ_copy.py'
Oct 02 11:54:31 compute-1 sudo[121918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:32 compute-1 python3.9[121920]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406071.1252136-572-86068142432920/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:32 compute-1 sudo[121918]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:32 compute-1 ceph-mon[80926]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:32 compute-1 sudo[122070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhaxdhxizknwrsxwsvtpfqydsvwzvmpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406072.3505304-617-260512025487499/AnsiballZ_stat.py'
Oct 02 11:54:32 compute-1 sudo[122070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:32 compute-1 python3.9[122072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:32 compute-1 sudo[122070]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:33.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:33 compute-1 sudo[122195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbziyytwwoehfjdsohetknnjwzivmviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406072.3505304-617-260512025487499/AnsiballZ_copy.py'
Oct 02 11:54:33 compute-1 sudo[122195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:33 compute-1 python3.9[122197]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406072.3505304-617-260512025487499/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:33 compute-1 sudo[122195]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:33 compute-1 sudo[122347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwitlovdfrhqavysefycdihiixkswowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406073.6182277-662-213895273529156/AnsiballZ_file.py'
Oct 02 11:54:33 compute-1 sudo[122347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:34 compute-1 python3.9[122349]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:34 compute-1 sudo[122347]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:34 compute-1 ceph-mon[80926]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:34 compute-1 sudo[122499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrscvqknjkzpecdyfrbfzvxihgopobpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406074.2856472-686-269071807440977/AnsiballZ_command.py'
Oct 02 11:54:34 compute-1 sudo[122499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:34 compute-1 python3.9[122501]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:34.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:34 compute-1 sudo[122499]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:35.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:35 compute-1 sudo[122654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzrdegjtrszrkooyhnmfxncdfssintfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406074.9974976-710-28848219531573/AnsiballZ_blockinfile.py'
Oct 02 11:54:35 compute-1 sudo[122654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:35 compute-1 python3.9[122656]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:35 compute-1 sudo[122654]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:36 compute-1 sudo[122806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byqvotslbfdumvbultimduyddizbgkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406075.8938231-737-53713343148758/AnsiballZ_command.py'
Oct 02 11:54:36 compute-1 sudo[122806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:36 compute-1 ceph-mon[80926]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:36 compute-1 python3.9[122808]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:36 compute-1 sudo[122806]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:36.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:36 compute-1 sudo[122959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckwvbwkwmuudscnnpixotoisaimpwskk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406076.5951455-761-278495152223702/AnsiballZ_stat.py'
Oct 02 11:54:36 compute-1 sudo[122959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:37 compute-1 python3.9[122961]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:54:37 compute-1 sudo[122959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:37.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:37 compute-1 sudo[123113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwufrfmwoasqpkqqkqfubtncddpqbxwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406077.2494583-785-94714222439676/AnsiballZ_command.py'
Oct 02 11:54:37 compute-1 sudo[123113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:37 compute-1 python3.9[123115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:37 compute-1 sudo[123113]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:38 compute-1 ceph-mon[80926]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:38 compute-1 sudo[123268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duamdeyhtmixjbebwjpnwtilftovmqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406077.9284887-809-138091548359250/AnsiballZ_file.py'
Oct 02 11:54:38 compute-1 sudo[123268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:38 compute-1 python3.9[123270]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:38 compute-1 sudo[123268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:39.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:39 compute-1 python3.9[123420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:54:40 compute-1 ceph-mon[80926]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:40 compute-1 sudo[123571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljtzmrxjgznyctqsytrzpspwbizcsfyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406080.3129523-929-221418565586101/AnsiballZ_command.py'
Oct 02 11:54:40 compute-1 sudo[123571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:40 compute-1 python3.9[123573]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:40 compute-1 ovs-vsctl[123574]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 02 11:54:40 compute-1 sudo[123571]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:41.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:41 compute-1 sudo[123724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdupvzbrrideobrkihwetidxqvtarbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406081.0136564-956-81944477542986/AnsiballZ_command.py'
Oct 02 11:54:41 compute-1 sudo[123724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:41 compute-1 python3.9[123726]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:41 compute-1 sudo[123724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:41 compute-1 sudo[123879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjoqahisqeuoiumxsusgpasbeidixxhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406081.6910155-980-23903714188845/AnsiballZ_command.py'
Oct 02 11:54:41 compute-1 sudo[123879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:42 compute-1 python3.9[123881]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:54:42 compute-1 ovs-vsctl[123882]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 02 11:54:42 compute-1 sudo[123879]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:42 compute-1 ceph-mon[80926]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:42 compute-1 python3.9[124032]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:54:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:42.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:43.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:43 compute-1 sudo[124184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvcmmtljvscsirnkohfqiecebzitpuwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.0940754-1031-25640603586354/AnsiballZ_file.py'
Oct 02 11:54:43 compute-1 sudo[124184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:43 compute-1 python3.9[124186]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:43 compute-1 sudo[124184]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:44 compute-1 sudo[124336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqltdclibgkkenafnxgpzbxxowficrwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.7553182-1055-274215965718247/AnsiballZ_stat.py'
Oct 02 11:54:44 compute-1 sudo[124336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:44 compute-1 ceph-mon[80926]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:44 compute-1 python3.9[124338]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:44 compute-1 sudo[124336]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:44 compute-1 sudo[124414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxeluusdfeeypymylouvesexlmlbzjpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406083.7553182-1055-274215965718247/AnsiballZ_file.py'
Oct 02 11:54:44 compute-1 sudo[124414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:44 compute-1 python3.9[124416]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:44 compute-1 sudo[124414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:44.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:45 compute-1 sudo[124566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wldqfekrwknavznbtxoqkbdaknlcqjhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406084.853939-1055-229367732269664/AnsiballZ_stat.py'
Oct 02 11:54:45 compute-1 sudo[124566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:45 compute-1 python3.9[124568]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:45 compute-1 sudo[124566]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:45 compute-1 sudo[124644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szxhxlixdepscunylqacuandbthhegsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406084.853939-1055-229367732269664/AnsiballZ_file.py'
Oct 02 11:54:45 compute-1 sudo[124644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:45 compute-1 python3.9[124646]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:45 compute-1 sudo[124644]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-1 ceph-mon[80926]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:46 compute-1 sudo[124796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdxjkzdubgzezxalexxljtoaodndkchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.0459788-1124-200597999790399/AnsiballZ_file.py'
Oct 02 11:54:46 compute-1 sudo[124796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:46 compute-1 python3.9[124798]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:46 compute-1 sudo[124796]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:54:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:54:46 compute-1 sudo[124948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojxbgkrrpjyhjfovrrukcyngdygkckmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.6857245-1148-232352391799188/AnsiballZ_stat.py'
Oct 02 11:54:46 compute-1 sudo[124948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:47 compute-1 python3.9[124950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:47 compute-1 sudo[124948]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:47.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:47 compute-1 sudo[125026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yryploeqvxtdonpolxdmjqwrishbczpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406086.6857245-1148-232352391799188/AnsiballZ_file.py'
Oct 02 11:54:47 compute-1 sudo[125026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:47 compute-1 python3.9[125028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:47 compute-1 sudo[125026]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:48 compute-1 sudo[125178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffenaotwknaqiyfmheovydhfbfkaorle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406087.8378527-1184-174300171045374/AnsiballZ_stat.py'
Oct 02 11:54:48 compute-1 sudo[125178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:48 compute-1 ceph-mon[80926]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:48 compute-1 python3.9[125180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:48 compute-1 sudo[125178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:48 compute-1 sudo[125256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjklbzleiwoemwaamcbzqgjstrwdjyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406087.8378527-1184-174300171045374/AnsiballZ_file.py'
Oct 02 11:54:48 compute-1 sudo[125256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:48 compute-1 python3.9[125258]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:48 compute-1 sudo[125256]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:54:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:54:49 compute-1 sudo[125408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mimhjbnfzvfaukhcbynwgijongazqgqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406089.0867116-1220-179872410573790/AnsiballZ_systemd.py'
Oct 02 11:54:49 compute-1 sudo[125408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:49 compute-1 python3.9[125410]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:54:49 compute-1 systemd[1]: Reloading.
Oct 02 11:54:49 compute-1 systemd-rc-local-generator[125439]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:54:49 compute-1 systemd-sysv-generator[125442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:54:49 compute-1 sudo[125408]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:50 compute-1 ceph-mon[80926]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:50 compute-1 sudo[125598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdcllsebugynhicssqfosgkcsdbkzkuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406090.1910105-1244-50590926849934/AnsiballZ_stat.py'
Oct 02 11:54:50 compute-1 sudo[125598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:50 compute-1 python3.9[125600]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:50 compute-1 sudo[125598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:50 compute-1 sudo[125676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytepcnwsiqftcckghmkqkzertanpjkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406090.1910105-1244-50590926849934/AnsiballZ_file.py'
Oct 02 11:54:50 compute-1 sudo[125676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:51 compute-1 python3.9[125678]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:51 compute-1 sudo[125676]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:51 compute-1 sudo[125828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtxpryvfkzfgvpteifodviavbounmpkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406091.4293914-1280-4486565606601/AnsiballZ_stat.py'
Oct 02 11:54:51 compute-1 sudo[125828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:51 compute-1 python3.9[125830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:51 compute-1 sudo[125828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:52 compute-1 sudo[125906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xythskbudjvgohbixcnkypztgfhtmwgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406091.4293914-1280-4486565606601/AnsiballZ_file.py'
Oct 02 11:54:52 compute-1 sudo[125906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:52 compute-1 ceph-mon[80926]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:52 compute-1 python3.9[125908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:52 compute-1 sudo[125906]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:54:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:52.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:54:52 compute-1 sudo[126058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcchzjxgauvyjmvjwxpjajzojjogebb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406092.6395345-1316-68017062527353/AnsiballZ_systemd.py'
Oct 02 11:54:52 compute-1 sudo[126058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:53 compute-1 python3.9[126060]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:54:53 compute-1 systemd[1]: Reloading.
Oct 02 11:54:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:53 compute-1 systemd-rc-local-generator[126082]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:54:53 compute-1 systemd-sysv-generator[126087]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:54:53 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 11:54:53 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:54:53 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:54:53 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 11:54:53 compute-1 sudo[126058]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:54 compute-1 ceph-mon[80926]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:54 compute-1 sudo[126252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgudtrwemgbpxfkvkxbovwbjeyneghgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406093.9422364-1346-33872707675849/AnsiballZ_file.py'
Oct 02 11:54:54 compute-1 sudo[126252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:54 compute-1 python3.9[126254]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:54 compute-1 sudo[126252]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:54.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:54 compute-1 sudo[126404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgjnikiiyopkikrscjztkmrgaseysvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406094.6659286-1370-65124947093444/AnsiballZ_stat.py'
Oct 02 11:54:54 compute-1 sudo[126404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:55 compute-1 python3.9[126406]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:55 compute-1 sudo[126404]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:55 compute-1 sudo[126527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luuqdzmzswunxijkvhjkzkitfqijsatc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406094.6659286-1370-65124947093444/AnsiballZ_copy.py'
Oct 02 11:54:55 compute-1 sudo[126527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:54:55 compute-1 python3.9[126529]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406094.6659286-1370-65124947093444/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:55 compute-1 sudo[126527]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:56 compute-1 ceph-mon[80926]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:56 compute-1 sudo[126679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcmdxrvgbwsvygiavnhekgihnhkatmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406096.3107314-1421-116003130777777/AnsiballZ_file.py'
Oct 02 11:54:56 compute-1 sudo[126679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:56 compute-1 python3.9[126681]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:54:56 compute-1 sudo[126679]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:56.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:57 compute-1 sudo[126831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkogtvkzugcnfbyrwydamokdctblwyht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406097.0353258-1445-243384232954415/AnsiballZ_stat.py'
Oct 02 11:54:57 compute-1 sudo[126831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:57 compute-1 python3.9[126833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:54:57 compute-1 sudo[126831]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:57 compute-1 sudo[126954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twvrstvdkpxuoqscyqwntbabavtdyyil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406097.0353258-1445-243384232954415/AnsiballZ_copy.py'
Oct 02 11:54:57 compute-1 sudo[126954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:57 compute-1 python3.9[126956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406097.0353258-1445-243384232954415/.source.json _original_basename=.7772p28_ follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:58 compute-1 sudo[126954]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:58 compute-1 ceph-mon[80926]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:54:58 compute-1 sudo[127106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayaipmrebnueitzgtyhthnmzjtlbblep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.3157306-1490-172481504768263/AnsiballZ_file.py'
Oct 02 11:54:58 compute-1 sudo[127106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:58 compute-1 python3.9[127108]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:54:58 compute-1 sudo[127106]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:54:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:54:58.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:54:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:54:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:54:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:54:59.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:54:59 compute-1 sudo[127258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btdjsltswgbycbqhmjxgdfovcazugjuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.992002-1514-104652298715598/AnsiballZ_stat.py'
Oct 02 11:54:59 compute-1 sudo[127258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:54:59 compute-1 sudo[127258]: pam_unix(sudo:session): session closed for user root
Oct 02 11:54:59 compute-1 sudo[127381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bllooxorkrglhyshofpdrnhmquicrmox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406098.992002-1514-104652298715598/AnsiballZ_copy.py'
Oct 02 11:54:59 compute-1 sudo[127381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:00 compute-1 sudo[127381]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:00 compute-1 ceph-mon[80926]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:00 compute-1 sudo[127533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbrrmmozfhwcxtbsvdvjmtjrnwcxqqai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406100.3660486-1565-35616792871720/AnsiballZ_container_config_data.py'
Oct 02 11:55:00 compute-1 sudo[127533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:01 compute-1 python3.9[127535]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 02 11:55:01 compute-1 sudo[127533]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:01 compute-1 sudo[127685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anlrrydrprcqlosizyeygzueiuosqxgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406101.3364217-1592-105771240895657/AnsiballZ_container_config_hash.py'
Oct 02 11:55:01 compute-1 sudo[127685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:02 compute-1 python3.9[127687]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:55:02 compute-1 sudo[127685]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-1 ceph-mon[80926]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:02 compute-1 sudo[127737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:02 compute-1 sudo[127737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-1 sudo[127737]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-1 sudo[127789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:02 compute-1 sudo[127789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-1 sudo[127789]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-1 sudo[127814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:02 compute-1 sudo[127814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-1 sudo[127814]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-1 sudo[127839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 11:55:02 compute-1 sudo[127839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:02 compute-1 sudo[127956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgpdlpjltmdzwiyfetsuyuyxsyuaewqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406102.3077247-1619-257614858053160/AnsiballZ_podman_container_info.py'
Oct 02 11:55:02 compute-1 sudo[127956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:02 compute-1 sudo[127839]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:55:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:55:03 compute-1 sudo[127959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:03 compute-1 sudo[127959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-1 sudo[127959]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-1 python3.9[127958]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:55:03 compute-1 sudo[127984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:55:03 compute-1 sudo[127984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-1 sudo[127984]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-1 sudo[128023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:03 compute-1 sudo[128023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-1 sudo[128023]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-1 sudo[127956]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:03 compute-1 sudo[128060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:55:03 compute-1 sudo[128060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:03 compute-1 sudo[128060]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:55:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:55:04 compute-1 sudo[128264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbsfnaezabzhjbrvxkabmsvlxqjxgmu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406103.992147-1658-164061553434447/AnsiballZ_edpm_container_manage.py'
Oct 02 11:55:04 compute-1 sudo[128264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:04 compute-1 python3[128266]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:55:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:05 compute-1 ceph-mon[80926]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:06 compute-1 ceph-mon[80926]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:08 compute-1 ceph-mon[80926]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:09.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:10 compute-1 podman[128280]: 2025-10-02 11:55:10.031595684 +0000 UTC m=+5.195113115 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:10 compute-1 podman[128394]: 2025-10-02 11:55:10.179064605 +0000 UTC m=+0.051927395 container create 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 11:55:10 compute-1 podman[128394]: 2025-10-02 11:55:10.152669549 +0000 UTC m=+0.025532359 image pull ae232aa720979600656d94fc26ba957f1cdf5bca825fe9b57990f60c6534611f quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:10 compute-1 python3[128266]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Oct 02 11:55:10 compute-1 sudo[128264]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:10 compute-1 sudo[128582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekyuxqinvmxvconddpwhahosdrfllrwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406110.4737103-1682-41153085699328/AnsiballZ_stat.py'
Oct 02 11:55:10 compute-1 sudo[128582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:10 compute-1 python3.9[128584]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:10 compute-1 sudo[128582]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-1 ceph-mon[80926]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:11 compute-1 sudo[128736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqtrefxlmfqxrzssxajqfpobggubkkwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406111.287165-1709-115042517512336/AnsiballZ_file.py'
Oct 02 11:55:11 compute-1 sudo[128736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:11 compute-1 python3.9[128738]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:11 compute-1 sudo[128736]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:11 compute-1 sudo[128812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrkspvwzatxnjatzpdazbjjebmftgkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406111.287165-1709-115042517512336/AnsiballZ_stat.py'
Oct 02 11:55:11 compute-1 sudo[128812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:12 compute-1 python3.9[128814]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:12 compute-1 sudo[128812]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:12 compute-1 ceph-mon[80926]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:12 compute-1 sudo[128963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzxnrbyhriovthihuulmgqnzhwkebopk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.2115078-1709-60905132632103/AnsiballZ_copy.py'
Oct 02 11:55:12 compute-1 sudo[128963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:12 compute-1 python3.9[128965]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406112.2115078-1709-60905132632103/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:12 compute-1 sudo[128963]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:13 compute-1 sudo[129039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuftpyhtgocguucyzxtzvghfhghschs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.2115078-1709-60905132632103/AnsiballZ_systemd.py'
Oct 02 11:55:13 compute-1 sudo[129039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:13 compute-1 python3.9[129041]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:55:13 compute-1 systemd[1]: Reloading.
Oct 02 11:55:13 compute-1 systemd-rc-local-generator[129068]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:13 compute-1 systemd-sysv-generator[129071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:13 compute-1 sudo[129076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:55:13 compute-1 sudo[129076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:13 compute-1 sudo[129076]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-1 sudo[129039]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-1 sudo[129101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:55:13 compute-1 sudo[129101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:55:13 compute-1 sudo[129101]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:13 compute-1 sudo[129199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjagpfbjpdjpbkiggefzuxvbmwwqclvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406112.2115078-1709-60905132632103/AnsiballZ_systemd.py'
Oct 02 11:55:13 compute-1 sudo[129199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:14 compute-1 python3.9[129201]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:14 compute-1 systemd[1]: Reloading.
Oct 02 11:55:14 compute-1 systemd-rc-local-generator[129228]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:14 compute-1 systemd-sysv-generator[129232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:55:14 compute-1 ceph-mon[80926]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:14 compute-1 systemd[1]: Starting ovn_controller container...
Oct 02 11:55:14 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:55:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120e368ddeee0800e02132928f2722fa4685cb9275ecf36047afb1aaf51f94a7/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 02 11:55:14 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409.
Oct 02 11:55:14 compute-1 podman[129241]: 2025-10-02 11:55:14.619844076 +0000 UTC m=+0.109546956 container init 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:55:14 compute-1 ovn_controller[129257]: + sudo -E kolla_set_configs
Oct 02 11:55:14 compute-1 podman[129241]: 2025-10-02 11:55:14.63977929 +0000 UTC m=+0.129482170 container start 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:55:14 compute-1 edpm-start-podman-container[129241]: ovn_controller
Oct 02 11:55:14 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 02 11:55:14 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 11:55:14 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 11:55:14 compute-1 edpm-start-podman-container[129240]: Creating additional drop-in dependency for "ovn_controller" (0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409)
Oct 02 11:55:14 compute-1 podman[129264]: 2025-10-02 11:55:14.718332916 +0000 UTC m=+0.067631036 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 11:55:14 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 02 11:55:14 compute-1 systemd[1]: 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409-12456c00632a49e6.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 11:55:14 compute-1 systemd[1]: 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409-12456c00632a49e6.service: Failed with result 'exit-code'.
Oct 02 11:55:14 compute-1 systemd[1]: Reloading.
Oct 02 11:55:14 compute-1 systemd-rc-local-generator[129330]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:14 compute-1 systemd-sysv-generator[129337]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:14.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:15 compute-1 systemd[1]: Started ovn_controller container.
Oct 02 11:55:15 compute-1 systemd[129300]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 11:55:15 compute-1 sudo[129199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:15 compute-1 systemd[129300]: Queued start job for default target Main User Target.
Oct 02 11:55:15 compute-1 systemd[129300]: Created slice User Application Slice.
Oct 02 11:55:15 compute-1 systemd[129300]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 11:55:15 compute-1 systemd[129300]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 11:55:15 compute-1 systemd[129300]: Reached target Paths.
Oct 02 11:55:15 compute-1 systemd[129300]: Reached target Timers.
Oct 02 11:55:15 compute-1 systemd[129300]: Starting D-Bus User Message Bus Socket...
Oct 02 11:55:15 compute-1 systemd[129300]: Starting Create User's Volatile Files and Directories...
Oct 02 11:55:15 compute-1 systemd[129300]: Listening on D-Bus User Message Bus Socket.
Oct 02 11:55:15 compute-1 systemd[129300]: Reached target Sockets.
Oct 02 11:55:15 compute-1 systemd[129300]: Finished Create User's Volatile Files and Directories.
Oct 02 11:55:15 compute-1 systemd[129300]: Reached target Basic System.
Oct 02 11:55:15 compute-1 systemd[129300]: Reached target Main User Target.
Oct 02 11:55:15 compute-1 systemd[129300]: Startup finished in 127ms.
Oct 02 11:55:15 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 02 11:55:15 compute-1 systemd[1]: Started Session c1 of User root.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:55:15 compute-1 ovn_controller[129257]: INFO:__main__:Validating config file
Oct 02 11:55:15 compute-1 ovn_controller[129257]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:55:15 compute-1 ovn_controller[129257]: INFO:__main__:Writing out command to execute
Oct 02 11:55:15 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: ++ cat /run_command
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + ARGS=
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + sudo kolla_copy_cacerts
Oct 02 11:55:15 compute-1 systemd[1]: Started Session c2 of User root.
Oct 02 11:55:15 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + [[ ! -n '' ]]
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + . kolla_extend_start
Oct 02 11:55:15 compute-1 ovn_controller[129257]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + umask 0022
Oct 02 11:55:15 compute-1 ovn_controller[129257]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4635] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4642] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4651] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4657] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4663] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 11:55:15 compute-1 kernel: br-int: entered promiscuous mode
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:15 compute-1 ovn_controller[129257]: 2025-10-02T11:55:15Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.4868] manager: (ovn-bfdd72-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 02 11:55:15 compute-1 systemd-udevd[129392]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:55:15 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Oct 02 11:55:15 compute-1 systemd-udevd[129393]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.5067] device (genev_sys_6081): carrier: link connected
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.5070] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct 02 11:55:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:15 compute-1 NetworkManager[44960]: <info>  [1759406115.7843] manager: (ovn-b95886-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 02 11:55:16 compute-1 NetworkManager[44960]: <info>  [1759406116.0905] manager: (ovn-17f118-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Oct 02 11:55:16 compute-1 sudo[129522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfcyrsmhbzhkepkhssmzznosgolzkodo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406116.0345905-1793-235385823841014/AnsiballZ_command.py'
Oct 02 11:55:16 compute-1 sudo[129522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:16 compute-1 python3.9[129524]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:16 compute-1 ovs-vsctl[129525]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 02 11:55:16 compute-1 sudo[129522]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:16 compute-1 ceph-mon[80926]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:55:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:55:17 compute-1 sudo[129675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inexfcuphogpoeemcwjgthsjibcqvjpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406116.829444-1817-253161865308034/AnsiballZ_command.py'
Oct 02 11:55:17 compute-1 sudo[129675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:17 compute-1 python3.9[129677]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:17 compute-1 ovs-vsctl[129679]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 02 11:55:17 compute-1 sudo[129675]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:18 compute-1 sudo[129830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfbyxbfwteiietzfflzowlsrukuqicaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406117.9268014-1859-174087439297505/AnsiballZ_command.py'
Oct 02 11:55:18 compute-1 sudo[129830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:18 compute-1 ceph-mon[80926]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:18 compute-1 python3.9[129832]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:55:18 compute-1 ovs-vsctl[129833]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 02 11:55:18 compute-1 sudo[129830]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:18 compute-1 sshd-session[118357]: Connection closed by 192.168.122.30 port 49766
Oct 02 11:55:18 compute-1 sshd-session[118354]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:55:18 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Oct 02 11:55:18 compute-1 systemd[1]: session-46.scope: Consumed 55.544s CPU time.
Oct 02 11:55:18 compute-1 systemd-logind[795]: Session 46 logged out. Waiting for processes to exit.
Oct 02 11:55:18 compute-1 systemd-logind[795]: Removed session 46.
Oct 02 11:55:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:20 compute-1 ceph-mon[80926]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:22 compute-1 ceph-mon[80926]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:24 compute-1 ceph-mon[80926]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:24 compute-1 sshd-session[129858]: Accepted publickey for zuul from 192.168.122.30 port 45640 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:55:24 compute-1 systemd-logind[795]: New session 48 of user zuul.
Oct 02 11:55:24 compute-1 systemd[1]: Started Session 48 of User zuul.
Oct 02 11:55:24 compute-1 sshd-session[129858]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:55:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:25 compute-1 python3.9[130011]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:55:25 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 02 11:55:25 compute-1 systemd[129300]: Activating special unit Exit the Session...
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped target Main User Target.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped target Basic System.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped target Paths.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped target Sockets.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped target Timers.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 11:55:25 compute-1 systemd[129300]: Closed D-Bus User Message Bus Socket.
Oct 02 11:55:25 compute-1 systemd[129300]: Stopped Create User's Volatile Files and Directories.
Oct 02 11:55:25 compute-1 systemd[129300]: Removed slice User Application Slice.
Oct 02 11:55:25 compute-1 systemd[129300]: Reached target Shutdown.
Oct 02 11:55:25 compute-1 systemd[129300]: Finished Exit the Session.
Oct 02 11:55:25 compute-1 systemd[129300]: Reached target Exit the Session.
Oct 02 11:55:25 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 11:55:25 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 02 11:55:25 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 11:55:25 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 11:55:25 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 11:55:25 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 11:55:25 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 11:55:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:26 compute-1 ceph-mon[80926]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:26 compute-1 sudo[130168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhquqtxhinlvooqytzhutuqcrudgahtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406125.9857643-68-226172872962245/AnsiballZ_file.py'
Oct 02 11:55:26 compute-1 sudo[130168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:26 compute-1 python3.9[130170]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:26 compute-1 sudo[130168]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:26.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:27 compute-1 sudo[130320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nejlwaxkwzdczmmfbabzpfyfgiobjyvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406126.7686055-68-170420465522552/AnsiballZ_file.py'
Oct 02 11:55:27 compute-1 sudo[130320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:27 compute-1 python3.9[130322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:27 compute-1 sudo[130320]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:27 compute-1 sudo[130472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxzymnmbcexzqkkiznfbhxraeiicqwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406127.366168-68-147681835372380/AnsiballZ_file.py'
Oct 02 11:55:27 compute-1 sudo[130472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:27 compute-1 python3.9[130474]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:27 compute-1 sudo[130472]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:28 compute-1 ceph-mon[80926]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:28 compute-1 sudo[130624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmyezcorkhqpwsadsfffyuakshbkfjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406128.0205019-68-262467354002266/AnsiballZ_file.py'
Oct 02 11:55:28 compute-1 sudo[130624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:28 compute-1 python3.9[130626]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:28 compute-1 sudo[130624]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:28 compute-1 sudo[130776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chxojxbhcacxvgnwwsnwbmxthevuwmlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406128.6248505-68-151146336853416/AnsiballZ_file.py'
Oct 02 11:55:28 compute-1 sudo[130776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:28.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:29 compute-1 python3.9[130778]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:29 compute-1 sudo[130776]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:29.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:29 compute-1 python3.9[130928]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:55:30 compute-1 ceph-mon[80926]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:30 compute-1 sudo[131078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfyyaepyszwpoisozbspbcgaqlroprai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406130.0474632-200-160790460212045/AnsiballZ_seboolean.py'
Oct 02 11:55:30 compute-1 sudo[131078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:30 compute-1 python3.9[131080]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 02 11:55:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:31 compute-1 sudo[131078]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:32 compute-1 python3.9[131230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:32 compute-1 ceph-mon[80926]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:32 compute-1 python3.9[131351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406131.5344315-224-246455043174162/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:33.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:34 compute-1 ceph-mon[80926]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:34 compute-1 python3.9[131502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:34.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:35.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:35 compute-1 python3.9[131623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406134.3913963-269-191724804268438/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:36 compute-1 sudo[131773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxcysutodgbejfkzjutopqewqwulcjjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406135.7387757-320-96348414407867/AnsiballZ_setup.py'
Oct 02 11:55:36 compute-1 sudo[131773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:36 compute-1 ceph-mon[80926]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:36 compute-1 python3.9[131775]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:55:36 compute-1 sudo[131773]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:36 compute-1 sudo[131857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbtfwygedtktkjyjzyddfqcctclobvni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406135.7387757-320-96348414407867/AnsiballZ_dnf.py'
Oct 02 11:55:36 compute-1 sudo[131857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:37 compute-1 python3.9[131859]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:55:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:37.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:38 compute-1 ceph-mon[80926]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:38 compute-1 sudo[131857]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:38.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:39.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:39 compute-1 sudo[132010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylpmzziowvojmktgoxhwpahuyzgfcqld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406138.691674-356-45852220398478/AnsiballZ_systemd.py'
Oct 02 11:55:39 compute-1 sudo[132010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:39 compute-1 python3.9[132012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:55:39 compute-1 sudo[132010]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:55:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5948 writes, 25K keys, 5948 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5948 writes, 931 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5948 writes, 25K keys, 5948 commit groups, 1.0 writes per commit group, ingest: 19.07 MB, 0.03 MB/s
                                           Interval WAL: 5948 writes, 931 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.56 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.67844e-05%) FilterBlock(3,0.33 KB,2.00272e-05%) IndexBlock(3,0.34 KB,2.09808e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 11:55:40 compute-1 ceph-mon[80926]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:40 compute-1 python3.9[132166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:40 compute-1 python3.9[132287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406139.9633777-380-5234580160264/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:40.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:41.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:41 compute-1 python3.9[132437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:41 compute-1 python3.9[132558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406141.0053596-380-90406480983723/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:42 compute-1 ceph-mon[80926]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:42.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:43 compute-1 python3.9[132708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:44 compute-1 ceph-mon[80926]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:44 compute-1 python3.9[132829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406143.3533409-512-188363010067761/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:44 compute-1 python3.9[132979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:44.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:45 compute-1 ovn_controller[129257]: 2025-10-02T11:55:45Z|00025|memory|INFO|16384 kB peak resident set size after 29.8 seconds
Oct 02 11:55:45 compute-1 ovn_controller[129257]: 2025-10-02T11:55:45Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 02 11:55:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:55:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:55:45 compute-1 podman[133074]: 2025-10-02 11:55:45.293129494 +0000 UTC m=+0.122303215 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 11:55:45 compute-1 python3.9[133115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406144.4383025-512-68891070573365/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:46 compute-1 python3.9[133278]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:55:46 compute-1 ceph-mon[80926]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:46 compute-1 sudo[133430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikysocitywpobfpguxydcgybzjnvrlga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406146.4685516-626-58970523647028/AnsiballZ_file.py'
Oct 02 11:55:46 compute-1 sudo[133430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:46 compute-1 python3.9[133432]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:46 compute-1 sudo[133430]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:47 compute-1 sudo[133582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjsxrhnvkxslzeemtbarpbaovkixgze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406147.142662-650-224885649030170/AnsiballZ_stat.py'
Oct 02 11:55:47 compute-1 sudo[133582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:47 compute-1 python3.9[133584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:47 compute-1 sudo[133582]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:47 compute-1 sudo[133660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadzyudapeqifftlcmwhokkoozknxmgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406147.142662-650-224885649030170/AnsiballZ_file.py'
Oct 02 11:55:47 compute-1 sudo[133660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:48 compute-1 python3.9[133662]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:48 compute-1 sudo[133660]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:48 compute-1 ceph-mon[80926]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:48 compute-1 sudo[133812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdgbrqwnqpubrjtazoyiuisxywmgtazh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406148.1665425-650-199862230354193/AnsiballZ_stat.py'
Oct 02 11:55:48 compute-1 sudo[133812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:48 compute-1 python3.9[133814]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:48 compute-1 sudo[133812]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:48 compute-1 sudo[133890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmvpuscslxnwynfqregxyefgpxjjyweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406148.1665425-650-199862230354193/AnsiballZ_file.py'
Oct 02 11:55:48 compute-1 sudo[133890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:49 compute-1 python3.9[133892]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:49 compute-1 sudo[133890]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:49 compute-1 sudo[134042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdteiyqotcuetbgkrnpddajabesyczww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406149.3387501-719-89746320995110/AnsiballZ_file.py'
Oct 02 11:55:49 compute-1 sudo[134042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:49 compute-1 python3.9[134044]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:49 compute-1 sudo[134042]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:50 compute-1 ceph-mon[80926]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:50 compute-1 sudo[134194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qulayrliocmzezcupqdxwseyiofvftwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406150.0842423-743-48306671321264/AnsiballZ_stat.py'
Oct 02 11:55:50 compute-1 sudo[134194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:50 compute-1 python3.9[134196]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:50 compute-1 sudo[134194]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:50 compute-1 sudo[134272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlaoztmtwkfyljiedsqvciczhxgnkerx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406150.0842423-743-48306671321264/AnsiballZ_file.py'
Oct 02 11:55:50 compute-1 sudo[134272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:50.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:51 compute-1 python3.9[134274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:51 compute-1 sudo[134272]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:51 compute-1 sudo[134424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkdsbwewdlldivqemyhsnoyvgrlpcmny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406151.2228465-779-270001811408594/AnsiballZ_stat.py'
Oct 02 11:55:51 compute-1 sudo[134424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:51 compute-1 python3.9[134426]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:51 compute-1 sudo[134424]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:51 compute-1 sudo[134502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmonckpseidyovvjlhupppfqpyubpmus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406151.2228465-779-270001811408594/AnsiballZ_file.py'
Oct 02 11:55:51 compute-1 sudo[134502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:52 compute-1 python3.9[134504]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:52 compute-1 sudo[134502]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:52 compute-1 ceph-mon[80926]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:52 compute-1 sudo[134654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyneuavaegkhjjdqksvpzzwdmcgkljwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406152.3565962-815-74327711700067/AnsiballZ_systemd.py'
Oct 02 11:55:52 compute-1 sudo[134654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:52 compute-1 python3.9[134656]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:52 compute-1 systemd[1]: Reloading.
Oct 02 11:55:53 compute-1 systemd-rc-local-generator[134684]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:53 compute-1 systemd-sysv-generator[134687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:53 compute-1 sudo[134654]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:53 compute-1 sudo[134843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcwxkyimculakxmqozehuihbpypjvqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406153.460613-839-222010713244498/AnsiballZ_stat.py'
Oct 02 11:55:53 compute-1 sudo[134843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:53 compute-1 python3.9[134845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:53 compute-1 sudo[134843]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:54 compute-1 sudo[134921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwlhqfyeyaowevnvuvuokcrgjqzlnqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406153.460613-839-222010713244498/AnsiballZ_file.py'
Oct 02 11:55:54 compute-1 sudo[134921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:54 compute-1 ceph-mon[80926]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:54 compute-1 python3.9[134923]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:54 compute-1 sudo[134921]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:54 compute-1 sudo[135073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skdubwtxziyqqfxazjwvdwpeqerbncpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406154.5953097-875-8883261918610/AnsiballZ_stat.py'
Oct 02 11:55:54 compute-1 sudo[135073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:54.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:55 compute-1 python3.9[135075]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:55 compute-1 sudo[135073]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:55 compute-1 sudo[135151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idhnfcbsbspwdvrqdyocrykscgqwewal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406154.5953097-875-8883261918610/AnsiballZ_file.py'
Oct 02 11:55:55 compute-1 sudo[135151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 11:55:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 11:55:55 compute-1 python3.9[135153]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:55:55 compute-1 sudo[135151]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:55:56 compute-1 sudo[135303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azmmlhcliiwjbqitawtteclsofzprsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406155.7777839-911-244234240244729/AnsiballZ_systemd.py'
Oct 02 11:55:56 compute-1 sudo[135303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:56 compute-1 ceph-mon[80926]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:56 compute-1 python3.9[135305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:55:56 compute-1 systemd[1]: Reloading.
Oct 02 11:55:56 compute-1 systemd-rc-local-generator[135332]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:55:56 compute-1 systemd-sysv-generator[135336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:55:56 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 11:55:56 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 11:55:56 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 11:55:56 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 11:55:56 compute-1 sudo[135303]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:57 compute-1 sudo[135496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwptnifzrwkpusacfuxdytdbxaclubkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.1033406-941-145792242351529/AnsiballZ_file.py'
Oct 02 11:55:57 compute-1 sudo[135496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:57 compute-1 python3.9[135498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:57 compute-1 sudo[135496]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:58 compute-1 sudo[135648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkeytuakwtyoswacoaqgwbrsibxpqgxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.8083937-965-116368178836152/AnsiballZ_stat.py'
Oct 02 11:55:58 compute-1 sudo[135648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:58 compute-1 ceph-mon[80926]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:55:58 compute-1 python3.9[135650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:55:58 compute-1 sudo[135648]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:58 compute-1 sudo[135771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjyanegsbdkqhmvufmztxrrvdxfxofwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406157.8083937-965-116368178836152/AnsiballZ_copy.py'
Oct 02 11:55:58 compute-1 sudo[135771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:58 compute-1 python3.9[135773]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406157.8083937-965-116368178836152/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:58 compute-1 sudo[135771]: pam_unix(sudo:session): session closed for user root
Oct 02 11:55:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:55:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:55:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:55:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:55:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:55:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:55:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:55:59 compute-1 sudo[135923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqitxfpbwktukahpltkxwhygerspjvzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.2937648-1016-10688609776798/AnsiballZ_file.py'
Oct 02 11:55:59 compute-1 sudo[135923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:55:59 compute-1 python3.9[135925]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:55:59 compute-1 sudo[135923]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:00 compute-1 sudo[136075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbmzbrknkkdsqpigwytdluvatkktzvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.9773817-1040-87132063875980/AnsiballZ_stat.py'
Oct 02 11:56:00 compute-1 sudo[136075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:00 compute-1 ceph-mon[80926]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:00 compute-1 python3.9[136077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:56:00 compute-1 sudo[136075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:00 compute-1 sudo[136198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruxvefltyhnokfjbytshgqfggdnzboan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406159.9773817-1040-87132063875980/AnsiballZ_copy.py'
Oct 02 11:56:00 compute-1 sudo[136198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:00 compute-1 python3.9[136200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406159.9773817-1040-87132063875980/.source.json _original_basename=.2tuepay3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:01 compute-1 sudo[136198]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:01.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:01 compute-1 sudo[136350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmnrkgxspjgewadesrpryxguxpuezlel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406161.2495277-1085-65812648874380/AnsiballZ_file.py'
Oct 02 11:56:01 compute-1 sudo[136350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:01 compute-1 python3.9[136352]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:01 compute-1 sudo[136350]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:02 compute-1 ceph-mon[80926]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:02 compute-1 sudo[136502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxmjajxssllvcpigyodgjeharlhtzbvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406162.0136752-1109-97981789541131/AnsiballZ_stat.py'
Oct 02 11:56:02 compute-1 sudo[136502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:02 compute-1 sudo[136502]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:02 compute-1 sudo[136625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqnutkhwwqrucwptiaiqmfswibdoefb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406162.0136752-1109-97981789541131/AnsiballZ_copy.py'
Oct 02 11:56:02 compute-1 sudo[136625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:02 compute-1 sudo[136625]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:03.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:03 compute-1 sudo[136777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoczhgbyzbaqgnlynytrhmcwrxfnksow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406163.4871619-1160-63209830385289/AnsiballZ_container_config_data.py'
Oct 02 11:56:03 compute-1 sudo[136777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:04 compute-1 python3.9[136779]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 02 11:56:04 compute-1 sudo[136777]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:04 compute-1 ceph-mon[80926]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:04 compute-1 sudo[136929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpevlprdmueztnzevkvvbfpfavhmphfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406164.506098-1187-268275300767416/AnsiballZ_container_config_hash.py'
Oct 02 11:56:04 compute-1 sudo[136929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:04.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:05 compute-1 python3.9[136931]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 11:56:05 compute-1 sudo[136929]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:05 compute-1 sudo[137081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oivewzjmriiycavvxbdqbswzmedpxnwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406165.4661007-1214-1571738006150/AnsiballZ_podman_container_info.py'
Oct 02 11:56:05 compute-1 sudo[137081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:06 compute-1 python3.9[137083]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 11:56:06 compute-1 ceph-mon[80926]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:06 compute-1 sudo[137081]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:07 compute-1 sudo[137260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobgtjzlzxvdfhbnittmlnsjpsjawjde ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406167.0495815-1253-123183954538380/AnsiballZ_edpm_container_manage.py'
Oct 02 11:56:07 compute-1 sudo[137260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:07 compute-1 python3[137262]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 11:56:08 compute-1 ceph-mon[80926]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 11:56:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2135 writes, 12K keys, 2135 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2134 writes, 2134 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2135 writes, 12K keys, 2135 commit groups, 1.0 writes per commit group, ingest: 23.51 MB, 0.04 MB/s
                                           Interval WAL: 2134 writes, 2134 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    100.6      0.11              0.03         4    0.029       0      0       0.0       0.0
                                             L6      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    163.9    139.6      0.17              0.06         3    0.058     11K   1268       0.0       0.0
                                            Sum      1/0    7.35 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     98.6    124.1      0.29              0.08         7    0.041     11K   1268       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     99.3    125.0      0.29              0.08         6    0.048     11K   1268       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    163.9    139.6      0.17              0.06         3    0.058     11K   1268       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    102.4      0.11              0.03         3    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.011, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 308.00 MB usage: 984.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(49,851.05 KB,0.269838%) FilterBlock(7,41.42 KB,0.0131335%) IndexBlock(7,91.70 KB,0.0290759%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 11:56:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:10.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:11.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:12 compute-1 ceph-mon[80926]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:56:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:12.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:56:13 compute-1 ceph-mon[80926]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:13.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:14 compute-1 sudo[137338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:14 compute-1 sudo[137338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-1 sudo[137338]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-1 sudo[137363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:56:14 compute-1 sudo[137363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-1 sudo[137363]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-1 sudo[137388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:14 compute-1 sudo[137388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-1 sudo[137388]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:14 compute-1 sudo[137413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:56:14 compute-1 sudo[137413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:14 compute-1 ceph-mon[80926]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:14.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:16 compute-1 ceph-mon[80926]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 02 11:56:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.479861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176479915, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1757, "num_deletes": 251, "total_data_size": 4440893, "memory_usage": 4482128, "flush_reason": "Manual Compaction"}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176501268, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2913291, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10595, "largest_seqno": 12347, "table_properties": {"data_size": 2905944, "index_size": 4418, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14364, "raw_average_key_size": 19, "raw_value_size": 2891373, "raw_average_value_size": 3901, "num_data_blocks": 199, "num_entries": 741, "num_filter_entries": 741, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406006, "oldest_key_time": 1759406006, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 21466 microseconds, and 16132 cpu microseconds.
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.501334) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2913291 bytes OK
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.501357) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502725) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502739) EVENT_LOG_v1 {"time_micros": 1759406176502734, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.502760) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4432939, prev total WAL file size 4432939, number of live WAL files 2.
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.503716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2845KB)], [21(7527KB)]
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176503748, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10621044, "oldest_snapshot_seqno": -1}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 3970 keys, 8438001 bytes, temperature: kUnknown
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176729595, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8438001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8408564, "index_size": 18383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 96251, "raw_average_key_size": 24, "raw_value_size": 8333934, "raw_average_value_size": 2099, "num_data_blocks": 794, "num_entries": 3970, "num_filter_entries": 3970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.729835) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8438001 bytes
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.775788) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 47.0 rd, 37.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 7.4 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(6.5) write-amplify(2.9) OK, records in: 4487, records dropped: 517 output_compression: NoCompression
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.775824) EVENT_LOG_v1 {"time_micros": 1759406176775810, "job": 10, "event": "compaction_finished", "compaction_time_micros": 225919, "compaction_time_cpu_micros": 38747, "output_level": 6, "num_output_files": 1, "total_output_size": 8438001, "num_input_records": 4487, "num_output_records": 3970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176776319, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406176777376, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.503638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:56:16.777421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:56:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:17 compute-1 sudo[137413]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:17.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:17 compute-1 podman[137273]: 2025-10-02 11:56:17.344729501 +0000 UTC m=+9.516128662 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:17 compute-1 podman[137465]: 2025-10-02 11:56:17.377117118 +0000 UTC m=+1.763739289 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:56:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 11:56:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 02 11:56:17 compute-1 podman[137554]: 2025-10-02 11:56:17.490599853 +0000 UTC m=+0.045908833 container create 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:56:17 compute-1 podman[137554]: 2025-10-02 11:56:17.466024481 +0000 UTC m=+0.021333481 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:17 compute-1 python3[137262]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 11:56:17 compute-1 sudo[137260]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:18 compute-1 ceph-mon[80926]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:56:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:19 compute-1 sudo[137742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvcqvoyutkzmiihdyojflydsnhmjfguv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406179.2832036-1277-30722869463487/AnsiballZ_stat.py'
Oct 02 11:56:19 compute-1 sudo[137742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:56:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:56:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:56:19 compute-1 python3.9[137744]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:56:19 compute-1 sudo[137742]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-1 sudo[137896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhnygpqrhepupysntldsaonlcqxvfkis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406180.2185256-1304-6270125802463/AnsiballZ_file.py'
Oct 02 11:56:20 compute-1 sudo[137896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:20 compute-1 python3.9[137898]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:20 compute-1 sudo[137896]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:20 compute-1 ceph-mon[80926]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:20 compute-1 sudo[137972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmvervcfsnrcejvonhtwkcdqelcmsemc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406180.2185256-1304-6270125802463/AnsiballZ_stat.py'
Oct 02 11:56:20 compute-1 sudo[137972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:21 compute-1 python3.9[137974]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 11:56:21 compute-1 sudo[137972]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:21 compute-1 sudo[138123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omtqhytimfzwsxozgobgzcyfitxkseoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.1276753-1304-34792720281342/AnsiballZ_copy.py'
Oct 02 11:56:21 compute-1 sudo[138123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:21 compute-1 python3.9[138125]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406181.1276753-1304-34792720281342/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:21 compute-1 sudo[138123]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:21 compute-1 sudo[138199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfubpkgcxmynkmfudwmtxncrevaqckn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.1276753-1304-34792720281342/AnsiballZ_systemd.py'
Oct 02 11:56:21 compute-1 sudo[138199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:22 compute-1 ceph-mon[80926]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:22 compute-1 python3.9[138201]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:56:22 compute-1 systemd[1]: Reloading.
Oct 02 11:56:22 compute-1 systemd-rc-local-generator[138226]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:22 compute-1 systemd-sysv-generator[138231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:22 compute-1 sudo[138199]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:22 compute-1 sudo[138310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-humosigqwbasjkvgpqggjtjoqsjtfvjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406181.1276753-1304-34792720281342/AnsiballZ_systemd.py'
Oct 02 11:56:22 compute-1 sudo[138310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:23 compute-1 python3.9[138312]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:23 compute-1 systemd[1]: Reloading.
Oct 02 11:56:23 compute-1 systemd-sysv-generator[138347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:23 compute-1 systemd-rc-local-generator[138343]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:23.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:23 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Oct 02 11:56:23 compute-1 systemd[1]: Started libcrun container.
Oct 02 11:56:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7524752fc5ca5448b79e867a921e886f022ba26ab05152b27e2c373874c4c6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7524752fc5ca5448b79e867a921e886f022ba26ab05152b27e2c373874c4c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 11:56:23 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd.
Oct 02 11:56:23 compute-1 podman[138354]: 2025-10-02 11:56:23.868207465 +0000 UTC m=+0.278720426 container init 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + sudo -E kolla_set_configs
Oct 02 11:56:23 compute-1 podman[138354]: 2025-10-02 11:56:23.893993186 +0000 UTC m=+0.304506137 container start 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:56:23 compute-1 edpm-start-podman-container[138354]: ovn_metadata_agent
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Validating config file
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Copying service configuration files
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 02 11:56:23 compute-1 edpm-start-podman-container[138353]: Creating additional drop-in dependency for "ovn_metadata_agent" (0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd)
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Writing out command to execute
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: ++ cat /run_command
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + CMD=neutron-ovn-metadata-agent
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + ARGS=
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + sudo kolla_copy_cacerts
Oct 02 11:56:23 compute-1 systemd[1]: Reloading.
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + [[ ! -n '' ]]
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + . kolla_extend_start
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: Running command: 'neutron-ovn-metadata-agent'
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + umask 0022
Oct 02 11:56:23 compute-1 ovn_metadata_agent[138369]: + exec neutron-ovn-metadata-agent
Oct 02 11:56:23 compute-1 podman[138376]: 2025-10-02 11:56:23.990031753 +0000 UTC m=+0.085510117 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 11:56:24 compute-1 systemd-sysv-generator[138445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:24 compute-1 systemd-rc-local-generator[138442]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:24 compute-1 systemd[1]: Started ovn_metadata_agent container.
Oct 02 11:56:24 compute-1 sudo[138310]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:24 compute-1 ceph-mon[80926]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:25.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:25 compute-1 sshd-session[129861]: Connection closed by 192.168.122.30 port 45640
Oct 02 11:56:25 compute-1 sshd-session[129858]: pam_unix(sshd:session): session closed for user zuul
Oct 02 11:56:25 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Oct 02 11:56:25 compute-1 systemd[1]: session-48.scope: Consumed 53.170s CPU time.
Oct 02 11:56:25 compute-1 systemd-logind[795]: Session 48 logged out. Waiting for processes to exit.
Oct 02 11:56:25 compute-1 systemd-logind[795]: Removed session 48.
Oct 02 11:56:25 compute-1 sudo[138478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:56:25 compute-1 sudo[138478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:25 compute-1 sudo[138478]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:25 compute-1 sudo[138503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:56:25 compute-1 sudo[138503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:56:25 compute-1 sudo[138503]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 INFO neutron.common.config [-] Logging enabled!
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.854 138374 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.855 138374 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.856 138374 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.857 138374 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.858 138374 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.859 138374 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.860 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.861 138374 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.862 138374 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.863 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.864 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.865 138374 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.866 138374 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.867 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.868 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.869 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.870 138374 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.871 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.872 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.873 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.874 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.875 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.876 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.877 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.878 138374 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.879 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.880 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.881 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.882 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.883 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.884 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.885 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.886 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.887 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.888 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.889 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.890 138374 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.891 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.892 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.893 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.894 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.895 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.896 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.897 138374 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.897 138374 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.906 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.907 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.907 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 02 11:56:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.920 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name db222192-8da1-4f7c-972d-dc680c3e6630 (UUID: db222192-8da1-4f7c-972d-dc680c3e6630) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.941 138374 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.941 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.942 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.942 138374 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.944 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.950 138374 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.955 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'db222192-8da1-4f7c-972d-dc680c3e6630'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], external_ids={}, name=db222192-8da1-4f7c-972d-dc680c3e6630, nb_cfg_timestamp=1759406123485, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.956 138374 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f23cc43bf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.957 138374 INFO oslo_service.service [-] Starting 1 workers
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.961 138374 DEBUG oslo_service.service [-] Started child 138528 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.965 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpm5m4yfxz/privsep.sock']
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.965 138528 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167138'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.985 138528 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.988 138528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 02 11:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.994 138528 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:25.999 138528 INFO eventlet.wsgi.server [-] (138528) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 02 11:56:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:56:26 compute-1 ceph-mon[80926]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:26 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.668 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.668 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm5m4yfxz/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.516 138533 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.521 138533 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.523 138533 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.523 138533 INFO oslo.privsep.daemon [-] privsep daemon running as pid 138533
Oct 02 11:56:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:26.672 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[72ef5f16-2609-49f3-a6a9-4020a4bcf2b4]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:56:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.216 138533 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:56:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.791 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f9d557-4034-416b-aaf4-c9eef53b5435]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, column=external_ids, values=({'neutron:ovn-metadata-id': 'fbafd4ac-feb6-5c3f-8139-621c52ba192d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.801 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.810 138374 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.811 138374 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.812 138374 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.813 138374 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.814 138374 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.815 138374 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.816 138374 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.817 138374 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.818 138374 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.819 138374 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.820 138374 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.821 138374 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.822 138374 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.823 138374 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.824 138374 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.825 138374 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.826 138374 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.827 138374 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.828 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.829 138374 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.830 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.831 138374 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.832 138374 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.833 138374 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.834 138374 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.835 138374 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.836 138374 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.837 138374 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.838 138374 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.839 138374 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.840 138374 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.841 138374 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.842 138374 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.843 138374 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.844 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.845 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.846 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.847 138374 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 11:56:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:56:27.848 138374 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 11:56:28 compute-1 ceph-mon[80926]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:29.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:30 compute-1 ceph-mon[80926]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:30 compute-1 sshd-session[138539]: Accepted publickey for zuul from 192.168.122.30 port 40710 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 11:56:30 compute-1 systemd-logind[795]: New session 49 of user zuul.
Oct 02 11:56:30 compute-1 systemd[1]: Started Session 49 of User zuul.
Oct 02 11:56:30 compute-1 sshd-session[138539]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 11:56:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:30.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:31 compute-1 python3.9[138692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 11:56:32 compute-1 ceph-mon[80926]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:32 compute-1 sudo[138846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzyhhhlucofoarywepvfmsoiveewrkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406192.3877227-68-53128454583507/AnsiballZ_command.py'
Oct 02 11:56:32 compute-1 sudo[138846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:33 compute-1 python3.9[138848]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:56:33 compute-1 sudo[138846]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:33.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:34 compute-1 sudo[139011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugxkuywmbwudjgsapijaqjfmounipxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406193.4753296-101-106629795069163/AnsiballZ_systemd_service.py'
Oct 02 11:56:34 compute-1 sudo[139011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:34 compute-1 ceph-mon[80926]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:34 compute-1 python3.9[139013]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:56:34 compute-1 systemd[1]: Reloading.
Oct 02 11:56:34 compute-1 systemd-rc-local-generator[139039]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:56:34 compute-1 systemd-sysv-generator[139044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:56:34 compute-1 sudo[139011]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:35.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:35 compute-1 python3.9[139197]: ansible-ansible.builtin.service_facts Invoked
Oct 02 11:56:35 compute-1 network[139214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 11:56:35 compute-1 network[139215]: 'network-scripts' will be removed from distribution in near future.
Oct 02 11:56:35 compute-1 network[139216]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 11:56:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:36 compute-1 ceph-mon[80926]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:37.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:38 compute-1 ceph-mon[80926]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:40 compute-1 ceph-mon[80926]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:41.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:41 compute-1 sudo[139479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricuxetpxubapzxfrzealceiyosgngpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406201.2916965-159-83875777034462/AnsiballZ_systemd_service.py'
Oct 02 11:56:41 compute-1 sudo[139479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:42 compute-1 python3.9[139481]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:42 compute-1 sudo[139479]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:42 compute-1 ceph-mon[80926]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:42 compute-1 sudo[139632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fklxfqiypjuklvnppqgoqyvbwcztvrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406202.2416153-159-249292806411316/AnsiballZ_systemd_service.py'
Oct 02 11:56:42 compute-1 sudo[139632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:42 compute-1 python3.9[139634]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:42 compute-1 sudo[139632]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:42.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:43 compute-1 sudo[139785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdsffdcpcxwatvzcmkeaasechkikrrcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406202.9921193-159-185763655523734/AnsiballZ_systemd_service.py'
Oct 02 11:56:43 compute-1 sudo[139785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:43.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:43 compute-1 python3.9[139787]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:43 compute-1 sudo[139785]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:44 compute-1 sudo[139938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhbqpjhdqmjafuvwsvdzcyahmjpulqaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406203.8116996-159-64594670540143/AnsiballZ_systemd_service.py'
Oct 02 11:56:44 compute-1 sudo[139938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:44 compute-1 ceph-mon[80926]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:44 compute-1 python3.9[139940]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:44 compute-1 sudo[139938]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:44 compute-1 sudo[140091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyijwrdsiaipfddbjpnzbquacbbdyifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406204.6325703-159-151044599578033/AnsiballZ_systemd_service.py'
Oct 02 11:56:44 compute-1 sudo[140091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:45.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:45 compute-1 python3.9[140093]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:45 compute-1 sudo[140091]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:45.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:45 compute-1 sudo[140244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcoiaicjseigniqnaugbuioklijisbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406205.4516213-159-258906398116009/AnsiballZ_systemd_service.py'
Oct 02 11:56:45 compute-1 sudo[140244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:46 compute-1 ceph-mon[80926]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:46 compute-1 python3.9[140246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:46 compute-1 sudo[140244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:46 compute-1 sudo[140397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrutnbtitorqcnmqeigyuqzfhnvskwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406206.4259417-159-110709439852088/AnsiballZ_systemd_service.py'
Oct 02 11:56:46 compute-1 sudo[140397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:47.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:47 compute-1 python3.9[140399]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 11:56:47 compute-1 sudo[140397]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:47.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:47 compute-1 podman[140425]: 2025-10-02 11:56:47.854146335 +0000 UTC m=+0.108445529 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 11:56:48 compute-1 sudo[140576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssytvzdhpdgpjwnfystgzmeoaamuxvnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406207.9506137-314-83612548726637/AnsiballZ_file.py'
Oct 02 11:56:48 compute-1 sudo[140576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:48 compute-1 ceph-mon[80926]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:48 compute-1 python3.9[140578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:48 compute-1 sudo[140576]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:49.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:49 compute-1 sudo[140728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzlndiklhmiwdffqucfsnprsvbupvsfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406208.8191156-314-41434657968619/AnsiballZ_file.py'
Oct 02 11:56:49 compute-1 sudo[140728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:49 compute-1 python3.9[140730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:49 compute-1 sudo[140728]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:49.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:49 compute-1 sudo[140881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncjfvjgfujhwfbhmflubyxqyirisprdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406209.4324872-314-209598764525812/AnsiballZ_file.py'
Oct 02 11:56:49 compute-1 sudo[140881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:50 compute-1 python3.9[140883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:50 compute-1 sudo[140881]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:50 compute-1 ceph-mon[80926]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:50 compute-1 sudo[141033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvajnkkfvbbrvcsxxedykorjvmpcuun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406210.4355023-314-231547569075016/AnsiballZ_file.py'
Oct 02 11:56:50 compute-1 sudo[141033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:50 compute-1 python3.9[141035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:50 compute-1 sudo[141033]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:51.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:51 compute-1 sudo[141185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyvttntzqekxqzatpbxcciccdzlouctl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406211.1005332-314-57690389802648/AnsiballZ_file.py'
Oct 02 11:56:51 compute-1 sudo[141185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:51 compute-1 python3.9[141187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:51 compute-1 sudo[141185]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:51 compute-1 sudo[141337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyiydvjzfxhuahlavlshsdjglezdiuoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406211.669817-314-27781970532981/AnsiballZ_file.py'
Oct 02 11:56:51 compute-1 sudo[141337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:52 compute-1 python3.9[141339]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:52 compute-1 sudo[141337]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:52 compute-1 ceph-mon[80926]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:52 compute-1 sudo[141489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgstkeuzwsfoaduwowqohdsfphmpazxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406212.3173926-314-222536330728333/AnsiballZ_file.py'
Oct 02 11:56:52 compute-1 sudo[141489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:52 compute-1 python3.9[141491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:52 compute-1 sudo[141489]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:53 compute-1 sudo[141641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coaqloiltyjvzqwulfngszxpsfemrlvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406213.4661424-464-134867435804400/AnsiballZ_file.py'
Oct 02 11:56:53 compute-1 sudo[141641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:53 compute-1 python3.9[141643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:53 compute-1 sudo[141641]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:54 compute-1 ceph-mon[80926]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:54 compute-1 sudo[141808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfrqnasejmgkrctdmknwkekardiqzvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406214.0425253-464-16511367747317/AnsiballZ_file.py'
Oct 02 11:56:54 compute-1 sudo[141808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:54 compute-1 podman[141767]: 2025-10-02 11:56:54.31746278 +0000 UTC m=+0.050811097 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 11:56:54 compute-1 python3.9[141814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:54 compute-1 sudo[141808]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:54 compute-1 sudo[141964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfwjzyjdtsvprizyfnlxbhecfoscfluc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406214.6522906-464-273377522278142/AnsiballZ_file.py'
Oct 02 11:56:54 compute-1 sudo[141964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:55.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:55 compute-1 python3.9[141966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:55 compute-1 sudo[141964]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:56:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:55.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:56:55 compute-1 sudo[142116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqxtlqpyajokqlhduyqlljwyeguuxwkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406215.218175-464-98828492982591/AnsiballZ_file.py'
Oct 02 11:56:55 compute-1 sudo[142116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:55 compute-1 python3.9[142118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:55 compute-1 sudo[142116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:56:56 compute-1 sudo[142268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnvxzyobsxakyukrkwskuztdwpcoxeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406215.8451073-464-239206572266193/AnsiballZ_file.py'
Oct 02 11:56:56 compute-1 sudo[142268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:56 compute-1 ceph-mon[80926]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:56 compute-1 python3.9[142270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:56 compute-1 sudo[142268]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:56 compute-1 sudo[142420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esprovjimistfzofxzulftpjhaxzqxie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406216.5235827-464-249374872535279/AnsiballZ_file.py'
Oct 02 11:56:56 compute-1 sudo[142420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:57 compute-1 python3.9[142422]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:57 compute-1 sudo[142420]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:57.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:57 compute-1 sudo[142572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebkhzmogtthtznvmsrezdhzbxzbbpda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406217.1596441-464-200066941206393/AnsiballZ_file.py'
Oct 02 11:56:57 compute-1 sudo[142572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:57 compute-1 python3.9[142574]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:56:57 compute-1 sudo[142572]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:58 compute-1 ceph-mon[80926]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:56:58 compute-1 sudo[142724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyyedndelvniirudgitkxhmazhxluduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406218.1653361-617-57847415143965/AnsiballZ_command.py'
Oct 02 11:56:58 compute-1 sudo[142724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:56:58 compute-1 python3.9[142726]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:56:58 compute-1 sudo[142724]: pam_unix(sudo:session): session closed for user root
Oct 02 11:56:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:56:59.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:56:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:56:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:56:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:56:59 compute-1 python3.9[142878]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 11:57:00 compute-1 ceph-mon[80926]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:00 compute-1 sudo[143028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ammvofetzlwlmjkkzkmqojiptmiqczqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406219.9486067-671-267813474877538/AnsiballZ_systemd_service.py'
Oct 02 11:57:00 compute-1 sudo[143028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:00 compute-1 python3.9[143030]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 11:57:00 compute-1 systemd[1]: Reloading.
Oct 02 11:57:00 compute-1 systemd-sysv-generator[143061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:57:00 compute-1 systemd-rc-local-generator[143058]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:57:00 compute-1 sudo[143028]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:01.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:01 compute-1 sudo[143216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udxswnevpxydabkaceatmlpvvqffjvcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406221.1374261-695-164407946786479/AnsiballZ_command.py'
Oct 02 11:57:01 compute-1 sudo[143216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:01.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:01 compute-1 python3.9[143218]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:01 compute-1 sudo[143216]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:01 compute-1 sudo[143369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncrfpcfmxfvbzgjgnaagypumfspklpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406221.7297366-695-160709282467384/AnsiballZ_command.py'
Oct 02 11:57:01 compute-1 sudo[143369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:02 compute-1 python3.9[143371]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:02 compute-1 sudo[143369]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:02 compute-1 ceph-mon[80926]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:02 compute-1 sudo[143522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xskrpmrbjyjimdvhzqdwjljjsccsgsqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406222.3499265-695-158555177039672/AnsiballZ_command.py'
Oct 02 11:57:02 compute-1 sudo[143522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:02 compute-1 python3.9[143524]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:02 compute-1 sudo[143522]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:03.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:03 compute-1 sudo[143675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taqodwlwjyjtivtieatrujwvvnimoyln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406222.9683464-695-22553851493548/AnsiballZ_command.py'
Oct 02 11:57:03 compute-1 sudo[143675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:03.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:03 compute-1 python3.9[143677]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:03 compute-1 sudo[143675]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:03 compute-1 sudo[143828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhopwsjwazmyvstqwowwqoeugntofaqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406223.5666435-695-281278794430847/AnsiballZ_command.py'
Oct 02 11:57:03 compute-1 sudo[143828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:04 compute-1 python3.9[143830]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:04 compute-1 sudo[143828]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:04 compute-1 ceph-mon[80926]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:04 compute-1 sudo[143981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvnovpuwirzlnstcbemkkxvblpkzpvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406224.1765492-695-199884113286357/AnsiballZ_command.py'
Oct 02 11:57:04 compute-1 sudo[143981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:04 compute-1 python3.9[143983]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:04 compute-1 sudo[143981]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:05 compute-1 sudo[144134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juvcnpizibqmldtulxzinjxbqrzpyvit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406224.7875888-695-249757924480619/AnsiballZ_command.py'
Oct 02 11:57:05 compute-1 sudo[144134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:05.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:05 compute-1 python3.9[144136]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 11:57:05 compute-1 sudo[144134]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:05.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:06 compute-1 ceph-mon[80926]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:06 compute-1 sudo[144287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcgvqiqsgwrdffugkwcsaqbtvdqkjaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406226.3902392-857-97241080120457/AnsiballZ_getent.py'
Oct 02 11:57:06 compute-1 sudo[144287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:06 compute-1 python3.9[144289]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 02 11:57:07 compute-1 sudo[144287]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:07.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:07.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:07 compute-1 sudo[144440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpmsxupyhhpgspadupqwrvcxcfhwvygf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406227.2386248-881-268888661120762/AnsiballZ_group.py'
Oct 02 11:57:07 compute-1 sudo[144440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:07 compute-1 python3.9[144442]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 11:57:07 compute-1 groupadd[144443]: group added to /etc/group: name=libvirt, GID=42473
Oct 02 11:57:08 compute-1 groupadd[144443]: group added to /etc/gshadow: name=libvirt
Oct 02 11:57:08 compute-1 groupadd[144443]: new group: name=libvirt, GID=42473
Oct 02 11:57:08 compute-1 sudo[144440]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:08 compute-1 ceph-mon[80926]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:08 compute-1 sudo[144598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtdhrmtqkdbidifrqvrouuuckvnyrnry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406228.3109527-905-280592797758486/AnsiballZ_user.py'
Oct 02 11:57:08 compute-1 sudo[144598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:09 compute-1 python3.9[144600]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 11:57:09 compute-1 useradd[144602]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 02 11:57:09 compute-1 sudo[144598]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:09.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:10 compute-1 sudo[144758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndbggqmtllmfojgkvzdlqvpwynrtlsbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406229.8212667-938-198167519494111/AnsiballZ_setup.py'
Oct 02 11:57:10 compute-1 sudo[144758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:10 compute-1 ceph-mon[80926]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:10 compute-1 python3.9[144760]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 11:57:10 compute-1 sudo[144758]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:11 compute-1 sudo[144842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhtbnvjlccipwsradexvsepfhtvxvpnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406229.8212667-938-198167519494111/AnsiballZ_dnf.py'
Oct 02 11:57:11 compute-1 sudo[144842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:57:11 compute-1 python3.9[144844]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 11:57:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:11.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:12 compute-1 ceph-mon[80926]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:13.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:13.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:14 compute-1 ceph-mon[80926]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:15.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:16 compute-1 ceph-mon[80926]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 92 op/s
Oct 02 11:57:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:17.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:18 compute-1 ceph-mon[80926]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 0 B/s wr, 134 op/s
Oct 02 11:57:18 compute-1 podman[144855]: 2025-10-02 11:57:18.870171799 +0000 UTC m=+0.110850470 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 11:57:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:19.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:19.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:20 compute-1 ceph-mon[80926]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:21.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:22 compute-1 ceph-mon[80926]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:57:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:23.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:57:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:23.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:24 compute-1 ceph-mon[80926]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:24 compute-1 podman[145008]: 2025-10-02 11:57:24.795642176 +0000 UTC m=+0.053429312 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:57:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:25.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:57:25.899 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:57:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:26 compute-1 sudo[145074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:26 compute-1 sudo[145074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-1 sudo[145074]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-1 sudo[145099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:57:26 compute-1 sudo[145099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-1 sudo[145099]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-1 sudo[145124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:26 compute-1 sudo[145124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-1 sudo[145124]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:26 compute-1 sudo[145149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:57:26 compute-1 sudo[145149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:26 compute-1 ceph-mon[80926]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 145 op/s
Oct 02 11:57:26 compute-1 sudo[145149]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:57:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:57:28 compute-1 ceph-mon[80926]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Oct 02 11:57:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:29.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:30 compute-1 ceph-mon[80926]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Oct 02 11:57:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:31.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:31.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:32 compute-1 ceph-mon[80926]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:33.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:34 compute-1 ceph-mon[80926]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:34 compute-1 sudo[145212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:57:34 compute-1 sudo[145212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:34 compute-1 sudo[145212]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:34 compute-1 sudo[145237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:57:34 compute-1 sudo[145237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:57:34 compute-1 sudo[145237]: pam_unix(sudo:session): session closed for user root
Oct 02 11:57:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:35.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:57:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:36 compute-1 ceph-mon[80926]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:37.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:38 compute-1 ceph-mon[80926]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:39.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:40 compute-1 ceph-mon[80926]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:42 compute-1 ceph-mon[80926]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:43.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:45.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:45 compute-1 ceph-mon[80926]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:46 compute-1 kernel: SELinux:  Converting 2765 SID table entries...
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:57:46 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:57:46 compute-1 ceph-mon[80926]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:57:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:57:48 compute-1 ceph-mon[80926]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:49.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:49 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 02 11:57:49 compute-1 podman[145271]: 2025-10-02 11:57:49.851152369 +0000 UTC m=+0.103015263 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 11:57:50 compute-1 ceph-mon[80926]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:51.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:57:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:57:52 compute-1 ceph-mon[80926]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:57:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:57:54 compute-1 ceph-mon[80926]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:55 compute-1 podman[145297]: 2025-10-02 11:57:55.811182244 +0000 UTC m=+0.057134459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 11:57:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:57:56 compute-1 ceph-mon[80926]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:57.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:58 compute-1 kernel: SELinux:  Converting 2765 SID table entries...
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:57:58 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:57:58 compute-1 ceph-mon[80926]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:57:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:57:59.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:57:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:57:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:57:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:57:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:00 compute-1 ceph-mon[80926]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:01.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:58:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:01.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:58:02 compute-1 ceph-mon[80926]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:03.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:04 compute-1 ceph-mon[80926]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:05.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:06 compute-1 ceph-mon[80926]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:07.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:08 compute-1 ceph-mon[80926]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:10 compute-1 ceph-mon[80926]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:11.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:12 compute-1 ceph-mon[80926]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:13.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:14 compute-1 ceph-mon[80926]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:16 compute-1 ceph-mon[80926]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:17.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:18 compute-1 ceph-mon[80926]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:58:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:58:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:20 compute-1 ceph-mon[80926]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:20 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 02 11:58:20 compute-1 podman[151942]: 2025-10-02 11:58:20.838407398 +0000 UTC m=+0.083478038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 11:58:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:21.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:22 compute-1 ceph-mon[80926]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:24 compute-1 ceph-mon[80926]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:25.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:58:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:58:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:26 compute-1 ceph-mon[80926]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:26 compute-1 podman[156255]: 2025-10-02 11:58:26.786920381 +0000 UTC m=+0.043283348 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 11:58:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:27.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:27.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:28 compute-1 ceph-mon[80926]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:30 compute-1 ceph-mon[80926]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:31.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:32 compute-1 ceph-mon[80926]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:33.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:34 compute-1 ceph-mon[80926]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:34 compute-1 sudo[162031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:34 compute-1 sudo[162031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:34 compute-1 sudo[162031]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:34 compute-1 sudo[162108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:58:34 compute-1 sudo[162108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:34 compute-1 sudo[162108]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-1 sudo[162153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:35 compute-1 sudo[162153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-1 sudo[162153]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-1 sudo[162178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:58:35 compute-1 sudo[162178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:35 compute-1 sudo[162178]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 11:58:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:35.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:58:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:58:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:36 compute-1 ceph-mon[80926]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:38 compute-1 ceph-mon[80926]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:39.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:40 compute-1 ceph-mon[80926]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:41.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:42 compute-1 ceph-mon[80926]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:58:42 compute-1 sudo[162248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:58:42 compute-1 sudo[162248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:42 compute-1 sudo[162248]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:42 compute-1 sudo[162273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:58:42 compute-1 sudo[162273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:58:42 compute-1 sudo[162273]: pam_unix(sudo:session): session closed for user root
Oct 02 11:58:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:44 compute-1 ceph-mon[80926]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:46 compute-1 ceph-mon[80926]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:47.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:47 compute-1 kernel: SELinux:  Converting 2766 SID table entries...
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 02 11:58:47 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 02 11:58:48 compute-1 ceph-mon[80926]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:48 compute-1 groupadd[162310]: group added to /etc/group: name=dnsmasq, GID=992
Oct 02 11:58:48 compute-1 groupadd[162310]: group added to /etc/gshadow: name=dnsmasq
Oct 02 11:58:48 compute-1 groupadd[162310]: new group: name=dnsmasq, GID=992
Oct 02 11:58:48 compute-1 useradd[162317]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 02 11:58:48 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:58:48 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 02 11:58:48 compute-1 dbus-broker-launch[772]: Noticed file-system modification, trigger reload.
Oct 02 11:58:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:49 compute-1 groupadd[162330]: group added to /etc/group: name=clevis, GID=991
Oct 02 11:58:49 compute-1 groupadd[162330]: group added to /etc/gshadow: name=clevis
Oct 02 11:58:49 compute-1 groupadd[162330]: new group: name=clevis, GID=991
Oct 02 11:58:49 compute-1 useradd[162337]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 02 11:58:49 compute-1 usermod[162347]: add 'clevis' to group 'tss'
Oct 02 11:58:49 compute-1 usermod[162347]: add 'clevis' to shadow group 'tss'
Oct 02 11:58:50 compute-1 ceph-mon[80926]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:51.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:51 compute-1 podman[162368]: 2025-10-02 11:58:51.851063056 +0000 UTC m=+0.096297659 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 11:58:52 compute-1 polkitd[6956]: Reloading rules
Oct 02 11:58:52 compute-1 polkitd[6956]: Collecting garbage unconditionally...
Oct 02 11:58:52 compute-1 polkitd[6956]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:58:52 compute-1 polkitd[6956]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:58:52 compute-1 polkitd[6956]: Finished loading, compiling and executing 4 rules
Oct 02 11:58:52 compute-1 polkitd[6956]: Reloading rules
Oct 02 11:58:52 compute-1 polkitd[6956]: Collecting garbage unconditionally...
Oct 02 11:58:52 compute-1 polkitd[6956]: Loading rules from directory /etc/polkit-1/rules.d
Oct 02 11:58:52 compute-1 polkitd[6956]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 02 11:58:52 compute-1 polkitd[6956]: Finished loading, compiling and executing 4 rules
Oct 02 11:58:52 compute-1 ceph-mon[80926]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:53 compute-1 groupadd[162560]: group added to /etc/group: name=ceph, GID=167
Oct 02 11:58:53 compute-1 groupadd[162560]: group added to /etc/gshadow: name=ceph
Oct 02 11:58:53 compute-1 groupadd[162560]: new group: name=ceph, GID=167
Oct 02 11:58:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:53.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:53 compute-1 useradd[162566]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 02 11:58:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:54 compute-1 ceph-mon[80926]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:58:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:58:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:55.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:58:56 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Oct 02 11:58:56 compute-1 sshd[1007]: Received signal 15; terminating.
Oct 02 11:58:56 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Oct 02 11:58:56 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Oct 02 11:58:56 compute-1 systemd[1]: sshd.service: Consumed 1.855s CPU time, read 0B from disk, written 4.0K to disk.
Oct 02 11:58:56 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Oct 02 11:58:56 compute-1 systemd[1]: Stopping sshd-keygen.target...
Oct 02 11:58:56 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:58:56 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:58:56 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 02 11:58:56 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct 02 11:58:56 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct 02 11:58:56 compute-1 sshd[163171]: Server listening on 0.0.0.0 port 22.
Oct 02 11:58:56 compute-1 sshd[163171]: Server listening on :: port 22.
Oct 02 11:58:56 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct 02 11:58:56 compute-1 ceph-mon[80926]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:58:56 compute-1 podman[163287]: 2025-10-02 11:58:56.882166741 +0000 UTC m=+0.053474078 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 11:58:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:57.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:57 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 11:58:57 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 11:58:57 compute-1 systemd[1]: Reloading.
Oct 02 11:58:57 compute-1 systemd-rc-local-generator[163451]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:58:57 compute-1 systemd-sysv-generator[163454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:58:58 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 11:58:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:58:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:58:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:58:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:58:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:58:59 compute-1 ceph-mon[80926]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:01.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:01 compute-1 ceph-mon[80926]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:02 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct 02 11:59:02 compute-1 PackageKit[168653]: daemon start
Oct 02 11:59:02 compute-1 systemd[1]: Started PackageKit Daemon.
Oct 02 11:59:02 compute-1 sudo[144842]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:02 compute-1 ceph-mon[80926]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:04 compute-1 ceph-mon[80926]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:05 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 11:59:05 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 11:59:05 compute-1 systemd[1]: man-db-cache-update.service: Consumed 9.937s CPU time.
Oct 02 11:59:05 compute-1 systemd[1]: run-r7427ec1af2a84507a90f10b5b61268cc.service: Deactivated successfully.
Oct 02 11:59:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:05.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:06 compute-1 ceph-mon[80926]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:07.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:08 compute-1 sudo[171845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvgfakrwmwiolonisjkrmobwjxftqdvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406347.5067391-974-32951152817455/AnsiballZ_systemd.py'
Oct 02 11:59:08 compute-1 sudo[171845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:08 compute-1 ceph-mon[80926]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:08 compute-1 python3.9[171847]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:08 compute-1 systemd[1]: Reloading.
Oct 02 11:59:08 compute-1 systemd-rc-local-generator[171876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:08 compute-1 systemd-sysv-generator[171879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:08 compute-1 sudo[171845]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:09 compute-1 sudo[172034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnlispjccpwfyduowbfynpmvdhldzpbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406348.8917956-974-228905314849027/AnsiballZ_systemd.py'
Oct 02 11:59:09 compute-1 sudo[172034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:09.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:09 compute-1 python3.9[172036]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:09 compute-1 systemd[1]: Reloading.
Oct 02 11:59:09 compute-1 systemd-rc-local-generator[172065]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:09 compute-1 systemd-sysv-generator[172069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:09 compute-1 sudo[172034]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:10 compute-1 sudo[172224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcakbtdhdvhzjoinjikvrapyrxwoskch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406349.970961-974-23118181335453/AnsiballZ_systemd.py'
Oct 02 11:59:10 compute-1 sudo[172224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:10 compute-1 ceph-mon[80926]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:10 compute-1 python3.9[172226]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:10 compute-1 systemd[1]: Reloading.
Oct 02 11:59:10 compute-1 systemd-sysv-generator[172256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:10 compute-1 systemd-rc-local-generator[172251]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:10 compute-1 sudo[172224]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:11.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:11 compute-1 sudo[172414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpurucgxybshnlbpycdkdmpnnulwyttn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406351.034545-974-43705108255635/AnsiballZ_systemd.py'
Oct 02 11:59:11 compute-1 sudo[172414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:11.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:11 compute-1 python3.9[172416]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:11 compute-1 systemd[1]: Reloading.
Oct 02 11:59:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:11 compute-1 systemd-rc-local-generator[172444]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:11 compute-1 systemd-sysv-generator[172448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:12 compute-1 sudo[172414]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:12 compute-1 ceph-mon[80926]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:12 compute-1 sudo[172603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqlfbdjzgxoomgkcdavreobdusrgghsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406352.5224624-1061-181343294359870/AnsiballZ_systemd.py'
Oct 02 11:59:12 compute-1 sudo[172603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:13 compute-1 python3.9[172605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:13 compute-1 systemd[1]: Reloading.
Oct 02 11:59:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:13.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:13 compute-1 systemd-sysv-generator[172639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:13 compute-1 systemd-rc-local-generator[172636]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:13 compute-1 sudo[172603]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:13.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:13 compute-1 sudo[172793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwyobkplhbggyfcboylrwuxuhsrqmlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406353.5978608-1061-90379805160946/AnsiballZ_systemd.py'
Oct 02 11:59:13 compute-1 sudo[172793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:14 compute-1 python3.9[172795]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:14 compute-1 systemd[1]: Reloading.
Oct 02 11:59:14 compute-1 ceph-mon[80926]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:14 compute-1 systemd-rc-local-generator[172824]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:14 compute-1 systemd-sysv-generator[172829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:14 compute-1 sudo[172793]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:14 compute-1 sudo[172983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcmferdkampbsgtffrtyvhnjerldibwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406354.6916404-1061-15440853203699/AnsiballZ_systemd.py'
Oct 02 11:59:14 compute-1 sudo[172983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:15 compute-1 python3.9[172985]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:15 compute-1 systemd[1]: Reloading.
Oct 02 11:59:15 compute-1 systemd-rc-local-generator[173014]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:15 compute-1 systemd-sysv-generator[173019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:15.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:15 compute-1 sudo[172983]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:16 compute-1 sudo[173173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytnskilnmwbfwuayrtloibgfnehjqnls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406355.8366294-1061-66594976991523/AnsiballZ_systemd.py'
Oct 02 11:59:16 compute-1 sudo[173173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:16 compute-1 ceph-mon[80926]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:16 compute-1 python3.9[173175]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:16 compute-1 sudo[173173]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:16 compute-1 sudo[173328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvhmtaljleugkioysfrnuuqddvorgin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406356.6971548-1061-238060957982761/AnsiballZ_systemd.py'
Oct 02 11:59:16 compute-1 sudo[173328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:17.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:17 compute-1 python3.9[173330]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:17 compute-1 systemd[1]: Reloading.
Oct 02 11:59:17 compute-1 systemd-sysv-generator[173363]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:17 compute-1 systemd-rc-local-generator[173359]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:17.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:17 compute-1 sudo[173328]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:18 compute-1 sudo[173517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddexqohyjlxkkgsorbgscolhfnvybyic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406358.0441196-1169-200359949136126/AnsiballZ_systemd.py'
Oct 02 11:59:18 compute-1 sudo[173517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:18 compute-1 python3.9[173519]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 02 11:59:18 compute-1 systemd[1]: Reloading.
Oct 02 11:59:18 compute-1 systemd-rc-local-generator[173545]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 11:59:18 compute-1 systemd-sysv-generator[173548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 11:59:18 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 02 11:59:19 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 02 11:59:19 compute-1 sudo[173517]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:19.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:19 compute-1 ceph-mon[80926]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:19.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:19 compute-1 sudo[173710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftsetdufbndxwlsjxqziuvetegyhucz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406359.365245-1193-185561400075977/AnsiballZ_systemd.py'
Oct 02 11:59:19 compute-1 sudo[173710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:19 compute-1 python3.9[173712]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:20 compute-1 sudo[173710]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:20 compute-1 sudo[173865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikgkggalrsdtoqxywwrwxmhuxxxzdnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406360.1614408-1193-190557759145759/AnsiballZ_systemd.py'
Oct 02 11:59:20 compute-1 sudo[173865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:20 compute-1 python3.9[173867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:20 compute-1 sudo[173865]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:21 compute-1 sudo[174020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aazlimifwlmpyyfzdybsrxtgijjhgazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406360.9142334-1193-120918391274844/AnsiballZ_systemd.py'
Oct 02 11:59:21 compute-1 sudo[174020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:21 compute-1 ceph-mon[80926]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:21 compute-1 python3.9[174022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:21 compute-1 sudo[174020]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:21.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:21 compute-1 sudo[174175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhqjtqgiyumyqulptsricowlizpzcshm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406361.6657333-1193-127140577553931/AnsiballZ_systemd.py'
Oct 02 11:59:21 compute-1 sudo[174175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:22 compute-1 podman[174177]: 2025-10-02 11:59:22.005001264 +0000 UTC m=+0.082194909 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 11:59:22 compute-1 python3.9[174178]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:22 compute-1 sudo[174175]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:22 compute-1 sudo[174358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpiganeuksfnhujxjwqyplnmfrymjykf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406362.4112885-1193-85373224490765/AnsiballZ_systemd.py'
Oct 02 11:59:22 compute-1 sudo[174358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:23 compute-1 python3.9[174360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:23 compute-1 sudo[174358]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:23 compute-1 ceph-mon[80926]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:23 compute-1 sudo[174513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfxrwsfptitxjmrauvnstohjzpzerfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406363.2292056-1193-196591227831591/AnsiballZ_systemd.py'
Oct 02 11:59:23 compute-1 sudo[174513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:23.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:23 compute-1 python3.9[174515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:23 compute-1 sudo[174513]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:24 compute-1 sudo[174668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kupxedrjvfysldbukqcdrmvkbjdpxxuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406363.9850314-1193-129429307397112/AnsiballZ_systemd.py'
Oct 02 11:59:24 compute-1 sudo[174668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:24 compute-1 python3.9[174670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:24 compute-1 sudo[174668]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:25 compute-1 sudo[174823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpifuhkfbrmscuarojvoabhvqxtdlenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406364.7793694-1193-6777918715932/AnsiballZ_systemd.py'
Oct 02 11:59:25 compute-1 sudo[174823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:25 compute-1 python3.9[174825]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:25 compute-1 sudo[174823]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:25 compute-1 ceph-mon[80926]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:25.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:25 compute-1 sudo[174978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hneldsqjocklmodzzolnsubvsmdxhxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406365.5178967-1193-31636393763583/AnsiballZ_systemd.py'
Oct 02 11:59:25 compute-1 sudo[174978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.900 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 11:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 11:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 11:59:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 11:59:26 compute-1 python3.9[174980]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:26 compute-1 sudo[174978]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:26 compute-1 sudo[175133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdhrimoypkmjtkwzxortwtkjttxaouin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406366.2237887-1193-233406790382722/AnsiballZ_systemd.py'
Oct 02 11:59:26 compute-1 sudo[175133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:26 compute-1 python3.9[175135]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:26 compute-1 sudo[175133]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:27 compute-1 sudo[175300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfqttpoofsroyrktrinlmjhwbelxtua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406366.9965453-1193-132238540536905/AnsiballZ_systemd.py'
Oct 02 11:59:27 compute-1 sudo[175300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:27 compute-1 podman[175262]: 2025-10-02 11:59:27.275982405 +0000 UTC m=+0.052861627 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 11:59:27 compute-1 ceph-mon[80926]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:27 compute-1 python3.9[175306]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:27 compute-1 sudo[175300]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:27.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:27 compute-1 sudo[175462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glspwjgotwdwrbkcwirldfzgzmrvysxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406367.7339349-1193-79913027885138/AnsiballZ_systemd.py'
Oct 02 11:59:27 compute-1 sudo[175462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:28 compute-1 python3.9[175464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:28 compute-1 sudo[175462]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:28 compute-1 sudo[175617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqzrifnafphkyfyzowygawypedoqtguo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406368.4598527-1193-152748518070684/AnsiballZ_systemd.py'
Oct 02 11:59:28 compute-1 sudo[175617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:29 compute-1 python3.9[175619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:29 compute-1 sudo[175617]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:29.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:29 compute-1 sudo[175772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgbcikcdfbfrzfbbpzezumgtmhwunlop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406369.229354-1193-212317160262631/AnsiballZ_systemd.py'
Oct 02 11:59:29 compute-1 sudo[175772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:29 compute-1 ceph-mon[80926]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:29.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:29 compute-1 python3.9[175774]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 02 11:59:29 compute-1 sudo[175772]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:31 compute-1 ceph-mon[80926]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:32 compute-1 sudo[175927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbdqoovoodclljbesmezvzxdzzdgmgeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406372.0565326-1499-126401486102575/AnsiballZ_file.py'
Oct 02 11:59:32 compute-1 sudo[175927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:32 compute-1 python3.9[175929]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:32 compute-1 sudo[175927]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:32 compute-1 sudo[176079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzehudyjdycinpjfdeytcfwykggctclw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406372.6783526-1499-3039048430110/AnsiballZ_file.py'
Oct 02 11:59:32 compute-1 sudo[176079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:33 compute-1 python3.9[176081]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:33 compute-1 sudo[176079]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:33.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:33 compute-1 sudo[176231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobggcfoddewmuidybjdodrxmmptmpel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406373.2418742-1499-278848389393239/AnsiballZ_file.py'
Oct 02 11:59:33 compute-1 sudo[176231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:33 compute-1 ceph-mon[80926]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:33 compute-1 python3.9[176233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:33 compute-1 sudo[176231]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:34 compute-1 sudo[176383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xktdkymorufdryieucjujuahwxhmniim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406373.865068-1499-39620052513009/AnsiballZ_file.py'
Oct 02 11:59:34 compute-1 sudo[176383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:34 compute-1 python3.9[176385]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:34 compute-1 sudo[176383]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:34 compute-1 sudo[176535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyecairilcacittcvfejggoxxvkzzmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406374.4438128-1499-261714036774616/AnsiballZ_file.py'
Oct 02 11:59:34 compute-1 sudo[176535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:34 compute-1 python3.9[176537]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:34 compute-1 sudo[176535]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:35 compute-1 sudo[176687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhlhmdypfuydjldozebzbjozrykzgchk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406375.001499-1499-127380287607319/AnsiballZ_file.py'
Oct 02 11:59:35 compute-1 sudo[176687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:35 compute-1 python3.9[176689]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 11:59:35 compute-1 sudo[176687]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:35 compute-1 ceph-mon[80926]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:35.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:37 compute-1 sudo[176839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nezfwbccpolavmcgyskoorozpfatmftk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406376.6771562-1628-261065329655687/AnsiballZ_stat.py'
Oct 02 11:59:37 compute-1 sudo[176839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:37 compute-1 python3.9[176841]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:37 compute-1 sudo[176839]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:37 compute-1 ceph-mon[80926]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:37.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:37 compute-1 sudo[176964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnucgxxiolluuajakacmwjybfzgsyklm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406376.6771562-1628-261065329655687/AnsiballZ_copy.py'
Oct 02 11:59:37 compute-1 sudo[176964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:38 compute-1 python3.9[176966]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406376.6771562-1628-261065329655687/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:38 compute-1 sudo[176964]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:38 compute-1 sudo[177116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqcgazgifxwypbwfcqkraptswyukckau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406378.1532433-1628-248017595930972/AnsiballZ_stat.py'
Oct 02 11:59:38 compute-1 sudo[177116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:38 compute-1 python3.9[177118]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:38 compute-1 sudo[177116]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:39 compute-1 sudo[177241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqbjltauqncembojnumtsnebcpdxvlkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406378.1532433-1628-248017595930972/AnsiballZ_copy.py'
Oct 02 11:59:39 compute-1 sudo[177241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:39 compute-1 python3.9[177243]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406378.1532433-1628-248017595930972/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:39 compute-1 sudo[177241]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:39 compute-1 sudo[177393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpzaryvkfphjtrfabgyrdzlmqibzgzeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406379.3667595-1628-58310110771014/AnsiballZ_stat.py'
Oct 02 11:59:39 compute-1 sudo[177393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:39 compute-1 ceph-mon[80926]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:39 compute-1 python3.9[177395]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:39 compute-1 sudo[177393]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:40 compute-1 sudo[177518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icaqhzovcxwoeoupqsamyhyfuzvqonlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406379.3667595-1628-58310110771014/AnsiballZ_copy.py'
Oct 02 11:59:40 compute-1 sudo[177518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:40 compute-1 python3.9[177520]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406379.3667595-1628-58310110771014/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:40 compute-1 sudo[177518]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:40 compute-1 sudo[177670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhfsecjazswllgktblbfjwjznkoimiyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406380.552646-1628-59987462646004/AnsiballZ_stat.py'
Oct 02 11:59:40 compute-1 sudo[177670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:41 compute-1 python3.9[177672]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:41 compute-1 sudo[177670]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:41.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:41 compute-1 sudo[177795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldszwwjvoyvwogpgwrknrtjdcppvkppo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406380.552646-1628-59987462646004/AnsiballZ_copy.py'
Oct 02 11:59:41 compute-1 sudo[177795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:41 compute-1 python3.9[177797]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406380.552646-1628-59987462646004/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:41 compute-1 sudo[177795]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:41.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:41 compute-1 ceph-mon[80926]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:41 compute-1 sudo[177947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasfrdptabejddnsfrxbuigxlwvvgpmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406381.7061007-1628-184955012675798/AnsiballZ_stat.py'
Oct 02 11:59:41 compute-1 sudo[177947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:42 compute-1 python3.9[177949]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:42 compute-1 sudo[177947]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-1 sudo[178072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnurenhljcoklgjqgneikblxglwvofr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406381.7061007-1628-184955012675798/AnsiballZ_copy.py'
Oct 02 11:59:42 compute-1 sudo[178072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:42 compute-1 sudo[178075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:42 compute-1 sudo[178075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-1 sudo[178075]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-1 sudo[178100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:42 compute-1 sudo[178100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-1 sudo[178100]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-1 sudo[178125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:42 compute-1 sudo[178125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-1 sudo[178125]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:42 compute-1 python3.9[178074]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406381.7061007-1628-184955012675798/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:42 compute-1 sudo[178150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 11:59:42 compute-1 sudo[178150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:42 compute-1 sudo[178072]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:43.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:43 compute-1 sudo[178399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewppbtughcykkivgzcyvnsakgxrohwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406383.0405846-1628-266298787189070/AnsiballZ_stat.py'
Oct 02 11:59:43 compute-1 sudo[178399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:43 compute-1 podman[178397]: 2025-10-02 11:59:43.434595797 +0000 UTC m=+0.146129188 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 11:59:43 compute-1 python3.9[178406]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:43 compute-1 podman[178397]: 2025-10-02 11:59:43.528704174 +0000 UTC m=+0.240237525 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct 02 11:59:43 compute-1 sudo[178399]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:43 compute-1 ceph-mon[80926]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:43 compute-1 sudo[178613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxadyrcosiwnjuodxqhafpgbcgtpvsqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406383.0405846-1628-266298787189070/AnsiballZ_copy.py'
Oct 02 11:59:43 compute-1 sudo[178613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:44 compute-1 sudo[178150]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 python3.9[178619]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406383.0405846-1628-266298787189070/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:44 compute-1 sudo[178613]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 sudo[178650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:44 compute-1 sudo[178650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:44 compute-1 sudo[178650]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 sudo[178699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 11:59:44 compute-1 sudo[178699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:44 compute-1 sudo[178699]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 sudo[178747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:44 compute-1 sudo[178747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:44 compute-1 sudo[178747]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 sudo[178800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 11:59:44 compute-1 sudo[178800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:44 compute-1 sudo[178912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebijmogrsxenhfcaoxafhuoztkpfvnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406384.266439-1628-269313726276815/AnsiballZ_stat.py'
Oct 02 11:59:44 compute-1 sudo[178912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:44 compute-1 python3.9[178916]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:44 compute-1 sudo[178912]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:44 compute-1 sudo[178800]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-1 sudo[179054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnquszjglfhpshgopdxmanuhletzzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406384.266439-1628-269313726276815/AnsiballZ_copy.py'
Oct 02 11:59:45 compute-1 sudo[179054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:45 compute-1 python3.9[179056]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406384.266439-1628-269313726276815/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:45 compute-1 sudo[179054]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:45.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:45 compute-1 sudo[179206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eedkxgoeaxrkiashblzdlmufwbwmtiwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406385.3850121-1628-168034311419829/AnsiballZ_stat.py'
Oct 02 11:59:45 compute-1 sudo[179206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:45.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:45 compute-1 python3.9[179208]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 11:59:45 compute-1 sudo[179206]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-1 ceph-mon[80926]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 11:59:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 11:59:46 compute-1 sudo[179331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxxjlkjducmfxjdhxlieadmrchifkoko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406385.3850121-1628-168034311419829/AnsiballZ_copy.py'
Oct 02 11:59:46 compute-1 sudo[179331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:46 compute-1 python3.9[179333]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759406385.3850121-1628-168034311419829/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:46 compute-1 sudo[179331]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:47.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:47.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:48 compute-1 ceph-mon[80926]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:48 compute-1 sudo[179483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzuhqcsxfmsdbhwbgtqeqhwlqylkgckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406388.2739475-1967-131891844420896/AnsiballZ_command.py'
Oct 02 11:59:48 compute-1 sudo[179483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:48 compute-1 python3.9[179485]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 02 11:59:48 compute-1 sudo[179483]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:49 compute-1 sudo[179636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egywyozcdwdrxsvugppdvnavbdtgskmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406389.0391932-1994-177651542885867/AnsiballZ_file.py'
Oct 02 11:59:49 compute-1 sudo[179636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:49 compute-1 python3.9[179638]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:49 compute-1 sudo[179636]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.642029) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389642059, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2186, "num_deletes": 253, "total_data_size": 5565293, "memory_usage": 5643816, "flush_reason": "Manual Compaction"}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389664126, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3645687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12352, "largest_seqno": 14533, "table_properties": {"data_size": 3636755, "index_size": 5618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 16653, "raw_average_key_size": 18, "raw_value_size": 3619026, "raw_average_value_size": 4080, "num_data_blocks": 251, "num_entries": 887, "num_filter_entries": 887, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406177, "oldest_key_time": 1759406177, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 22207 microseconds, and 7449 cpu microseconds.
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.664232) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3645687 bytes OK
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.664255) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665975) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.665998) EVENT_LOG_v1 {"time_micros": 1759406389665991, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.666016) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5555606, prev total WAL file size 5555606, number of live WAL files 2.
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.667492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323533' seq:0, type:0; will stop at (end)
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3560KB)], [24(8240KB)]
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389667543, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12083688, "oldest_snapshot_seqno": -1}
Oct 02 11:59:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4333 keys, 11528601 bytes, temperature: kUnknown
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389733876, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11528601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493930, "index_size": 22721, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 105264, "raw_average_key_size": 24, "raw_value_size": 11410070, "raw_average_value_size": 2633, "num_data_blocks": 969, "num_entries": 4333, "num_filter_entries": 4333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.734091) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11528601 bytes
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.735551) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 173.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.5) write-amplify(3.2) OK, records in: 4857, records dropped: 524 output_compression: NoCompression
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.735568) EVENT_LOG_v1 {"time_micros": 1759406389735559, "job": 12, "event": "compaction_finished", "compaction_time_micros": 66407, "compaction_time_cpu_micros": 23966, "output_level": 6, "num_output_files": 1, "total_output_size": 11528601, "num_input_records": 4857, "num_output_records": 4333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389736237, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406389737526, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.667447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-11:59:49.737640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 11:59:49 compute-1 sudo[179788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmscmjnzulqirvzcrnoniwrvzygkmboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406389.6164894-1994-25490836322226/AnsiballZ_file.py'
Oct 02 11:59:49 compute-1 sudo[179788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:50 compute-1 python3.9[179790]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:50 compute-1 sudo[179788]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:50 compute-1 ceph-mon[80926]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:50 compute-1 sudo[179940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxlkzcoaygzvxlifjsphrmydskspipna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406390.349029-1994-27997335129747/AnsiballZ_file.py'
Oct 02 11:59:50 compute-1 sudo[179940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:50 compute-1 python3.9[179942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:50 compute-1 sudo[179940]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:51 compute-1 ceph-mon[80926]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:51 compute-1 sudo[180092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspdkzdijchsdabvtynqnhnwesdlphtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406391.0491395-1994-225951138245231/AnsiballZ_file.py'
Oct 02 11:59:51 compute-1 sudo[180092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:51 compute-1 python3.9[180094]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:51 compute-1 sudo[180092]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:51.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:51 compute-1 sudo[180244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrgzvadspnxwhqnfjvntvrlgkpaduztk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406391.6454532-1994-151690216677508/AnsiballZ_file.py'
Oct 02 11:59:51 compute-1 sudo[180244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:52 compute-1 python3.9[180246]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:52 compute-1 sudo[180244]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-1 sudo[180407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbqlcwqfglkhjemstmxrawvxjpkvcvre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406392.285746-1994-149293690250975/AnsiballZ_file.py'
Oct 02 11:59:52 compute-1 sudo[180407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:52 compute-1 podman[180370]: 2025-10-02 11:59:52.61433803 +0000 UTC m=+0.093851909 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 11:59:52 compute-1 sudo[180423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 11:59:52 compute-1 sudo[180423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:52 compute-1 sudo[180423]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-1 sudo[180448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 11:59:52 compute-1 sudo[180448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 11:59:52 compute-1 sudo[180448]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:52 compute-1 python3.9[180411]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:52 compute-1 sudo[180407]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:53 compute-1 sudo[180622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljyalqgkrpqednrobfyytjeknjkwiihc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406392.967179-1994-117141881035290/AnsiballZ_file.py'
Oct 02 11:59:53 compute-1 sudo[180622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:53.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:53 compute-1 python3.9[180624]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:53 compute-1 sudo[180622]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:53 compute-1 ceph-mon[80926]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 11:59:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:53 compute-1 sudo[180774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevdcnpgrkkoozxyyonuardzzxquqdht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406393.5325472-1994-128767502144012/AnsiballZ_file.py'
Oct 02 11:59:53 compute-1 sudo[180774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:53 compute-1 python3.9[180776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:53 compute-1 sudo[180774]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:54 compute-1 sudo[180926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwmpzgkhtbeskatobpmtgaardifwvxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406394.1122303-1994-256519922770466/AnsiballZ_file.py'
Oct 02 11:59:54 compute-1 sudo[180926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:54 compute-1 python3.9[180928]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:54 compute-1 sudo[180926]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:54 compute-1 sudo[181078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfxekhmcajqzssuywespgywvxtxdpch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406394.745677-1994-174174301012703/AnsiballZ_file.py'
Oct 02 11:59:54 compute-1 sudo[181078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:55 compute-1 python3.9[181080]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:55 compute-1 sudo[181078]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:55 compute-1 ceph-mon[80926]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:55 compute-1 sudo[181230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjlzwxfmycwoftadvohatrrmcpzebtko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406395.3767526-1994-84517314848708/AnsiballZ_file.py'
Oct 02 11:59:55 compute-1 sudo[181230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:55 compute-1 python3.9[181232]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:55 compute-1 sudo[181230]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:56 compute-1 sudo[181382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyibbxnbffatkubdorovargoghiadade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406395.9691806-1994-209601536273269/AnsiballZ_file.py'
Oct 02 11:59:56 compute-1 sudo[181382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:56 compute-1 python3.9[181384]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:56 compute-1 sudo[181382]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 11:59:56 compute-1 sudo[181534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxnqspzqcityryroskoqvzgbpwlnaqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406396.5516822-1994-163348565176593/AnsiballZ_file.py'
Oct 02 11:59:56 compute-1 sudo[181534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:56 compute-1 python3.9[181536]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:56 compute-1 sudo[181534]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:57.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:57 compute-1 sudo[181703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujynflcjfqfivnaarirazecmxntwlnzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406397.0978065-1994-252074692011098/AnsiballZ_file.py'
Oct 02 11:59:57 compute-1 podman[181660]: 2025-10-02 11:59:57.386798121 +0000 UTC m=+0.052699002 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 11:59:57 compute-1 sudo[181703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 11:59:57 compute-1 ceph-mon[80926]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:57 compute-1 python3.9[181707]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 11:59:57 compute-1 sudo[181703]: pam_unix(sudo:session): session closed for user root
Oct 02 11:59:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 11:59:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:11:59:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 11:59:59 compute-1 ceph-mon[80926]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 11:59:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 11:59:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 11:59:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:11:59:59.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 11:59:59 compute-1 sudo[181857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydymkpgftvavthxlkqanvyhcmqeitpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406399.5549986-2291-102109785365175/AnsiballZ_stat.py'
Oct 02 11:59:59 compute-1 sudo[181857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:00 compute-1 python3.9[181859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:00 compute-1 sudo[181857]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:00 compute-1 sudo[181980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfrqrwkspskmomiaxckwjikkirctopzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406399.5549986-2291-102109785365175/AnsiballZ_copy.py'
Oct 02 12:00:00 compute-1 sudo[181980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:00 compute-1 python3.9[181982]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406399.5549986-2291-102109785365175/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:00 compute-1 sudo[181980]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:00:01 compute-1 sudo[182132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqcuzpkmzgalavmagaevtxuqdcwucxhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406400.737095-2291-174063598240378/AnsiballZ_stat.py'
Oct 02 12:00:01 compute-1 sudo[182132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:01 compute-1 python3.9[182134]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:01 compute-1 sudo[182132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:01.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:01 compute-1 sudo[182255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnjtdsmmmxqnuloksnmzqnffocpiqyyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406400.737095-2291-174063598240378/AnsiballZ_copy.py'
Oct 02 12:00:01 compute-1 sudo[182255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:01 compute-1 ceph-mon[80926]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:01 compute-1 python3.9[182257]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406400.737095-2291-174063598240378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:01 compute-1 sudo[182255]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:02 compute-1 sudo[182407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjrhzhkhnxbqgkteafrfoxietxbrdzmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406401.8630579-2291-105962517836569/AnsiballZ_stat.py'
Oct 02 12:00:02 compute-1 sudo[182407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:02 compute-1 python3.9[182409]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:02 compute-1 sudo[182407]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:02 compute-1 sudo[182530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwbvkzajdncgiateskizuouimntxoqyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406401.8630579-2291-105962517836569/AnsiballZ_copy.py'
Oct 02 12:00:02 compute-1 sudo[182530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:02 compute-1 python3.9[182532]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406401.8630579-2291-105962517836569/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:02 compute-1 sudo[182530]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:03 compute-1 sudo[182682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aossyurjbrqtlwzjmoezbzuafqbbmiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406403.0453568-2291-105942500914016/AnsiballZ_stat.py'
Oct 02 12:00:03 compute-1 sudo[182682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:03 compute-1 python3.9[182684]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:03 compute-1 sudo[182682]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:03 compute-1 ceph-mon[80926]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:00:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:03.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:00:03 compute-1 sudo[182805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzgrkuzpkrpciuqmleoytcdtsovvdymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406403.0453568-2291-105942500914016/AnsiballZ_copy.py'
Oct 02 12:00:03 compute-1 sudo[182805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:04 compute-1 python3.9[182807]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406403.0453568-2291-105942500914016/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:04 compute-1 sudo[182805]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:04 compute-1 sudo[182957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssqhswztqgulgpofrifhxypnlyjetmcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406404.2424357-2291-47537395873895/AnsiballZ_stat.py'
Oct 02 12:00:04 compute-1 sudo[182957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:04 compute-1 python3.9[182959]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:04 compute-1 sudo[182957]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:05 compute-1 sudo[183080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfzhdztlwonvjqyfkxmexxdgrtenclik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406404.2424357-2291-47537395873895/AnsiballZ_copy.py'
Oct 02 12:00:05 compute-1 sudo[183080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:05 compute-1 python3.9[183082]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406404.2424357-2291-47537395873895/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:05 compute-1 sudo[183080]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:05.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:05 compute-1 ceph-mon[80926]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:05 compute-1 sudo[183232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puhonexgtsbevjgzojosmlcpoxusfwhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406405.4242322-2291-226981980865608/AnsiballZ_stat.py'
Oct 02 12:00:05 compute-1 sudo[183232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:05 compute-1 python3.9[183234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:05 compute-1 sudo[183232]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:06 compute-1 sudo[183355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folbswlaluegsgwnvoalaczzzpqmenxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406405.4242322-2291-226981980865608/AnsiballZ_copy.py'
Oct 02 12:00:06 compute-1 sudo[183355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:06 compute-1 python3.9[183357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406405.4242322-2291-226981980865608/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:06 compute-1 sudo[183355]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:06 compute-1 sudo[183507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmriffokispcbnulbhspjilfztlimos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406406.6253943-2291-100206145998702/AnsiballZ_stat.py'
Oct 02 12:00:06 compute-1 sudo[183507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:07 compute-1 python3.9[183509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:07 compute-1 sudo[183507]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:07.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:07 compute-1 sudo[183630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhlwdudscxbbrratlkrpyfslqlzlphpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406406.6253943-2291-100206145998702/AnsiballZ_copy.py'
Oct 02 12:00:07 compute-1 sudo[183630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:07 compute-1 python3.9[183632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406406.6253943-2291-100206145998702/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:07 compute-1 sudo[183630]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:07.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:07 compute-1 ceph-mon[80926]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:08 compute-1 sudo[183782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvmzdhqmnwazvemgvhmbykgjfuehjbnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406407.751486-2291-218070813104067/AnsiballZ_stat.py'
Oct 02 12:00:08 compute-1 sudo[183782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:08 compute-1 python3.9[183784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:08 compute-1 sudo[183782]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:08 compute-1 sudo[183905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtnpuhwehnnycdqvhfqcgukpfjnxpsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406407.751486-2291-218070813104067/AnsiballZ_copy.py'
Oct 02 12:00:08 compute-1 sudo[183905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:08 compute-1 python3.9[183907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406407.751486-2291-218070813104067/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:08 compute-1 sudo[183905]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:09 compute-1 sudo[184057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhoptgwaommkohsnobvxivzsauoouaqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406408.901911-2291-162670625109885/AnsiballZ_stat.py'
Oct 02 12:00:09 compute-1 sudo[184057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:09.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:09 compute-1 python3.9[184059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:09 compute-1 sudo[184057]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:09 compute-1 sudo[184180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwtwxtpgtbtoqgpdqapsxlbgbuguvvre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406408.901911-2291-162670625109885/AnsiballZ_copy.py'
Oct 02 12:00:09 compute-1 sudo[184180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:09 compute-1 ceph-mon[80926]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:09 compute-1 python3.9[184182]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406408.901911-2291-162670625109885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:09 compute-1 sudo[184180]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:10 compute-1 sudo[184332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcbueozvkfuqpxtmoglrscyclyypgmbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406410.0764315-2291-62862961791825/AnsiballZ_stat.py'
Oct 02 12:00:10 compute-1 sudo[184332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:10 compute-1 python3.9[184334]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:10 compute-1 sudo[184332]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:10 compute-1 sudo[184455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgreboxpqjiiemaqtfywrchyxodresto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406410.0764315-2291-62862961791825/AnsiballZ_copy.py'
Oct 02 12:00:10 compute-1 sudo[184455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:11 compute-1 python3.9[184457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406410.0764315-2291-62862961791825/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:11 compute-1 sudo[184455]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:11 compute-1 sudo[184607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agejsynlbjogdojwbwzwbowrmbvdqtxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406411.1485138-2291-236579093308532/AnsiballZ_stat.py'
Oct 02 12:00:11 compute-1 sudo[184607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:11 compute-1 python3.9[184609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:11.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:11 compute-1 sudo[184607]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:11 compute-1 ceph-mon[80926]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:12 compute-1 sudo[184730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcuczchjddwzbuwszocfunuwwoaigir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406411.1485138-2291-236579093308532/AnsiballZ_copy.py'
Oct 02 12:00:12 compute-1 sudo[184730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:12 compute-1 python3.9[184732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406411.1485138-2291-236579093308532/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:12 compute-1 sudo[184730]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:12 compute-1 sudo[184882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxtfnzmpcslecesbdunxgqveshzclne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406412.4347644-2291-220416300987845/AnsiballZ_stat.py'
Oct 02 12:00:12 compute-1 sudo[184882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:12 compute-1 python3.9[184884]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:12 compute-1 sudo[184882]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:13 compute-1 sudo[185005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zibwmkjailfdommworcstkdpscfigtra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406412.4347644-2291-220416300987845/AnsiballZ_copy.py'
Oct 02 12:00:13 compute-1 sudo[185005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:13.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:13 compute-1 python3.9[185007]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406412.4347644-2291-220416300987845/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:13 compute-1 sudo[185005]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:13.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:13 compute-1 sudo[185157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcuztrockfjcsbgqzokhnzwwhmuyyyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406413.5511267-2291-73885059226937/AnsiballZ_stat.py'
Oct 02 12:00:13 compute-1 sudo[185157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:13 compute-1 ceph-mon[80926]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:13 compute-1 python3.9[185159]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:13 compute-1 sudo[185157]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-1 sudo[185280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvaoydmtubuxnocuvyzfhgqbjllrxgon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406413.5511267-2291-73885059226937/AnsiballZ_copy.py'
Oct 02 12:00:14 compute-1 sudo[185280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:14 compute-1 python3.9[185282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406413.5511267-2291-73885059226937/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:14 compute-1 sudo[185280]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:14 compute-1 sudo[185432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfmrpkcdarjdopteiboiskslwtlfgdft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406414.599189-2291-276667802500644/AnsiballZ_stat.py'
Oct 02 12:00:14 compute-1 sudo[185432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:15 compute-1 python3.9[185434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:15 compute-1 sudo[185432]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-1 sudo[185555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsrphblxgoskyvaesswzaturslgyiyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406414.599189-2291-276667802500644/AnsiballZ_copy.py'
Oct 02 12:00:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:15.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:15 compute-1 sudo[185555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:15 compute-1 python3.9[185557]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406414.599189-2291-276667802500644/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:15 compute-1 sudo[185555]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:15 compute-1 ceph-mon[80926]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:17.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:17 compute-1 ceph-mon[80926]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:18 compute-1 python3.9[185707]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:19 compute-1 sudo[185860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrkgehucnlbzkgxaxqripdudbmcudshu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406419.1762516-2909-253353211822024/AnsiballZ_seboolean.py'
Oct 02 12:00:19 compute-1 sudo[185860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:19 compute-1 python3.9[185862]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 02 12:00:19 compute-1 ceph-mon[80926]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:21 compute-1 ceph-mon[80926]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:22 compute-1 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 02 12:00:22 compute-1 sudo[185860]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:22 compute-1 podman[185867]: 2025-10-02 12:00:22.845220656 +0000 UTC m=+0.084541519 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:00:23 compute-1 sudo[186043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbchagrtvsurejgwelaubpdyxnqaixfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406422.951178-2933-258332882656066/AnsiballZ_copy.py'
Oct 02 12:00:23 compute-1 sudo[186043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:23 compute-1 python3.9[186045]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:23 compute-1 sudo[186043]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:23 compute-1 sudo[186195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvrekkkenxbhyxhbviscbeoltcqtbrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406423.596466-2933-247251751120479/AnsiballZ_copy.py'
Oct 02 12:00:23 compute-1 sudo[186195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:24 compute-1 ceph-mon[80926]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:24 compute-1 python3.9[186197]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:24 compute-1 sudo[186195]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:24 compute-1 sudo[186347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtzcotuuldltucarlgwuriyxfsbaxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406424.2505279-2933-213278187041965/AnsiballZ_copy.py'
Oct 02 12:00:24 compute-1 sudo[186347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:24 compute-1 python3.9[186349]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:24 compute-1 sudo[186347]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:25 compute-1 sudo[186499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvgwuoiwurbrwoxncdrerjvkqerxuplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406424.8734398-2933-208973939309469/AnsiballZ_copy.py'
Oct 02 12:00:25 compute-1 sudo[186499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:25 compute-1 python3.9[186501]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:25 compute-1 sudo[186499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:25.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:25 compute-1 sudo[186651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtavgrvdbgxcfanrenccdyfizvmggofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406425.4401803-2933-238080744573887/AnsiballZ_copy.py'
Oct 02 12:00:25 compute-1 sudo[186651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:25 compute-1 python3.9[186653]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:25 compute-1 sudo[186651]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:00:25.901 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:00:26 compute-1 ceph-mon[80926]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:26 compute-1 sudo[186803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzrygcbqkamwudfwouaxcxeoowwfwjnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406426.4894452-3041-234585895689728/AnsiballZ_copy.py'
Oct 02 12:00:26 compute-1 sudo[186803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:26 compute-1 python3.9[186805]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:26 compute-1 sudo[186803]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:27 compute-1 sudo[186955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknyejeltwtdroytldtrwqzefukddtip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406427.0878904-3041-132331409789871/AnsiballZ_copy.py'
Oct 02 12:00:27 compute-1 sudo[186955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:27 compute-1 python3.9[186957]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:27 compute-1 sudo[186955]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:27 compute-1 podman[186958]: 2025-10-02 12:00:27.611950857 +0000 UTC m=+0.042118050 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:00:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:27.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:27 compute-1 sudo[187125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufoifwpcilkwjlivdnrklkcdkhmbeuvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406427.694857-3041-145399710066091/AnsiballZ_copy.py'
Oct 02 12:00:27 compute-1 sudo[187125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:28 compute-1 ceph-mon[80926]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:28 compute-1 python3.9[187127]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:28 compute-1 sudo[187125]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:28 compute-1 sudo[187277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxuwzmnsxmlygcbksslcmlxizibvakwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406428.3594608-3041-159170565502278/AnsiballZ_copy.py'
Oct 02 12:00:28 compute-1 sudo[187277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:28 compute-1 python3.9[187279]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:28 compute-1 sudo[187277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:29 compute-1 sudo[187429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybexaeagqpuevfrizumnfjpmzxoquabf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406429.0299447-3041-126462154018830/AnsiballZ_copy.py'
Oct 02 12:00:29 compute-1 sudo[187429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:29.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:29 compute-1 python3.9[187431]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:29 compute-1 sudo[187429]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:30 compute-1 ceph-mon[80926]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:30 compute-1 sudo[187581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgtuvahlbodqrmxcyjzgjifmfcpwsuxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406430.0931513-3149-22673560061647/AnsiballZ_systemd.py'
Oct 02 12:00:30 compute-1 sudo[187581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:30 compute-1 python3.9[187583]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:30 compute-1 systemd[1]: Reloading.
Oct 02 12:00:30 compute-1 systemd-rc-local-generator[187610]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:30 compute-1 systemd-sysv-generator[187613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:31 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Oct 02 12:00:31 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Oct 02 12:00:31 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 02 12:00:31 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 02 12:00:31 compute-1 systemd[1]: Starting libvirt logging daemon...
Oct 02 12:00:31 compute-1 systemd[1]: Started libvirt logging daemon.
Oct 02 12:00:31 compute-1 sudo[187581]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:31 compute-1 ceph-mon[80926]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:31.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:31 compute-1 sudo[187773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjkhnbpnbqedfxdiezkectlraocyziaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406431.2831564-3149-169467115706216/AnsiballZ_systemd.py'
Oct 02 12:00:31 compute-1 sudo[187773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:31.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:31 compute-1 python3.9[187775]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:31 compute-1 systemd[1]: Reloading.
Oct 02 12:00:32 compute-1 systemd-sysv-generator[187805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:32 compute-1 systemd-rc-local-generator[187802]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:32 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 02 12:00:32 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 02 12:00:32 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 02 12:00:32 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 02 12:00:32 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 02 12:00:32 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 02 12:00:32 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 12:00:32 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 02 12:00:32 compute-1 sudo[187773]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:32 compute-1 sudo[187987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqlsmirehuqdeglddrosmfnprtdwself ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406432.414735-3149-19889261101528/AnsiballZ_systemd.py'
Oct 02 12:00:32 compute-1 sudo[187987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:33 compute-1 python3.9[187989]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:33 compute-1 systemd[1]: Reloading.
Oct 02 12:00:33 compute-1 systemd-rc-local-generator[188013]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:33 compute-1 systemd-sysv-generator[188018]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:33 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 02 12:00:33 compute-1 ceph-mon[80926]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:33.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:33 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 02 12:00:33 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 02 12:00:33 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 02 12:00:33 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 02 12:00:33 compute-1 systemd[1]: Starting libvirt proxy daemon...
Oct 02 12:00:33 compute-1 systemd[1]: Started libvirt proxy daemon.
Oct 02 12:00:33 compute-1 sudo[187987]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:33 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 02 12:00:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:33 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 02 12:00:33 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 02 12:00:33 compute-1 sudo[188203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbwoybfgjqiajzlcwgglcgqtpankzhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406433.5954788-3149-89951576655977/AnsiballZ_systemd.py'
Oct 02 12:00:33 compute-1 sudo[188203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:34 compute-1 python3.9[188206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:34 compute-1 systemd[1]: Reloading.
Oct 02 12:00:34 compute-1 systemd-sysv-generator[188236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:34 compute-1 systemd-rc-local-generator[188233]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:34 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Oct 02 12:00:34 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 02 12:00:34 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 02 12:00:34 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 02 12:00:34 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 02 12:00:34 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 02 12:00:34 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 02 12:00:34 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 02 12:00:34 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 02 12:00:34 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 02 12:00:34 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 12:00:34 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 02 12:00:34 compute-1 sudo[188203]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:34 compute-1 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b1f1a135-9903-412b-a227-3feac8724652
Oct 02 12:00:34 compute-1 setroubleshoot[188026]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 02 12:00:35 compute-1 sudo[188419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jalrodmuqhslpzywxffhccuvmpippnxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406434.8455913-3149-219353895398127/AnsiballZ_systemd.py'
Oct 02 12:00:35 compute-1 sudo[188419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:35.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:35 compute-1 python3.9[188421]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:00:35 compute-1 systemd[1]: Reloading.
Oct 02 12:00:35 compute-1 ceph-mon[80926]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:35 compute-1 systemd-sysv-generator[188451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:00:35 compute-1 systemd-rc-local-generator[188447]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:00:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:35 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Oct 02 12:00:35 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Oct 02 12:00:35 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 02 12:00:35 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 02 12:00:35 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 02 12:00:35 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 02 12:00:35 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 02 12:00:35 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 02 12:00:35 compute-1 sudo[188419]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:36 compute-1 sudo[188628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbkbdymdmwwsiemxggesoggqdpagdkvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406436.7011695-3260-251894483180648/AnsiballZ_file.py'
Oct 02 12:00:36 compute-1 sudo[188628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:37 compute-1 python3.9[188630]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:37 compute-1 sudo[188628]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:37 compute-1 ceph-mon[80926]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:37.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:37 compute-1 sudo[188780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhqaphpxjusfjlcibwwkwblzwvntvhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406437.4815962-3284-6666918340462/AnsiballZ_find.py'
Oct 02 12:00:37 compute-1 sudo[188780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:37 compute-1 python3.9[188782]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:00:38 compute-1 sudo[188780]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:38 compute-1 sudo[188932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxxnvslllrruemruubkoahuxmcozxcmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406438.2237296-3308-127428624795055/AnsiballZ_command.py'
Oct 02 12:00:38 compute-1 sudo[188932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:38 compute-1 python3.9[188934]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:38 compute-1 sudo[188932]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:39 compute-1 ceph-mon[80926]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:39 compute-1 python3.9[189088]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:00:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:39.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:40 compute-1 python3.9[189238]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:40 compute-1 python3.9[189359]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406439.973228-3365-113231454245175/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9d9565ec21a9799171bafbb06d2141d5e5510d7d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:41 compute-1 sudo[189509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puezbjwoukpdgelosazkszzqaykzyuho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406441.2305243-3410-253592739947396/AnsiballZ_command.py'
Oct 02 12:00:41 compute-1 sudo[189509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:41 compute-1 ceph-mon[80926]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:41 compute-1 python3.9[189511]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 20fdc58c-b037-5094-a8ef-d490aa7c36f3
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:41 compute-1 polkitd[6956]: Registered Authentication Agent for unix-process:189513:430529 (system bus name :1.1971 [/usr/bin/pkttyagent --process 189513 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:41 compute-1 polkitd[6956]: Unregistered Authentication Agent for unix-process:189513:430529 (system bus name :1.1971, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:41 compute-1 polkitd[6956]: Registered Authentication Agent for unix-process:189512:430528 (system bus name :1.1972 [/usr/bin/pkttyagent --process 189512 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:41.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:41 compute-1 polkitd[6956]: Unregistered Authentication Agent for unix-process:189512:430528 (system bus name :1.1972, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:41 compute-1 sudo[189509]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:42 compute-1 python3.9[189673]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:43 compute-1 sudo[189823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvhuacfwusrbgtcgkhpzxptnrybyoqfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406442.9602466-3458-233419351120584/AnsiballZ_command.py'
Oct 02 12:00:43 compute-1 sudo[189823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:43 compute-1 sudo[189823]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:43 compute-1 ceph-mon[80926]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:43 compute-1 sudo[189976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktaesnpelzdddpbbjszvezynusvfecbz ; FSID=20fdc58c-b037-5094-a8ef-d490aa7c36f3 KEY=AQBLZd5oAAAAABAAIzZhCjE1jBJ2OFSOmUV6ug== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406443.716509-3482-188364275138526/AnsiballZ_command.py'
Oct 02 12:00:44 compute-1 sudo[189976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:44 compute-1 polkitd[6956]: Registered Authentication Agent for unix-process:189979:430780 (system bus name :1.1975 [/usr/bin/pkttyagent --process 189979 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 02 12:00:44 compute-1 polkitd[6956]: Unregistered Authentication Agent for unix-process:189979:430780 (system bus name :1.1975, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 02 12:00:44 compute-1 sudo[189976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:44 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 02 12:00:44 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 02 12:00:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:45.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:45 compute-1 ceph-mon[80926]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:45 compute-1 sudo[190134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fprdzrdfvcduprfrxwkanylaahmnnijw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406445.462237-3506-2115706904746/AnsiballZ_copy.py'
Oct 02 12:00:45 compute-1 sudo[190134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:45 compute-1 python3.9[190136]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:45 compute-1 sudo[190134]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:46 compute-1 sudo[190286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxtapxfljuorsdvnhnvlltysjywjojig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406446.1559014-3530-28948947166296/AnsiballZ_stat.py'
Oct 02 12:00:46 compute-1 sudo[190286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:46 compute-1 python3.9[190288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:46 compute-1 sudo[190286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:47 compute-1 sudo[190409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxenictjxoujufqtvohcqudrkdifaimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406446.1559014-3530-28948947166296/AnsiballZ_copy.py'
Oct 02 12:00:47 compute-1 sudo[190409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:47 compute-1 python3.9[190411]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406446.1559014-3530-28948947166296/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:47 compute-1 sudo[190409]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:47 compute-1 ceph-mon[80926]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:47 compute-1 sudo[190561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypzgatobbrwrtirtjkorkwqsqppgprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406447.6769297-3578-5770624323916/AnsiballZ_file.py'
Oct 02 12:00:47 compute-1 sudo[190561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:48 compute-1 python3.9[190563]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:48 compute-1 sudo[190561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:48 compute-1 sudo[190713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byrnytehksiscuzugevzlgrememgliiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406448.583293-3602-66802147060038/AnsiballZ_stat.py'
Oct 02 12:00:48 compute-1 sudo[190713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:49 compute-1 python3.9[190715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:49 compute-1 sudo[190713]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-1 sudo[190791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeoxusljdcsaziugfbflegngtsrivwye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406448.583293-3602-66802147060038/AnsiballZ_file.py'
Oct 02 12:00:49 compute-1 sudo[190791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:49 compute-1 python3.9[190793]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:49 compute-1 sudo[190791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:49 compute-1 ceph-mon[80926]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:49 compute-1 sudo[190943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwqdvbhjsotxrgblbfeckqobdkkcyxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406449.7128098-3638-276959250439795/AnsiballZ_stat.py'
Oct 02 12:00:49 compute-1 sudo[190943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:50 compute-1 python3.9[190945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:50 compute-1 sudo[190943]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:50 compute-1 sudo[191021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pciuoekejhobyjxzqxcibqlnugghbsss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406449.7128098-3638-276959250439795/AnsiballZ_file.py'
Oct 02 12:00:50 compute-1 sudo[191021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:50 compute-1 python3.9[191023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.emaho68x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:50 compute-1 sudo[191021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:51 compute-1 sudo[191173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiftgoeolslqghhphzbquzeonzxvclep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406450.9075904-3674-167766724766699/AnsiballZ_stat.py'
Oct 02 12:00:51 compute-1 sudo[191173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:51 compute-1 python3.9[191175]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:51 compute-1 sudo[191173]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:51 compute-1 sudo[191251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qasvesesmskrjvdfeumcmdasieouavoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406450.9075904-3674-167766724766699/AnsiballZ_file.py'
Oct 02 12:00:51 compute-1 sudo[191251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:00:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:00:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:51 compute-1 ceph-mon[80926]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:51 compute-1 python3.9[191253]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:51 compute-1 sudo[191251]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:52 compute-1 sudo[191403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhpdhmqtprwhvmaxaevouythrdyxtwir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406452.287826-3713-122652961201528/AnsiballZ_command.py'
Oct 02 12:00:52 compute-1 sudo[191403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:52 compute-1 python3.9[191405]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:00:52 compute-1 sudo[191403]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:52 compute-1 sudo[191431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:52 compute-1 sudo[191431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:52 compute-1 sudo[191431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-1 sudo[191483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:00:53 compute-1 sudo[191483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-1 sudo[191483]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-1 podman[191456]: 2025-10-02 12:00:53.076854999 +0000 UTC m=+0.093210620 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:00:53 compute-1 sudo[191555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:00:53 compute-1 sudo[191555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-1 sudo[191555]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-1 sudo[191584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:00:53 compute-1 sudo[191584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:00:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:53 compute-1 sudo[191697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wanwwyawbxpvowucdgejrhjkyevsuyat ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406452.9947462-3737-169985278902761/AnsiballZ_edpm_nftables_from_files.py'
Oct 02 12:00:53 compute-1 sudo[191697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:53 compute-1 auditd[703]: Audit daemon rotating log files
Oct 02 12:00:53 compute-1 sudo[191584]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-1 python3[191700]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 02 12:00:53 compute-1 sudo[191697]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:53.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:53 compute-1 ceph-mon[80926]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:54 compute-1 sudo[191866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uochxxkudphhfjujefvjhwrjkmkbqvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406453.9383817-3761-49061414070924/AnsiballZ_stat.py'
Oct 02 12:00:54 compute-1 sudo[191866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:54 compute-1 python3.9[191868]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:54 compute-1 sudo[191866]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:54 compute-1 sudo[191944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cellyurjmpqvqtakwjkhqzyzosttsdha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406453.9383817-3761-49061414070924/AnsiballZ_file.py'
Oct 02 12:00:54 compute-1 sudo[191944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:54 compute-1 python3.9[191946]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:00:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:00:54 compute-1 sudo[191944]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:55 compute-1 sudo[192096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiisrntspoqzbhncnqpgliywyosmeqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406455.2698083-3797-107288695562855/AnsiballZ_stat.py'
Oct 02 12:00:55 compute-1 sudo[192096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:55 compute-1 ceph-mon[80926]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:55 compute-1 python3.9[192098]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:56 compute-1 sudo[192096]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-1 sudo[192174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkvawdkysltjclycrgbowpxsjppitmaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406455.2698083-3797-107288695562855/AnsiballZ_file.py'
Oct 02 12:00:56 compute-1 sudo[192174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:56 compute-1 python3.9[192176]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:56 compute-1 sudo[192174]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:00:57 compute-1 sudo[192326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orfyiclwlwlxzigrqdecesjymemlohvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406456.712444-3833-3661145415817/AnsiballZ_stat.py'
Oct 02 12:00:57 compute-1 sudo[192326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:57 compute-1 python3.9[192328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:57 compute-1 sudo[192326]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:57.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:57 compute-1 sudo[192404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ollmzaecvpgvofpiybnrybdhbtgcjyas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406456.712444-3833-3661145415817/AnsiballZ_file.py'
Oct 02 12:00:57 compute-1 sudo[192404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:57 compute-1 python3.9[192406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:57 compute-1 sudo[192404]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:00:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:57.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:00:57 compute-1 podman[192407]: 2025-10-02 12:00:57.827596779 +0000 UTC m=+0.073550725 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:00:58 compute-1 ceph-mon[80926]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:00:58 compute-1 sudo[192575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbsybxmrsokmiefalkhwlovudmzlbupz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406458.0737805-3869-167750579786654/AnsiballZ_stat.py'
Oct 02 12:00:58 compute-1 sudo[192575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:58 compute-1 python3.9[192577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:58 compute-1 sudo[192575]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:58 compute-1 sudo[192653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vleaealwqzoyyirytmwsjilvtkjgxlhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406458.0737805-3869-167750579786654/AnsiballZ_file.py'
Oct 02 12:00:58 compute-1 sudo[192653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:59 compute-1 python3.9[192655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:00:59 compute-1 sudo[192653]: pam_unix(sudo:session): session closed for user root
Oct 02 12:00:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:00:59.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:59 compute-1 sudo[192805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nckofjkuohhgwrwdncmrjdmavbgguhny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406459.329464-3905-235630255068616/AnsiballZ_stat.py'
Oct 02 12:00:59 compute-1 sudo[192805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:00:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:00:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:00:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:00:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:00:59 compute-1 python3.9[192807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:00:59 compute-1 sudo[192805]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-1 ceph-mon[80926]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:00 compute-1 sudo[192930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meayonqrdtiypdbkfgeybejfymapjhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406459.329464-3905-235630255068616/AnsiballZ_copy.py'
Oct 02 12:01:00 compute-1 sudo[192930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:00 compute-1 python3.9[192932]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759406459.329464-3905-235630255068616/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:00 compute-1 sudo[192930]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-1 sudo[192987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:01:00 compute-1 sudo[192987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:00 compute-1 sudo[192987]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:00 compute-1 sudo[193036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:01:00 compute-1 sudo[193036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:01:00 compute-1 sudo[193036]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:01 compute-1 CROND[193113]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-1 run-parts[193124]: (/etc/cron.hourly) starting 0anacron
Oct 02 12:01:01 compute-1 run-parts[193140]: (/etc/cron.hourly) finished 0anacron
Oct 02 12:01:01 compute-1 CROND[193108]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 12:01:01 compute-1 sudo[193143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boupuyyocevtpvnnhmqirsyaxcddmqyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406460.8581092-3950-249459101988472/AnsiballZ_file.py'
Oct 02 12:01:01 compute-1 sudo[193143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:01 compute-1 python3.9[193145]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:01 compute-1 sudo[193143]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:01.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:01 compute-1 ceph-mon[80926]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:01:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:01.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:01 compute-1 sudo[193295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-becurquesesfbfbjukvouuoguvgslkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406461.59791-3974-232274906518009/AnsiballZ_command.py'
Oct 02 12:01:01 compute-1 sudo[193295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:02 compute-1 python3.9[193297]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:02 compute-1 sudo[193295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:02 compute-1 sudo[193450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtypxgltvhddjxjjnrivtcrqeblmbevn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406462.4081101-3998-19347870949222/AnsiballZ_blockinfile.py'
Oct 02 12:01:02 compute-1 sudo[193450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:03 compute-1 python3.9[193452]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:03 compute-1 sudo[193450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:03.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:03 compute-1 sudo[193602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gslobrcikenwopwtbfsidyzyxdwhfycd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406463.3726547-4025-230013261753667/AnsiballZ_command.py'
Oct 02 12:01:03 compute-1 sudo[193602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:03 compute-1 ceph-mon[80926]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:03.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:03 compute-1 python3.9[193604]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:03 compute-1 sudo[193602]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:04 compute-1 sudo[193755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szyksermpuntiizyzvodilfduieixhij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406464.0989394-4049-247698028117310/AnsiballZ_stat.py'
Oct 02 12:01:04 compute-1 sudo[193755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:04 compute-1 python3.9[193757]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:04 compute-1 sudo[193755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:05 compute-1 sudo[193909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyulwsrduuzrusnigjuvydvdsrkdmsxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406464.9659517-4073-16563163848100/AnsiballZ_command.py'
Oct 02 12:01:05 compute-1 sudo[193909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:05 compute-1 python3.9[193911]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:01:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:05.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:05 compute-1 sudo[193909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:05 compute-1 ceph-mon[80926]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:05.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:05 compute-1 sudo[194064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zggfpybalsqrpyrvtpuayrttxfbsrmkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406465.679571-4097-126899584610331/AnsiballZ_file.py'
Oct 02 12:01:05 compute-1 sudo[194064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:06 compute-1 python3.9[194066]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:06 compute-1 sudo[194064]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:06 compute-1 sudo[194216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtefudvhvbqtrhucnkkgeyfbmnjxjpuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406466.4714315-4121-3569866354244/AnsiballZ_stat.py'
Oct 02 12:01:06 compute-1 sudo[194216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:06 compute-1 python3.9[194218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:06 compute-1 sudo[194216]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:07 compute-1 sudo[194339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxgjckygqkzahrrmzzjvsxkgavzanwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406466.4714315-4121-3569866354244/AnsiballZ_copy.py'
Oct 02 12:01:07 compute-1 sudo[194339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:07 compute-1 python3.9[194341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406466.4714315-4121-3569866354244/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:07 compute-1 sudo[194339]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:07 compute-1 ceph-mon[80926]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:08 compute-1 sudo[194491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgpdtggpjrjthzdyxojbowqeweawfsmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406467.8983197-4166-202496219478751/AnsiballZ_stat.py'
Oct 02 12:01:08 compute-1 sudo[194491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:08 compute-1 python3.9[194493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:08 compute-1 sudo[194491]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:08 compute-1 sudo[194614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgxuardhqewovbrjnsknmkcrmtoopxlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406467.8983197-4166-202496219478751/AnsiballZ_copy.py'
Oct 02 12:01:08 compute-1 sudo[194614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:08 compute-1 python3.9[194616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406467.8983197-4166-202496219478751/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:08 compute-1 sudo[194614]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:09.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:09 compute-1 sudo[194766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozrdqtqbjtmnrvdjaqupevzsfypampx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406469.2594056-4211-25809470708168/AnsiballZ_stat.py'
Oct 02 12:01:09 compute-1 sudo[194766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:09 compute-1 python3.9[194768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:09 compute-1 sudo[194766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:09 compute-1 ceph-mon[80926]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:10 compute-1 sudo[194889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjjuafvugxxxijzeldqwkvintzskkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406469.2594056-4211-25809470708168/AnsiballZ_copy.py'
Oct 02 12:01:10 compute-1 sudo[194889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:10 compute-1 python3.9[194891]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406469.2594056-4211-25809470708168/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:10 compute-1 sudo[194889]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:10 compute-1 sudo[195041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srmfdnnqmzcmsiarwoscbaqxyzfpyoky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406470.7166579-4256-154340023698019/AnsiballZ_systemd.py'
Oct 02 12:01:10 compute-1 sudo[195041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:11 compute-1 python3.9[195043]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:11 compute-1 systemd[1]: Reloading.
Oct 02 12:01:11 compute-1 systemd-rc-local-generator[195068]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:11 compute-1 systemd-sysv-generator[195073]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:11.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:11 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Oct 02 12:01:11 compute-1 sudo[195041]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:11 compute-1 ceph-mon[80926]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:12 compute-1 sudo[195231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzpesforkyiokaerimtpkzshxwfhsncz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406472.0328164-4280-110649274610125/AnsiballZ_systemd.py'
Oct 02 12:01:12 compute-1 sudo[195231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:12 compute-1 python3.9[195233]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 02 12:01:12 compute-1 systemd[1]: Reloading.
Oct 02 12:01:12 compute-1 systemd-rc-local-generator[195259]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:12 compute-1 systemd-sysv-generator[195262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:13 compute-1 systemd[1]: Reloading.
Oct 02 12:01:13 compute-1 systemd-rc-local-generator[195296]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:13 compute-1 systemd-sysv-generator[195299]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:13 compute-1 sudo[195231]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:13.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:13 compute-1 ceph-mon[80926]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:13 compute-1 sshd-session[138542]: Connection closed by 192.168.122.30 port 40710
Oct 02 12:01:13 compute-1 sshd-session[138539]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:01:13 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Oct 02 12:01:13 compute-1 systemd[1]: session-49.scope: Consumed 3min 24.005s CPU time.
Oct 02 12:01:13 compute-1 systemd-logind[795]: Session 49 logged out. Waiting for processes to exit.
Oct 02 12:01:13 compute-1 systemd-logind[795]: Removed session 49.
Oct 02 12:01:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:15.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:15 compute-1 ceph-mon[80926]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:17.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:17 compute-1 ceph-mon[80926]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:18 compute-1 sshd-session[195329]: Accepted publickey for zuul from 192.168.122.30 port 35142 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 12:01:18 compute-1 systemd-logind[795]: New session 50 of user zuul.
Oct 02 12:01:18 compute-1 systemd[1]: Started Session 50 of User zuul.
Oct 02 12:01:18 compute-1 sshd-session[195329]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 12:01:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:19.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:19 compute-1 python3.9[195482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 12:01:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:01:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:01:19 compute-1 ceph-mon[80926]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:21 compute-1 sudo[195636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhsissglkfkvypyrtuaitxrjucrcewm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406480.596452-68-249867291167037/AnsiballZ_file.py'
Oct 02 12:01:21 compute-1 sudo[195636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:21 compute-1 python3.9[195638]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:21 compute-1 sudo[195636]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:21 compute-1 sudo[195788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsayrrhxztxhwyjfmrzfpevmmnwjeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406481.4387283-68-41405979342230/AnsiballZ_file.py'
Oct 02 12:01:21 compute-1 sudo[195788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:21.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:21 compute-1 python3.9[195790]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:21 compute-1 sudo[195788]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:21 compute-1 ceph-mon[80926]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:22 compute-1 sudo[195940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bapdcwbaydcfmwlhgwklkuivmdqgchqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406482.061793-68-104304685084787/AnsiballZ_file.py'
Oct 02 12:01:22 compute-1 sudo[195940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:22 compute-1 python3.9[195942]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:22 compute-1 sudo[195940]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:22 compute-1 sudo[196092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofclvvvngvjwxzeqpjksczwgdflanbbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406482.7021823-68-232430955783605/AnsiballZ_file.py'
Oct 02 12:01:22 compute-1 sudo[196092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:23 compute-1 python3.9[196094]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:01:23 compute-1 sudo[196092]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:23 compute-1 podman[196095]: 2025-10-02 12:01:23.255561051 +0000 UTC m=+0.106647361 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:23 compute-1 sudo[196270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgifjikuynhummriwjmzqjbexwkyubl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406483.289698-68-189521207155325/AnsiballZ_file.py'
Oct 02 12:01:23 compute-1 sudo[196270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:23 compute-1 python3.9[196272]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:23 compute-1 sudo[196270]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:23 compute-1 ceph-mon[80926]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:24 compute-1 sudo[196422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchmfuzrjcbvgecwgywfiiovjjpnqqdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406484.2477455-176-191836621900058/AnsiballZ_stat.py'
Oct 02 12:01:24 compute-1 sudo[196422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:24 compute-1 python3.9[196424]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:24 compute-1 sudo[196422]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:25.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:25 compute-1 sudo[196576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkasvqkssjzlenhqjzsxgztbnsslhzmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406485.24334-200-102200128349287/AnsiballZ_systemd.py'
Oct 02 12:01:25 compute-1 sudo[196576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:01:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:01:26 compute-1 ceph-mon[80926]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:26 compute-1 python3.9[196578]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:26 compute-1 systemd[1]: Reloading.
Oct 02 12:01:26 compute-1 systemd-sysv-generator[196610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:26 compute-1 systemd-rc-local-generator[196607]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:26 compute-1 sudo[196576]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:27 compute-1 sudo[196764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkkorbnlkrpoifmwmqvgdydrfhkbldmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406486.8324285-224-217930462902588/AnsiballZ_service_facts.py'
Oct 02 12:01:27 compute-1 sudo[196764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:27 compute-1 python3.9[196766]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:01:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:27 compute-1 network[196783]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:01:27 compute-1 network[196784]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:01:27 compute-1 network[196785]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:01:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:27.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:28 compute-1 ceph-mon[80926]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:28 compute-1 podman[196791]: 2025-10-02 12:01:28.457050677 +0000 UTC m=+0.053320791 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:01:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:29.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:29.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:30 compute-1 ceph-mon[80926]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:30 compute-1 sudo[196764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:31.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:31 compute-1 sudo[197075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iccqfokhwprvsqqexaokewgboytvokho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406491.3767478-248-222126107258683/AnsiballZ_systemd.py'
Oct 02 12:01:31 compute-1 sudo[197075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:31.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:31 compute-1 python3.9[197077]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:32 compute-1 systemd[1]: Reloading.
Oct 02 12:01:32 compute-1 systemd-rc-local-generator[197106]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:32 compute-1 systemd-sysv-generator[197109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:32 compute-1 ceph-mon[80926]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:32 compute-1 sudo[197075]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:33 compute-1 python3.9[197263]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:33.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:33.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:33 compute-1 sudo[197413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhbbtvitepkrmlzwuhilzltihqmywog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406493.4405963-299-182308100077813/AnsiballZ_podman_container.py'
Oct 02 12:01:33 compute-1 sudo[197413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:34 compute-1 python3.9[197415]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:01:34 compute-1 ceph-mon[80926]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:34 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:01:34 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:01:35 compute-1 ceph-mon[80926]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:35.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:35 compute-1 podman[197427]: 2025-10-02 12:01:35.515963812 +0000 UTC m=+1.296068030 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:01:35 compute-1 podman[197486]: 2025-10-02 12:01:35.645124676 +0000 UTC m=+0.043483492 container create 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.6635] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 02 12:01:35 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 12:01:35 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:35 compute-1 kernel: veth0: entered allmulticast mode
Oct 02 12:01:35 compute-1 kernel: veth0: entered promiscuous mode
Oct 02 12:01:35 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 02 12:01:35 compute-1 kernel: podman0: port 1(veth0) entered forwarding state
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.6810] device (veth0): carrier: link connected
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.6812] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.6818] device (podman0): carrier: link connected
Oct 02 12:01:35 compute-1 systemd-udevd[197514]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:35 compute-1 systemd-udevd[197518]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7109] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7116] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7122] device (podman0): Activation: starting connection 'podman0' (9089a157-fe13-4347-90b3-a7891003e76b)
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7142] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7144] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7145] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7146] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 podman[197486]: 2025-10-02 12:01:35.625757291 +0000 UTC m=+0.024116137 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:01:35 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 02 12:01:35 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7405] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7408] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 02 12:01:35 compute-1 NetworkManager[44960]: <info>  [1759406495.7415] device (podman0): Activation: successful, device activated.
Oct 02 12:01:35 compute-1 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 02 12:01:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:35.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:35 compute-1 systemd[1]: Started libpod-conmon-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope.
Oct 02 12:01:35 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:01:35 compute-1 podman[197486]: 2025-10-02 12:01:35.980019045 +0000 UTC m=+0.378377881 container init 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:01:35 compute-1 podman[197486]: 2025-10-02 12:01:35.988216172 +0000 UTC m=+0.386574988 container start 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:01:35 compute-1 iscsid_config[197643]: iqn.1994-05.com.redhat:d783e47ecf
Oct 02 12:01:35 compute-1 systemd[1]: libpod-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope: Deactivated successfully.
Oct 02 12:01:36 compute-1 podman[197486]: 2025-10-02 12:01:36.006416161 +0000 UTC m=+0.404774997 container attach 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:01:36 compute-1 podman[197486]: 2025-10-02 12:01:36.008221318 +0000 UTC m=+0.406580134 container died 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:36 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:36 compute-1 kernel: veth0 (unregistering): left allmulticast mode
Oct 02 12:01:36 compute-1 kernel: veth0 (unregistering): left promiscuous mode
Oct 02 12:01:36 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 02 12:01:36 compute-1 NetworkManager[44960]: <info>  [1759406496.1104] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:01:36 compute-1 systemd[1]: run-netns-netns\x2dba03a981\x2db8af\x2d541a\x2d970f\x2d5f3cb3a85b55.mount: Deactivated successfully.
Oct 02 12:01:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646-userdata-shm.mount: Deactivated successfully.
Oct 02 12:01:36 compute-1 podman[197486]: 2025-10-02 12:01:36.515295167 +0000 UTC m=+0.913653983 container remove 26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:36 compute-1 python3.9[197415]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid:current-podified /usr/sbin/iscsi-iname
Oct 02 12:01:36 compute-1 systemd[1]: libpod-conmon-26643785765ecd6de8527d6cd0efbe37fd3978e3ca22734b35e058ba26d4d646.scope: Deactivated successfully.
Oct 02 12:01:36 compute-1 python3.9[197415]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 02 12:01:36 compute-1 sudo[197413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-18a9268dc48256ea007b3d40fce05292ef3b4d3f1b2689cc7f82d07808bb7a33-merged.mount: Deactivated successfully.
Oct 02 12:01:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:37 compute-1 ceph-mon[80926]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:37.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:38 compute-1 sudo[197882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivwvljqrqczlsnytqjdolwoaxhmdauk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406498.184692-323-233070866336056/AnsiballZ_stat.py'
Oct 02 12:01:38 compute-1 sudo[197882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:38 compute-1 python3.9[197884]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:38 compute-1 sudo[197882]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-1 sudo[198005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilunwiueapxqjzeruruvztcfxmygpxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406498.184692-323-233070866336056/AnsiballZ_copy.py'
Oct 02 12:01:39 compute-1 sudo[198005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:39 compute-1 python3.9[198007]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406498.184692-323-233070866336056/.source.iscsi _original_basename=.fk2hqacq follow=False checksum=d140a8b25ccedd64545d8857068a83f3cc83c4ae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:39 compute-1 sudo[198005]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:39.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:39 compute-1 ceph-mon[80926]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:39.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:39 compute-1 sudo[198157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnqmhwjwqpeaqdmihlyxgxcdbhhgcuym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406499.682954-368-167506623384388/AnsiballZ_file.py'
Oct 02 12:01:39 compute-1 sudo[198157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:40 compute-1 python3.9[198159]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:40 compute-1 sudo[198157]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:40 compute-1 python3.9[198309]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:01:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:41.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:41 compute-1 ceph-mon[80926]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:41 compute-1 sudo[198461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgtsvdswyjsddlngmkhlxtmsbiyhvtoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406501.2120655-419-32172562931961/AnsiballZ_lineinfile.py'
Oct 02 12:01:41 compute-1 sudo[198461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:41 compute-1 python3.9[198463]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:41 compute-1 sudo[198461]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:42 compute-1 sudo[198613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomjsfwpmtayinvoacdcbdzepswauseq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406502.2678964-446-100756983195428/AnsiballZ_file.py'
Oct 02 12:01:42 compute-1 sudo[198613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:42 compute-1 python3.9[198615]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:42 compute-1 sudo[198613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:43 compute-1 sudo[198765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vesvktejhcqouwpvzrqewnhgxwomrnrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406503.0462315-470-220849141154986/AnsiballZ_stat.py'
Oct 02 12:01:43 compute-1 sudo[198765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:43.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:43 compute-1 python3.9[198767]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:43 compute-1 ceph-mon[80926]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:43 compute-1 sudo[198765]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:43 compute-1 sudo[198843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsqwafpuvacmlsqopfggqclveuzvpclq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406503.0462315-470-220849141154986/AnsiballZ_file.py'
Oct 02 12:01:43 compute-1 sudo[198843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:43 compute-1 python3.9[198845]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:43 compute-1 sudo[198843]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:44 compute-1 sudo[198995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzecypbawabuygovgrwkcdtwxxnkyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406504.0777957-470-206946537087372/AnsiballZ_stat.py'
Oct 02 12:01:44 compute-1 sudo[198995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:44 compute-1 python3.9[198997]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:44 compute-1 sudo[198995]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.692256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504692323, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1520, "num_deletes": 501, "total_data_size": 2980063, "memory_usage": 3024888, "flush_reason": "Manual Compaction"}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504712636, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1162013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14538, "largest_seqno": 16053, "table_properties": {"data_size": 1157196, "index_size": 1765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14875, "raw_average_key_size": 19, "raw_value_size": 1144903, "raw_average_value_size": 1465, "num_data_blocks": 81, "num_entries": 781, "num_filter_entries": 781, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406390, "oldest_key_time": 1759406390, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 20495 microseconds, and 4147 cpu microseconds.
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.712778) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1162013 bytes OK
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.712802) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716329) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716378) EVENT_LOG_v1 {"time_micros": 1759406504716367, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.716402) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2972066, prev total WAL file size 2972066, number of live WAL files 2.
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.717354) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1134KB)], [27(10MB)]
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504717411, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 12690614, "oldest_snapshot_seqno": -1}
Oct 02 12:01:44 compute-1 sudo[199073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzgjyzhudhlpxugwfnxxufvsbkleevz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406504.0777957-470-206946537087372/AnsiballZ_file.py'
Oct 02 12:01:44 compute-1 sudo[199073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4150 keys, 7897474 bytes, temperature: kUnknown
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504800109, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 7897474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7868425, "index_size": 17547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 102626, "raw_average_key_size": 24, "raw_value_size": 7792014, "raw_average_value_size": 1877, "num_data_blocks": 739, "num_entries": 4150, "num_filter_entries": 4150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.800327) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7897474 bytes
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.804314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 95.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(17.7) write-amplify(6.8) OK, records in: 5114, records dropped: 964 output_compression: NoCompression
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.804335) EVENT_LOG_v1 {"time_micros": 1759406504804326, "job": 14, "event": "compaction_finished", "compaction_time_micros": 82757, "compaction_time_cpu_micros": 17412, "output_level": 6, "num_output_files": 1, "total_output_size": 7897474, "num_input_records": 5114, "num_output_records": 4150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504804594, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406504806245, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.717218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:01:44.806307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:01:44 compute-1 python3.9[199075]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:45 compute-1 sudo[199073]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:01:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:45 compute-1 ceph-mon[80926]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:45.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:46 compute-1 sudo[199225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfhkigqnerzrukzfpdyjsmvrbuealfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406505.8329136-539-158611274802652/AnsiballZ_file.py'
Oct 02 12:01:46 compute-1 sudo[199225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:46 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 02 12:01:46 compute-1 python3.9[199227]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:46 compute-1 sudo[199225]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:46 compute-1 sudo[199377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdedtbhlannsrgmuraiammavxrgqrkbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406506.6899178-563-259913652579497/AnsiballZ_stat.py'
Oct 02 12:01:46 compute-1 sudo[199377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:47 compute-1 python3.9[199379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:47 compute-1 sudo[199377]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:47 compute-1 sudo[199455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabcvfzfxsftdswphlwfkadtddbxjmep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406506.6899178-563-259913652579497/AnsiballZ_file.py'
Oct 02 12:01:47 compute-1 sudo[199455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:47 compute-1 python3.9[199457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:47 compute-1 sudo[199455]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:47 compute-1 ceph-mon[80926]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:01:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:01:48 compute-1 sudo[199607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvaeuvemohjhlhxmwzlqjmziwenpoxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406508.0891023-600-5248122739883/AnsiballZ_stat.py'
Oct 02 12:01:48 compute-1 sudo[199607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:48 compute-1 python3.9[199609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:48 compute-1 sudo[199607]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:48 compute-1 sudo[199685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgguyioaxrnytkzlrnbofddiixlujkuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406508.0891023-600-5248122739883/AnsiballZ_file.py'
Oct 02 12:01:48 compute-1 sudo[199685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:48 compute-1 python3.9[199687]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:49 compute-1 sudo[199685]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:49.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:49 compute-1 sudo[199837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdxmlujewrguomjjhbjlcimufnouidpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406509.345617-635-261528596972506/AnsiballZ_systemd.py'
Oct 02 12:01:49 compute-1 sudo[199837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:49 compute-1 ceph-mon[80926]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:50 compute-1 python3.9[199839]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:50 compute-1 systemd[1]: Reloading.
Oct 02 12:01:50 compute-1 systemd-rc-local-generator[199867]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:50 compute-1 systemd-sysv-generator[199871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:50 compute-1 sudo[199837]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:50 compute-1 sudo[200026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivklehifdjfawkukistmjtohbrinmlct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406510.7368977-659-63958869781893/AnsiballZ_stat.py'
Oct 02 12:01:50 compute-1 sudo[200026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:51 compute-1 python3.9[200028]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:51 compute-1 sudo[200026]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:51 compute-1 sudo[200104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmraimpttfczfbnvrrccwgpdelwbioan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406510.7368977-659-63958869781893/AnsiballZ_file.py'
Oct 02 12:01:51 compute-1 sudo[200104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:01:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:51.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:01:51 compute-1 python3.9[200106]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:51 compute-1 sudo[200104]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:51 compute-1 ceph-mon[80926]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:52 compute-1 sudo[200256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujchfcdkrsbdlxnrlmfqskhnwjcslgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406511.9997065-695-35054718110868/AnsiballZ_stat.py'
Oct 02 12:01:52 compute-1 sudo[200256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:52 compute-1 python3.9[200258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:52 compute-1 sudo[200256]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:52 compute-1 sudo[200334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amcskhogmuheixjmawieeycqszuqhioh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406511.9997065-695-35054718110868/AnsiballZ_file.py'
Oct 02 12:01:52 compute-1 sudo[200334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:52 compute-1 python3.9[200336]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:52 compute-1 sudo[200334]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:53.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:53 compute-1 sudo[200497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpaporrwcapmyovgvpazzscccdmthxql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406513.304013-731-19208498911004/AnsiballZ_systemd.py'
Oct 02 12:01:53 compute-1 sudo[200497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:53 compute-1 podman[200460]: 2025-10-02 12:01:53.633336675 +0000 UTC m=+0.093574621 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:01:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:53 compute-1 python3.9[200505]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:01:53 compute-1 systemd[1]: Reloading.
Oct 02 12:01:53 compute-1 ceph-mon[80926]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:53 compute-1 systemd-rc-local-generator[200541]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:01:53 compute-1 systemd-sysv-generator[200544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:01:54 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 12:01:54 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 12:01:54 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 12:01:54 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 12:01:54 compute-1 sudo[200497]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:55 compute-1 sudo[200705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oupkkccsbnggvgbqjeckpffrhppzwmmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406514.774686-761-150571756522534/AnsiballZ_file.py'
Oct 02 12:01:55 compute-1 sudo[200705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:55 compute-1 python3.9[200707]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:55 compute-1 sudo[200705]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:55.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:55 compute-1 sudo[200857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agjwmksqzpbysntauvvnqncqvtsebrkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406515.5073578-785-166770813296322/AnsiballZ_stat.py'
Oct 02 12:01:55 compute-1 sudo[200857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:55.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:55 compute-1 ceph-mon[80926]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:55 compute-1 python3.9[200859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:55 compute-1 sudo[200857]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:56 compute-1 sudo[200980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxuhmqcbbomdefkzxisrovqgdgzwqqvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406515.5073578-785-166770813296322/AnsiballZ_copy.py'
Oct 02 12:01:56 compute-1 sudo[200980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:56 compute-1 python3.9[200982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406515.5073578-785-166770813296322/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:56 compute-1 sudo[200980]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:01:57 compute-1 sudo[201132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxuinmdypsifjntqcnxfqkocfjnqpbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.0898364-836-206675755983125/AnsiballZ_file.py'
Oct 02 12:01:57 compute-1 sudo[201132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:57.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:57 compute-1 python3.9[201134]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:01:57 compute-1 sudo[201132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:57.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:57 compute-1 ceph-mon[80926]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:01:58 compute-1 sudo[201284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grrsoqlltliajweuxhdzoqrvdobzlbsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.9081368-860-211046296052695/AnsiballZ_stat.py'
Oct 02 12:01:58 compute-1 sudo[201284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:58 compute-1 python3.9[201286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:01:58 compute-1 sudo[201284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:58 compute-1 sudo[201418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nntqmbkqlbiwhlbvtfxzgvdzwfmstfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406517.9081368-860-211046296052695/AnsiballZ_copy.py'
Oct 02 12:01:58 compute-1 sudo[201418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:58 compute-1 podman[201381]: 2025-10-02 12:01:58.723621026 +0000 UTC m=+0.058962702 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:01:58 compute-1 python3.9[201425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406517.9081368-860-211046296052695/.source.json _original_basename=.uxz08149 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:58 compute-1 sudo[201418]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-1 sudo[201576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzdognnzgwefscuqljypkjosxgsvyntt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406519.1799786-905-145634367742728/AnsiballZ_file.py'
Oct 02 12:01:59 compute-1 sudo[201576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:01:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:01:59.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:59 compute-1 python3.9[201578]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:01:59 compute-1 sudo[201576]: pam_unix(sudo:session): session closed for user root
Oct 02 12:01:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:01:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:01:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:01:59.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:01:59 compute-1 ceph-mon[80926]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:00 compute-1 sudo[201728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlhbzjauootoptwrikgqtusahjyyvrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406520.019289-929-241476058144640/AnsiballZ_stat.py'
Oct 02 12:02:00 compute-1 sudo[201728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:00 compute-1 sudo[201728]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:00 compute-1 sudo[201851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doceqdgerctokkqxfueypeymrbknlbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406520.019289-929-241476058144640/AnsiballZ_copy.py'
Oct 02 12:02:00 compute-1 sudo[201851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:01 compute-1 sudo[201851]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-1 sudo[201878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:01 compute-1 sudo[201878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-1 sudo[201878]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-1 sudo[201903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:02:01 compute-1 sudo[201903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-1 sudo[201903]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-1 sudo[201928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:01 compute-1 sudo[201928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-1 sudo[201928]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-1 sudo[201953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:02:01 compute-1 sudo[201953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:01 compute-1 sudo[201953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:01 compute-1 sudo[202134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdbxryhytecdaxgrhwzteyodixnmcvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406521.400312-980-37136259724304/AnsiballZ_container_config_data.py'
Oct 02 12:02:01 compute-1 sudo[202134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:02:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:02:01 compute-1 ceph-mon[80926]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:02:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:02:02 compute-1 python3.9[202136]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 02 12:02:02 compute-1 sudo[202134]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:02 compute-1 sudo[202286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyfnwssrbermtnfrfvtkanbxbodrywjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406522.2670329-1007-236259967163161/AnsiballZ_container_config_hash.py'
Oct 02 12:02:02 compute-1 sudo[202286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:02 compute-1 python3.9[202288]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:02:02 compute-1 sudo[202286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:03 compute-1 sudo[202438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sazlzkbonbapodhxreybwntokrxdmnif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406523.317806-1034-67123987436277/AnsiballZ_podman_container_info.py'
Oct 02 12:02:03 compute-1 sudo[202438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:03.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:03 compute-1 python3.9[202440]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 12:02:04 compute-1 ceph-mon[80926]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:04 compute-1 sudo[202438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:06 compute-1 ceph-mon[80926]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:06 compute-1 sudo[202617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udphvjfvnjdrulmmllnezmbxsgpnskon ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406525.6409607-1073-242276004937529/AnsiballZ_edpm_container_manage.py'
Oct 02 12:02:06 compute-1 sudo[202617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:06 compute-1 python3[202619]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:02:06 compute-1 podman[202658]: 2025-10-02 12:02:06.561226744 +0000 UTC m=+0.046465330 container create 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=iscsid, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:06 compute-1 podman[202658]: 2025-10-02 12:02:06.540266847 +0000 UTC m=+0.025505453 image pull 1b3fd7f2436e5c6f2e28c01b83721476c7b295789c77b3d63e30f49404389ea1 quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:02:06 compute-1 python3[202619]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid:current-podified
Oct 02 12:02:06 compute-1 sudo[202617]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:07 compute-1 sudo[202846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afgpmyabkbhwuyhxxjkcgbyvubjrbkba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406527.1784084-1097-123292164194045/AnsiballZ_stat.py'
Oct 02 12:02:07 compute-1 sudo[202846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:07 compute-1 python3.9[202848]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:07 compute-1 sudo[202846]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:08 compute-1 ceph-mon[80926]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:08 compute-1 sudo[203000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epkrblpqhndraentractsvkgdbtbwxal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.081581-1124-115019016854479/AnsiballZ_file.py'
Oct 02 12:02:08 compute-1 sudo[203000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:08 compute-1 sudo[203003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:02:08 compute-1 sudo[203003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:08 compute-1 sudo[203003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-1 python3.9[203002]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:08 compute-1 sudo[203000]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-1 sudo[203028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:02:08 compute-1 sudo[203028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:02:08 compute-1 sudo[203028]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:08 compute-1 sudo[203126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uholvyxbegwmmwurqkrokwqwpmxpiaeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.081581-1124-115019016854479/AnsiballZ_stat.py'
Oct 02 12:02:08 compute-1 sudo[203126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:08 compute-1 python3.9[203128]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:08 compute-1 sudo[203126]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:02:09 compute-1 sudo[203277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpnfiisjmcivqxofqjihuqkooaywvhkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.996802-1124-41831447520668/AnsiballZ_copy.py'
Oct 02 12:02:09 compute-1 sudo[203277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:09 compute-1 python3.9[203279]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406528.996802-1124-41831447520668/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:09 compute-1 sudo[203277]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:09 compute-1 sudo[203353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqxmfguybajqghtyczusbgidbjfwbaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.996802-1124-41831447520668/AnsiballZ_systemd.py'
Oct 02 12:02:09 compute-1 sudo[203353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:09.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:10 compute-1 python3.9[203355]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:02:10 compute-1 systemd[1]: Reloading.
Oct 02 12:02:10 compute-1 systemd-rc-local-generator[203381]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:10 compute-1 systemd-sysv-generator[203384]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:10 compute-1 ceph-mon[80926]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:10 compute-1 sudo[203353]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:10 compute-1 sudo[203463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-losorzvzouhgdssopqagqvtajiwgxsxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406528.996802-1124-41831447520668/AnsiballZ_systemd.py'
Oct 02 12:02:10 compute-1 sudo[203463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:11 compute-1 python3.9[203465]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:11 compute-1 systemd[1]: Reloading.
Oct 02 12:02:11 compute-1 systemd-rc-local-generator[203493]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:11 compute-1 systemd-sysv-generator[203496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:11 compute-1 ceph-mon[80926]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:11 compute-1 systemd[1]: Starting iscsid container...
Oct 02 12:02:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:11.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:11 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e74b51f65e3b362198b6608a9dfaa74005262918f6cba2ec69c451aa36d0aec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:02:11 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9.
Oct 02 12:02:11 compute-1 podman[203505]: 2025-10-02 12:02:11.586333119 +0000 UTC m=+0.120364712 container init 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, container_name=iscsid)
Oct 02 12:02:11 compute-1 iscsid[203521]: + sudo -E kolla_set_configs
Oct 02 12:02:11 compute-1 sudo[203527]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:02:11 compute-1 podman[203505]: 2025-10-02 12:02:11.620708539 +0000 UTC m=+0.154740132 container start 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:02:11 compute-1 podman[203505]: iscsid
Oct 02 12:02:11 compute-1 systemd[1]: Started iscsid container.
Oct 02 12:02:11 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 02 12:02:11 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 02 12:02:11 compute-1 sudo[203463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:11 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 02 12:02:11 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 02 12:02:11 compute-1 systemd[203547]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 02 12:02:11 compute-1 podman[203528]: 2025-10-02 12:02:11.69682985 +0000 UTC m=+0.062962959 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:02:11 compute-1 systemd[1]: 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9-efbaf9eaede2ddd.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 12:02:11 compute-1 systemd[1]: 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9-efbaf9eaede2ddd.service: Failed with result 'exit-code'.
Oct 02 12:02:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:11 compute-1 systemd[203547]: Queued start job for default target Main User Target.
Oct 02 12:02:11 compute-1 systemd[203547]: Created slice User Application Slice.
Oct 02 12:02:11 compute-1 systemd[203547]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 02 12:02:11 compute-1 systemd[203547]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:02:11 compute-1 systemd[203547]: Reached target Paths.
Oct 02 12:02:11 compute-1 systemd[203547]: Reached target Timers.
Oct 02 12:02:11 compute-1 systemd[203547]: Starting D-Bus User Message Bus Socket...
Oct 02 12:02:11 compute-1 systemd[203547]: Starting Create User's Volatile Files and Directories...
Oct 02 12:02:11 compute-1 systemd[203547]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:02:11 compute-1 systemd[203547]: Reached target Sockets.
Oct 02 12:02:11 compute-1 systemd[203547]: Finished Create User's Volatile Files and Directories.
Oct 02 12:02:11 compute-1 systemd[203547]: Reached target Basic System.
Oct 02 12:02:11 compute-1 systemd[203547]: Reached target Main User Target.
Oct 02 12:02:11 compute-1 systemd[203547]: Startup finished in 120ms.
Oct 02 12:02:11 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 02 12:02:11 compute-1 systemd[1]: Started Session c3 of User root.
Oct 02 12:02:11 compute-1 sudo[203527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:02:11 compute-1 iscsid[203521]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:02:11 compute-1 iscsid[203521]: INFO:__main__:Validating config file
Oct 02 12:02:11 compute-1 iscsid[203521]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:02:11 compute-1 iscsid[203521]: INFO:__main__:Writing out command to execute
Oct 02 12:02:11 compute-1 sudo[203527]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:11 compute-1 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 02 12:02:11 compute-1 iscsid[203521]: ++ cat /run_command
Oct 02 12:02:11 compute-1 iscsid[203521]: + CMD='/usr/sbin/iscsid -f'
Oct 02 12:02:11 compute-1 iscsid[203521]: + ARGS=
Oct 02 12:02:11 compute-1 iscsid[203521]: + sudo kolla_copy_cacerts
Oct 02 12:02:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:11 compute-1 sudo[203591]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:02:11 compute-1 systemd[1]: Started Session c4 of User root.
Oct 02 12:02:11 compute-1 sudo[203591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:02:11 compute-1 sudo[203591]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:11 compute-1 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 02 12:02:11 compute-1 iscsid[203521]: + [[ ! -n '' ]]
Oct 02 12:02:11 compute-1 iscsid[203521]: + . kolla_extend_start
Oct 02 12:02:11 compute-1 iscsid[203521]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 02 12:02:11 compute-1 iscsid[203521]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 02 12:02:11 compute-1 iscsid[203521]: Running command: '/usr/sbin/iscsid -f'
Oct 02 12:02:11 compute-1 iscsid[203521]: + umask 0022
Oct 02 12:02:11 compute-1 iscsid[203521]: + exec /usr/sbin/iscsid -f
Oct 02 12:02:11 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Oct 02 12:02:13 compute-1 python3.9[203727]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:13 compute-1 ceph-mon[80926]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:13.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:13 compute-1 sudo[203877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjantgrgecstmkcxwbbmmwibwgrsvnea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406533.5046766-1235-181158978010423/AnsiballZ_file.py'
Oct 02 12:02:13 compute-1 sudo[203877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:13 compute-1 python3.9[203879]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:14 compute-1 sudo[203877]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:14 compute-1 sudo[204029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oznkljdwbaprpzmqdcvmurbimjxrcjcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406534.6292074-1268-249778317787409/AnsiballZ_service_facts.py'
Oct 02 12:02:14 compute-1 sudo[204029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:15 compute-1 python3.9[204031]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:02:15 compute-1 network[204048]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:02:15 compute-1 network[204049]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:02:15 compute-1 network[204050]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:02:15 compute-1 ceph-mon[80926]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:02:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:02:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:15.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:17 compute-1 ceph-mon[80926]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:17.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:17.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:18 compute-1 sudo[204029]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:19 compute-1 ceph-mon[80926]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:19.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:19 compute-1 sudo[204323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktipcbudmxsyfropcfclwwmujycbaibf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406539.6686482-1298-59252295710233/AnsiballZ_file.py'
Oct 02 12:02:19 compute-1 sudo[204323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:20 compute-1 python3.9[204325]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:02:20 compute-1 sudo[204323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:20 compute-1 sudo[204475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuqwvazvchcpoxijowphanviecsjtmag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406540.4926476-1322-9200690906670/AnsiballZ_modprobe.py'
Oct 02 12:02:20 compute-1 sudo[204475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:21 compute-1 python3.9[204477]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 02 12:02:21 compute-1 sudo[204475]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:21 compute-1 ceph-mon[80926]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:21.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:21 compute-1 sudo[204631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxxfspfwzrauomocbqdrsqygojhxsbip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406541.505878-1346-28758419086455/AnsiballZ_stat.py'
Oct 02 12:02:21 compute-1 sudo[204631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:21.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:21 compute-1 python3.9[204633]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:21 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 02 12:02:21 compute-1 systemd[203547]: Activating special unit Exit the Session...
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped target Main User Target.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped target Basic System.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped target Paths.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped target Sockets.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped target Timers.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:02:21 compute-1 systemd[203547]: Closed D-Bus User Message Bus Socket.
Oct 02 12:02:21 compute-1 systemd[203547]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:02:21 compute-1 systemd[203547]: Removed slice User Application Slice.
Oct 02 12:02:21 compute-1 systemd[203547]: Reached target Shutdown.
Oct 02 12:02:21 compute-1 systemd[203547]: Finished Exit the Session.
Oct 02 12:02:21 compute-1 systemd[203547]: Reached target Exit the Session.
Oct 02 12:02:21 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 02 12:02:21 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 02 12:02:21 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 02 12:02:21 compute-1 sudo[204631]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:22 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 02 12:02:22 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 02 12:02:22 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 02 12:02:22 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 02 12:02:22 compute-1 sudo[204756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxaivlfpfpvpvesjypuatzcgtepcohf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406541.505878-1346-28758419086455/AnsiballZ_copy.py'
Oct 02 12:02:22 compute-1 sudo[204756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:22 compute-1 python3.9[204758]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406541.505878-1346-28758419086455/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:22 compute-1 sudo[204756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:23 compute-1 sudo[204908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khcdzhvnqpkonkqqlujgtqamtwtbackb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406542.9618964-1394-74043126753342/AnsiballZ_lineinfile.py'
Oct 02 12:02:23 compute-1 sudo[204908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:23 compute-1 ceph-mon[80926]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:23 compute-1 python3.9[204910]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:23 compute-1 sudo[204908]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:23.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:23 compute-1 podman[204935]: 2025-10-02 12:02:23.858240444 +0000 UTC m=+0.108576442 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:02:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:23.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:24 compute-1 sudo[205087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maxsxbvzbppleyfuihewgfrosiddjjqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406543.8036413-1418-70839580191753/AnsiballZ_systemd.py'
Oct 02 12:02:24 compute-1 sudo[205087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:24 compute-1 python3.9[205089]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:02:24 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 12:02:24 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 02 12:02:24 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 02 12:02:24 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 02 12:02:24 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 02 12:02:24 compute-1 sudo[205087]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:25 compute-1 sudo[205243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxowixtaclxqihszacefpgnzjrjutkqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406544.85156-1442-201015948569/AnsiballZ_file.py'
Oct 02 12:02:25 compute-1 sudo[205243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:25 compute-1 python3.9[205245]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:25 compute-1 sudo[205243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:25 compute-1 ceph-mon[80926]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:25.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:25.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.902 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:02:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:02:26 compute-1 sudo[205395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjqqomodywutzoqlqmnnziljaqhmmanl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406545.8104002-1469-149266886882332/AnsiballZ_stat.py'
Oct 02 12:02:26 compute-1 sudo[205395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:26 compute-1 python3.9[205397]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:26 compute-1 sudo[205395]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:26 compute-1 sudo[205547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjksjnsvejewqunecpqkvolmyetgraow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406546.643916-1496-201219941713621/AnsiballZ_stat.py'
Oct 02 12:02:26 compute-1 sudo[205547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:27 compute-1 python3.9[205549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:27 compute-1 sudo[205547]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:27 compute-1 ceph-mon[80926]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:27.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:27 compute-1 sudo[205699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagmssdzcavvngwgyjwcgkksqtwxquyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406547.398631-1520-128691218632933/AnsiballZ_stat.py'
Oct 02 12:02:27 compute-1 sudo[205699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:27 compute-1 python3.9[205701]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:27 compute-1 sudo[205699]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:27.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:28 compute-1 sudo[205822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfkfogurjhjkpkznllpwmmushsnzoka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406547.398631-1520-128691218632933/AnsiballZ_copy.py'
Oct 02 12:02:28 compute-1 sudo[205822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:28 compute-1 python3.9[205824]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406547.398631-1520-128691218632933/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:28 compute-1 sudo[205822]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:29 compute-1 sudo[205984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnucuezhpvmtslqxbpqqzamvklqwidmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406548.7849011-1565-99123783835151/AnsiballZ_command.py'
Oct 02 12:02:29 compute-1 sudo[205984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:29 compute-1 podman[205948]: 2025-10-02 12:02:29.227024303 +0000 UTC m=+0.055020328 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Oct 02 12:02:29 compute-1 python3.9[205993]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:02:29 compute-1 sudo[205984]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:29 compute-1 ceph-mon[80926]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:29.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:29.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:29 compute-1 sudo[206146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjczzllmahagdckafneeovxghoafdhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406549.6766636-1589-151545367480052/AnsiballZ_lineinfile.py'
Oct 02 12:02:29 compute-1 sudo[206146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:30 compute-1 python3.9[206148]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:30 compute-1 sudo[206146]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:30 compute-1 sudo[206298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kahyiolyotxdkknhprhttcsuwavxipms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406550.4256732-1613-16307776015514/AnsiballZ_replace.py'
Oct 02 12:02:30 compute-1 sudo[206298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:31 compute-1 python3.9[206300]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:31 compute-1 sudo[206298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:31 compute-1 ceph-mon[80926]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:31 compute-1 sudo[206450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgxllyltwnnjeyeuxthzqhacibifnyyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406551.3483489-1638-105579976913906/AnsiballZ_replace.py'
Oct 02 12:02:31 compute-1 sudo[206450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:31 compute-1 python3.9[206452]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:31 compute-1 sudo[206450]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:32 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 02 12:02:32 compute-1 sudo[206603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzfolmoacnccsbznsiaziypdraicgyml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406552.114354-1664-97246715393463/AnsiballZ_lineinfile.py'
Oct 02 12:02:32 compute-1 sudo[206603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:32 compute-1 python3.9[206605]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:32 compute-1 sudo[206603]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:32 compute-1 sudo[206755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpjrvhkztzwxpjoytpnefgiygzrmhssj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406552.7365382-1664-71454911983054/AnsiballZ_lineinfile.py'
Oct 02 12:02:32 compute-1 sudo[206755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:33 compute-1 python3.9[206757]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:33 compute-1 sudo[206755]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:33 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 12:02:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:33 compute-1 ceph-mon[80926]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:33 compute-1 sudo[206908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyedgrtjaxraeuctopdqxxcahgpzvtoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406553.3297362-1664-235996253555972/AnsiballZ_lineinfile.py'
Oct 02 12:02:33 compute-1 sudo[206908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:33 compute-1 python3.9[206910]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:33 compute-1 sudo[206908]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:33.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:34 compute-1 sudo[207060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkcitxpyqybwdceddanxszrghmazbvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406553.9229932-1664-233779665731543/AnsiballZ_lineinfile.py'
Oct 02 12:02:34 compute-1 sudo[207060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:34 compute-1 python3.9[207062]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:34 compute-1 sudo[207060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:35 compute-1 sudo[207212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxrqjoqbgeydacfkwlsuemtfitehkdnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406554.9694557-1751-49176967814252/AnsiballZ_stat.py'
Oct 02 12:02:35 compute-1 sudo[207212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:35 compute-1 python3.9[207214]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:02:35 compute-1 sudo[207212]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:35.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:35 compute-1 ceph-mon[80926]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:35.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:36 compute-1 sudo[207366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjsftuhpdisrxueqcbollqrxtgjijlsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406555.780512-1775-168901033938412/AnsiballZ_file.py'
Oct 02 12:02:36 compute-1 sudo[207366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:36 compute-1 python3.9[207368]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:36 compute-1 sudo[207366]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:37 compute-1 sudo[207518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poikbugmsfjuxlsjaduaazoluijrkmjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406556.7102797-1802-267022782502600/AnsiballZ_file.py'
Oct 02 12:02:37 compute-1 sudo[207518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:37 compute-1 python3.9[207520]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:37 compute-1 sudo[207518]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:37 compute-1 ceph-mon[80926]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:37 compute-1 sudo[207670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjlhvpwgbxisabgcjddamwwmlibuhypa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406557.5746899-1826-151017379175704/AnsiballZ_stat.py'
Oct 02 12:02:37 compute-1 sudo[207670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:37.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:38 compute-1 python3.9[207672]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:38 compute-1 sudo[207670]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:38 compute-1 sudo[207748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzggawdwxztcjkqhvuqobgnzqdnjvejh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406557.5746899-1826-151017379175704/AnsiballZ_file.py'
Oct 02 12:02:38 compute-1 sudo[207748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:38 compute-1 python3.9[207750]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:38 compute-1 sudo[207748]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:38 compute-1 sudo[207900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiswbqspeyqsfwvweckijhxygyaagjmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406558.6357477-1826-30874917097134/AnsiballZ_stat.py'
Oct 02 12:02:38 compute-1 sudo[207900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:39 compute-1 python3.9[207902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:39 compute-1 sudo[207900]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:39 compute-1 sudo[207978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bikpphnljutpwcmifuiqbosmlovkzyyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406558.6357477-1826-30874917097134/AnsiballZ_file.py'
Oct 02 12:02:39 compute-1 sudo[207978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:39.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:39 compute-1 python3.9[207980]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:39 compute-1 ceph-mon[80926]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:39 compute-1 sudo[207978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:39.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:40 compute-1 sudo[208130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmflipvkkqdosbloiwkropgafmdjlisu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406559.99721-1895-281042522997985/AnsiballZ_file.py'
Oct 02 12:02:40 compute-1 sudo[208130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:40 compute-1 python3.9[208132]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:40 compute-1 sudo[208130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:40 compute-1 sudo[208282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rccdsbqmorxlmcuizeihkbajpadihriq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406560.7019567-1919-23187646115090/AnsiballZ_stat.py'
Oct 02 12:02:40 compute-1 sudo[208282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:41 compute-1 python3.9[208284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:41 compute-1 sudo[208282]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:41 compute-1 sudo[208360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csbwfmreuucggnjtugwfkerhhkshjwfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406560.7019567-1919-23187646115090/AnsiballZ_file.py'
Oct 02 12:02:41 compute-1 sudo[208360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:41 compute-1 ceph-mon[80926]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:41 compute-1 python3.9[208362]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:41 compute-1 sudo[208360]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:41 compute-1 podman[208363]: 2025-10-02 12:02:41.82878057 +0000 UTC m=+0.073339855 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:02:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:42 compute-1 sudo[208534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqocvgqooygymmxtpuirzivdjqnzypze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406561.9259727-1956-206493001475946/AnsiballZ_stat.py'
Oct 02 12:02:42 compute-1 sudo[208534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:42 compute-1 python3.9[208536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:42 compute-1 sudo[208534]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:42 compute-1 sudo[208612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxpmelrjafhphvwlzcrahnmpsxwnegnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406561.9259727-1956-206493001475946/AnsiballZ_file.py'
Oct 02 12:02:42 compute-1 sudo[208612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:42 compute-1 python3.9[208614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:42 compute-1 sudo[208612]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:43 compute-1 sudo[208764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obihssmhdbzrylwlwygsqqmafkgfbjyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406563.1794164-1992-249074440633065/AnsiballZ_systemd.py'
Oct 02 12:02:43 compute-1 sudo[208764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:43.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:43 compute-1 ceph-mon[80926]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:43 compute-1 python3.9[208766]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:43 compute-1 systemd[1]: Reloading.
Oct 02 12:02:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:43.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:43 compute-1 systemd-rc-local-generator[208792]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:43 compute-1 systemd-sysv-generator[208796]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:44 compute-1 sudo[208764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:44 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 02 12:02:44 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 02 12:02:44 compute-1 sudo[208956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgjmeaqpfithdamptenzoaudezsfjplr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406564.5314465-2015-9366079831968/AnsiballZ_stat.py'
Oct 02 12:02:44 compute-1 sudo[208956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:45 compute-1 python3.9[208958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:45 compute-1 sudo[208956]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:45 compute-1 sudo[209034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bioaorpwvsoclvcnxkeaajpcdcpnsmvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406564.5314465-2015-9366079831968/AnsiballZ_file.py'
Oct 02 12:02:45 compute-1 sudo[209034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:45 compute-1 python3.9[209036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:45 compute-1 sudo[209034]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:45 compute-1 ceph-mon[80926]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:46 compute-1 sudo[209186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otauofvvqcrvcvzevmfejgqbswkcygie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406565.7944672-2051-253221330770488/AnsiballZ_stat.py'
Oct 02 12:02:46 compute-1 sudo[209186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:46 compute-1 python3.9[209188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:46 compute-1 sudo[209186]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:46 compute-1 sudo[209264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twmegtnbongfaiykgtfiuujdueawmxir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406565.7944672-2051-253221330770488/AnsiballZ_file.py'
Oct 02 12:02:46 compute-1 sudo[209264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:46 compute-1 python3.9[209266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:46 compute-1 sudo[209264]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:47 compute-1 sudo[209416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvlziqjfqddolffmokfttdclgnpyczhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406567.1093209-2087-254953101671101/AnsiballZ_systemd.py'
Oct 02 12:02:47 compute-1 sudo[209416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:47 compute-1 python3.9[209418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:02:47 compute-1 systemd[1]: Reloading.
Oct 02 12:02:47 compute-1 ceph-mon[80926]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:47 compute-1 systemd-rc-local-generator[209445]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:02:47 compute-1 systemd-sysv-generator[209450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:02:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:48 compute-1 systemd[1]: Starting Create netns directory...
Oct 02 12:02:48 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 02 12:02:48 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 02 12:02:48 compute-1 systemd[1]: Finished Create netns directory.
Oct 02 12:02:48 compute-1 sudo[209416]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:48 compute-1 sudo[209611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfjtfgfmynorzqgxdkdsxleuorqhotkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406568.6791303-2117-261631056570991/AnsiballZ_file.py'
Oct 02 12:02:48 compute-1 sudo[209611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:49 compute-1 python3.9[209613]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:49 compute-1 sudo[209611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:49 compute-1 ceph-mon[80926]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:49 compute-1 sudo[209763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdajumycfvkuuhrbdmkipebtdbmlwlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406569.5594912-2141-52551594554319/AnsiballZ_stat.py'
Oct 02 12:02:49 compute-1 sudo[209763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:49.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:50 compute-1 python3.9[209765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:50 compute-1 sudo[209763]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:50 compute-1 sudo[209886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eplgmocsbbddognmmgvjthosquhvopyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406569.5594912-2141-52551594554319/AnsiballZ_copy.py'
Oct 02 12:02:50 compute-1 sudo[209886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:50 compute-1 python3.9[209888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406569.5594912-2141-52551594554319/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:50 compute-1 sudo[209886]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:51 compute-1 sudo[210038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzbizslzjfzmraunlolfxnkdasofubbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406571.2451181-2192-73572874157318/AnsiballZ_file.py'
Oct 02 12:02:51 compute-1 sudo[210038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:51.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:51 compute-1 python3.9[210040]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:02:51 compute-1 sudo[210038]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:51 compute-1 ceph-mon[80926]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:51.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:52 compute-1 sudo[210190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhhqxraibmktrfrfvqxwlztufkpyhmzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406572.0418603-2216-109536775016369/AnsiballZ_stat.py'
Oct 02 12:02:52 compute-1 sudo[210190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:52 compute-1 python3.9[210192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:02:52 compute-1 sudo[210190]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:52 compute-1 sudo[210313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stkwfpgikhdrakhvlbmjlwslxxxbafjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406572.0418603-2216-109536775016369/AnsiballZ_copy.py'
Oct 02 12:02:52 compute-1 sudo[210313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:53 compute-1 python3.9[210315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406572.0418603-2216-109536775016369/.source.json _original_basename=.4srd60oy follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:53 compute-1 sudo[210313]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:53.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:53 compute-1 sudo[210465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjijniycjkebvtfuindfcwbvlvafswks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406573.4570718-2261-88471728346421/AnsiballZ_file.py'
Oct 02 12:02:53 compute-1 sudo[210465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:53 compute-1 ceph-mon[80926]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:53 compute-1 python3.9[210467]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:02:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:53.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:53 compute-1 sudo[210465]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:54 compute-1 sudo[210626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqdyfomqicmkxlvzgvxmognzmnjwutic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406574.30694-2285-30890266886567/AnsiballZ_stat.py'
Oct 02 12:02:54 compute-1 sudo[210626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:54 compute-1 podman[210591]: 2025-10-02 12:02:54.6735464 +0000 UTC m=+0.090087631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:02:54 compute-1 sudo[210626]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:55 compute-1 sudo[210765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwapuqcmudgczgxzwiofhaxdzyafijwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406574.30694-2285-30890266886567/AnsiballZ_copy.py'
Oct 02 12:02:55 compute-1 sudo[210765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:55 compute-1 sudo[210765]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:55.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:55 compute-1 ceph-mon[80926]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:02:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:02:56 compute-1 sudo[210917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaydbjuymobykrgepyudzamyeudgbtti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406576.1076488-2336-76078714742044/AnsiballZ_container_config_data.py'
Oct 02 12:02:56 compute-1 sudo[210917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:56 compute-1 python3.9[210919]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 02 12:02:56 compute-1 sudo[210917]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:02:57 compute-1 sudo[211069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgwimlvnmaxumxwwgfmrjskemhkcokoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406577.0172825-2363-118043577343003/AnsiballZ_container_config_hash.py'
Oct 02 12:02:57 compute-1 sudo[211069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:57 compute-1 python3.9[211071]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:02:57 compute-1 sudo[211069]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:57.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:57 compute-1 ceph-mon[80926]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:57.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:58 compute-1 sudo[211221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plwyoxkkztkzqtzhjelqklzwbscytisi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406577.8743043-2390-203353813848064/AnsiballZ_podman_container_info.py'
Oct 02 12:02:58 compute-1 sudo[211221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:02:58 compute-1 python3.9[211223]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 02 12:02:58 compute-1 sudo[211221]: pam_unix(sudo:session): session closed for user root
Oct 02 12:02:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:02:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:02:59 compute-1 podman[211275]: 2025-10-02 12:02:59.794142784 +0000 UTC m=+0.052375637 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:02:59 compute-1 ceph-mon[80926]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:02:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:02:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:02:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:02:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:00 compute-1 sudo[211420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfrknekszuadobnyyjjteuubrdyblqvk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406579.7928889-2429-89784006655906/AnsiballZ_edpm_container_manage.py'
Oct 02 12:03:00 compute-1 sudo[211420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:00 compute-1 python3[211422]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:03:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:01 compute-1 podman[211434]: 2025-10-02 12:03:01.641594685 +0000 UTC m=+1.225001220 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:01 compute-1 podman[211491]: 2025-10-02 12:03:01.768005416 +0000 UTC m=+0.040007668 container create a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:03:01 compute-1 podman[211491]: 2025-10-02 12:03:01.745156308 +0000 UTC m=+0.017158580 image pull d8d739f82a6fecf9df690e49539b589e74665b54e36448657b874630717d5bd1 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:01 compute-1 python3[211422]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Oct 02 12:03:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:01 compute-1 ceph-mon[80926]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:01 compute-1 sudo[211420]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:02 compute-1 sudo[211679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aerkscicvnwkbheyebcmpphpqtdwemmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406582.5623019-2454-249212570319172/AnsiballZ_stat.py'
Oct 02 12:03:02 compute-1 sudo[211679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:03 compute-1 python3.9[211681]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:03 compute-1 sudo[211679]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:03.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:03 compute-1 sudo[211833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjlmbuacrbclxevtnoveyzhoyekbyyxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406583.477342-2480-11072554958976/AnsiballZ_file.py'
Oct 02 12:03:03 compute-1 sudo[211833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:03 compute-1 ceph-mon[80926]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:03 compute-1 python3.9[211835]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:03 compute-1 sudo[211833]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:04 compute-1 sudo[211909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxdqurvsrsyacpxrtwdoxvnfcnfwuflu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406583.477342-2480-11072554958976/AnsiballZ_stat.py'
Oct 02 12:03:04 compute-1 sudo[211909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:04 compute-1 python3.9[211911]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:04 compute-1 sudo[211909]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:04 compute-1 sudo[212060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubaipzjyajhgpdpfxqdaixcqgugpynxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.4633915-2480-180046824582250/AnsiballZ_copy.py'
Oct 02 12:03:04 compute-1 sudo[212060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:05 compute-1 python3.9[212062]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406584.4633915-2480-180046824582250/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:05 compute-1 sudo[212060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:05 compute-1 sudo[212136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbunjbhctfbkcawazfcypefembmlqtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.4633915-2480-180046824582250/AnsiballZ_systemd.py'
Oct 02 12:03:05 compute-1 sudo[212136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:05.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:05 compute-1 python3.9[212138]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:03:05 compute-1 systemd[1]: Reloading.
Oct 02 12:03:05 compute-1 systemd-rc-local-generator[212166]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:05 compute-1 systemd-sysv-generator[212169]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:05 compute-1 ceph-mon[80926]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:05.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:05 compute-1 sudo[212136]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:06 compute-1 sudo[212247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flyjbufmdgirtptjkcbjgtyvimlgrhnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406584.4633915-2480-180046824582250/AnsiballZ_systemd.py'
Oct 02 12:03:06 compute-1 sudo[212247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:06 compute-1 python3.9[212249]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:06 compute-1 systemd[1]: Reloading.
Oct 02 12:03:06 compute-1 systemd-rc-local-generator[212277]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:06 compute-1 systemd-sysv-generator[212280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:06 compute-1 systemd[1]: Starting multipathd container...
Oct 02 12:03:07 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:03:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:07 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct 02 12:03:07 compute-1 podman[212289]: 2025-10-02 12:03:07.056448823 +0000 UTC m=+0.099987313 container init a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 12:03:07 compute-1 multipathd[212304]: + sudo -E kolla_set_configs
Oct 02 12:03:07 compute-1 sudo[212310]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:03:07 compute-1 sudo[212310]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:07 compute-1 sudo[212310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:07 compute-1 podman[212289]: 2025-10-02 12:03:07.093681852 +0000 UTC m=+0.137220372 container start a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:07 compute-1 multipathd[212304]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:03:07 compute-1 multipathd[212304]: INFO:__main__:Validating config file
Oct 02 12:03:07 compute-1 multipathd[212304]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:03:07 compute-1 multipathd[212304]: INFO:__main__:Writing out command to execute
Oct 02 12:03:07 compute-1 sudo[212310]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-1 multipathd[212304]: ++ cat /run_command
Oct 02 12:03:07 compute-1 multipathd[212304]: + CMD='/usr/sbin/multipathd -d'
Oct 02 12:03:07 compute-1 multipathd[212304]: + ARGS=
Oct 02 12:03:07 compute-1 multipathd[212304]: + sudo kolla_copy_cacerts
Oct 02 12:03:07 compute-1 sudo[212325]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:03:07 compute-1 podman[212289]: multipathd
Oct 02 12:03:07 compute-1 sudo[212325]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:07 compute-1 sudo[212325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:07 compute-1 systemd[1]: Started multipathd container.
Oct 02 12:03:07 compute-1 sudo[212325]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-1 multipathd[212304]: + [[ ! -n '' ]]
Oct 02 12:03:07 compute-1 multipathd[212304]: + . kolla_extend_start
Oct 02 12:03:07 compute-1 multipathd[212304]: Running command: '/usr/sbin/multipathd -d'
Oct 02 12:03:07 compute-1 multipathd[212304]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 12:03:07 compute-1 multipathd[212304]: + umask 0022
Oct 02 12:03:07 compute-1 multipathd[212304]: + exec /usr/sbin/multipathd -d
Oct 02 12:03:07 compute-1 multipathd[212304]: 4450.783103 | --------start up--------
Oct 02 12:03:07 compute-1 multipathd[212304]: 4450.783123 | read /etc/multipath.conf
Oct 02 12:03:07 compute-1 multipathd[212304]: 4450.788595 | path checkers start up
Oct 02 12:03:07 compute-1 sudo[212247]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:07 compute-1 podman[212311]: 2025-10-02 12:03:07.198137023 +0000 UTC m=+0.094614913 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:03:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:07.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:07 compute-1 ceph-mon[80926]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:08 compute-1 python3.9[212492]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:03:08 compute-1 sudo[212519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:08 compute-1 sudo[212519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-1 sudo[212519]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:08 compute-1 sudo[212544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:03:08 compute-1 sudo[212544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-1 sudo[212544]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:08 compute-1 sudo[212590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:08 compute-1 sudo[212590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:08 compute-1 sudo[212590]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:08 compute-1 sudo[212638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:03:08 compute-1 sudo[212638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:09 compute-1 sudo[212756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emgrjcumxczugleejvkkiohbjnohvlhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406588.852514-2589-161392793287155/AnsiballZ_command.py'
Oct 02 12:03:09 compute-1 sudo[212756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:09 compute-1 python3.9[212760]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:03:09 compute-1 sudo[212756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:09 compute-1 sudo[212638]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:09 compute-1 ceph-mon[80926]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:09 compute-1 sudo[212940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmmfwxlvflekfdtyigxvnqvnamanoysw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406589.6646628-2612-59002623391865/AnsiballZ_systemd.py'
Oct 02 12:03:09 compute-1 sudo[212940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:10 compute-1 python3.9[212942]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:03:10 compute-1 systemd[1]: Stopping multipathd container...
Oct 02 12:03:10 compute-1 multipathd[212304]: 4453.983908 | exit (signal)
Oct 02 12:03:10 compute-1 multipathd[212304]: 4453.984515 | --------shut down-------
Oct 02 12:03:10 compute-1 systemd[1]: libpod-a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.scope: Deactivated successfully.
Oct 02 12:03:10 compute-1 podman[212946]: 2025-10-02 12:03:10.39787092 +0000 UTC m=+0.069624808 container died a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:10 compute-1 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-665ada711ac93a87.timer: Deactivated successfully.
Oct 02 12:03:10 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct 02 12:03:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:03:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c-merged.mount: Deactivated successfully.
Oct 02 12:03:10 compute-1 podman[212946]: 2025-10-02 12:03:10.545762946 +0000 UTC m=+0.217516824 container cleanup a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:03:10 compute-1 podman[212946]: multipathd
Oct 02 12:03:10 compute-1 podman[212973]: multipathd
Oct 02 12:03:10 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 02 12:03:10 compute-1 systemd[1]: Stopped multipathd container.
Oct 02 12:03:10 compute-1 systemd[1]: Starting multipathd container...
Oct 02 12:03:10 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:03:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2451788c6336cd767cffd00f70fbe6e392396a233e9e556ccfff54ee79567c9c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:03:10 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a.
Oct 02 12:03:10 compute-1 podman[212986]: 2025-10-02 12:03:10.740441381 +0000 UTC m=+0.103402149 container init a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Oct 02 12:03:10 compute-1 multipathd[213002]: + sudo -E kolla_set_configs
Oct 02 12:03:10 compute-1 sudo[213008]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 02 12:03:10 compute-1 sudo[213008]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:10 compute-1 sudo[213008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:10 compute-1 podman[212986]: 2025-10-02 12:03:10.768582185 +0000 UTC m=+0.131542943 container start a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:10 compute-1 podman[212986]: multipathd
Oct 02 12:03:10 compute-1 systemd[1]: Started multipathd container.
Oct 02 12:03:10 compute-1 multipathd[213002]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:03:10 compute-1 multipathd[213002]: INFO:__main__:Validating config file
Oct 02 12:03:10 compute-1 multipathd[213002]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:03:10 compute-1 multipathd[213002]: INFO:__main__:Writing out command to execute
Oct 02 12:03:10 compute-1 sudo[212940]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:10 compute-1 sudo[213008]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:10 compute-1 multipathd[213002]: ++ cat /run_command
Oct 02 12:03:10 compute-1 multipathd[213002]: + CMD='/usr/sbin/multipathd -d'
Oct 02 12:03:10 compute-1 multipathd[213002]: + ARGS=
Oct 02 12:03:10 compute-1 multipathd[213002]: + sudo kolla_copy_cacerts
Oct 02 12:03:10 compute-1 sudo[213027]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 02 12:03:10 compute-1 sudo[213027]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 02 12:03:10 compute-1 sudo[213027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 02 12:03:10 compute-1 sudo[213027]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:10 compute-1 multipathd[213002]: + [[ ! -n '' ]]
Oct 02 12:03:10 compute-1 multipathd[213002]: + . kolla_extend_start
Oct 02 12:03:10 compute-1 multipathd[213002]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 02 12:03:10 compute-1 multipathd[213002]: Running command: '/usr/sbin/multipathd -d'
Oct 02 12:03:10 compute-1 multipathd[213002]: + umask 0022
Oct 02 12:03:10 compute-1 multipathd[213002]: + exec /usr/sbin/multipathd -d
Oct 02 12:03:10 compute-1 podman[213009]: 2025-10-02 12:03:10.843340373 +0000 UTC m=+0.063870098 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 12:03:10 compute-1 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-aef7d2fc3870334.service: Main process exited, code=exited, status=1/FAILURE
Oct 02 12:03:10 compute-1 systemd[1]: a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a-aef7d2fc3870334.service: Failed with result 'exit-code'.
Oct 02 12:03:10 compute-1 multipathd[213002]: 4454.469036 | --------start up--------
Oct 02 12:03:10 compute-1 multipathd[213002]: 4454.469050 | read /etc/multipath.conf
Oct 02 12:03:10 compute-1 multipathd[213002]: 4454.474146 | path checkers start up
Oct 02 12:03:11 compute-1 sudo[213191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxjrttxemldlwukabovwckwdpfsoqza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406591.1808548-2637-73532354111601/AnsiballZ_file.py'
Oct 02 12:03:11 compute-1 sudo[213191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:11.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:11 compute-1 python3.9[213193]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:11 compute-1 sudo[213191]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:11 compute-1 ceph-mon[80926]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:03:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:03:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:12 compute-1 sudo[213353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irwlullwrsrnmivjwmlllpvchgmvddpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406592.2753549-2672-52546331630721/AnsiballZ_file.py'
Oct 02 12:03:12 compute-1 sudo[213353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:12 compute-1 podman[213317]: 2025-10-02 12:03:12.530018394 +0000 UTC m=+0.052771179 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:03:12 compute-1 python3.9[213361]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 02 12:03:12 compute-1 sudo[213353]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-1 sudo[213512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opydltxwnhymhjsnjgbdiweqpafdcakf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406593.0014868-2696-93088513056585/AnsiballZ_modprobe.py'
Oct 02 12:03:13 compute-1 sudo[213512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:13 compute-1 python3.9[213514]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 02 12:03:13 compute-1 kernel: Key type psk registered
Oct 02 12:03:13 compute-1 sudo[213512]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:13 compute-1 ceph-mon[80926]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:14 compute-1 sudo[213673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrlqbgoqrhhshnmfwhyffihrrwemgfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406593.7787657-2720-212600043158228/AnsiballZ_stat.py'
Oct 02 12:03:14 compute-1 sudo[213673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:14 compute-1 python3.9[213675]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:03:14 compute-1 sudo[213673]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:14 compute-1 sudo[213796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlqkwceeyivfebpdzrcwyrcydkznodsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406593.7787657-2720-212600043158228/AnsiballZ_copy.py'
Oct 02 12:03:14 compute-1 sudo[213796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:14 compute-1 python3.9[213798]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759406593.7787657-2720-212600043158228/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:14 compute-1 sudo[213796]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:15.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:15 compute-1 sudo[213948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llwnajgshxdghxhqeklelfgsrspocdec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406595.355822-2768-64234859617835/AnsiballZ_lineinfile.py'
Oct 02 12:03:15 compute-1 sudo[213948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:15 compute-1 python3.9[213950]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:15 compute-1 sudo[213948]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:15.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:15 compute-1 ceph-mon[80926]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:16 compute-1 sudo[214100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rldwvfzcofytsakmnjucaxosrjypcgrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406596.0556238-2792-155699168040102/AnsiballZ_systemd.py'
Oct 02 12:03:16 compute-1 sudo[214100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:16 compute-1 python3.9[214102]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:03:16 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 02 12:03:16 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 02 12:03:16 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 02 12:03:16 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 02 12:03:16 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 02 12:03:16 compute-1 sudo[214100]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:17 compute-1 sudo[214256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potwrmzcynsrkhvxcfijpzjcmawjucgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406597.0846157-2816-87242769410113/AnsiballZ_setup.py'
Oct 02 12:03:17 compute-1 sudo[214256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:17.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:17 compute-1 python3.9[214258]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 02 12:03:17 compute-1 ceph-mon[80926]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:17 compute-1 sudo[214256]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:18 compute-1 sudo[214267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:03:18 compute-1 sudo[214267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:18 compute-1 sudo[214267]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:18 compute-1 sudo[214299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:03:18 compute-1 sudo[214299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:03:18 compute-1 sudo[214299]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:18 compute-1 sudo[214390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdzosowualfkcuizpvllqftbeommxwqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406597.0846157-2816-87242769410113/AnsiballZ_dnf.py'
Oct 02 12:03:18 compute-1 sudo[214390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:18 compute-1 python3.9[214392]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 02 12:03:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:03:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:19.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:19.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:20 compute-1 ceph-mon[80926]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:21.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:22 compute-1 ceph-mon[80926]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:23.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:23.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:24 compute-1 ceph-mon[80926]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:24 compute-1 podman[214397]: 2025-10-02 12:03:24.846063875 +0000 UTC m=+0.103965887 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:03:25 compute-1 systemd[1]: Reloading.
Oct 02 12:03:25 compute-1 systemd-rc-local-generator[214450]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:25 compute-1 systemd-sysv-generator[214453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:25 compute-1 systemd[1]: Reloading.
Oct 02 12:03:25 compute-1 systemd-rc-local-generator[214483]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:25 compute-1 systemd-sysv-generator[214486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.903 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:03:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:03:25 compute-1 systemd-logind[795]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 02 12:03:25 compute-1 systemd-logind[795]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 02 12:03:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:25.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:26 compute-1 lvm[214534]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 12:03:26 compute-1 lvm[214534]: VG ceph_vg0 finished
Oct 02 12:03:26 compute-1 ceph-mon[80926]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:26 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 02 12:03:26 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 02 12:03:26 compute-1 systemd[1]: Reloading.
Oct 02 12:03:26 compute-1 systemd-rc-local-generator[214582]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:26 compute-1 systemd-sysv-generator[214585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:26 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 02 12:03:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:27 compute-1 sudo[214390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:27 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 02 12:03:27 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 02 12:03:27 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.579s CPU time.
Oct 02 12:03:27 compute-1 systemd[1]: run-r05d40a3efd9e43b8964a6196b6ca804e.service: Deactivated successfully.
Oct 02 12:03:27 compute-1 sudo[215869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmmkwfmcoalywybckzmmabvavepxmrsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406607.3554451-2852-201834854408535/AnsiballZ_file.py'
Oct 02 12:03:27 compute-1 sudo[215869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:27 compute-1 python3.9[215871]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:27 compute-1 sudo[215869]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:28 compute-1 ceph-mon[80926]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:28 compute-1 python3.9[216021]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 02 12:03:29 compute-1 sudo[216175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivyqfkylnpppzfyjmmbhufecoljyoxwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406609.288067-2904-230030359620770/AnsiballZ_file.py'
Oct 02 12:03:29 compute-1 sudo[216175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:29 compute-1 python3.9[216177]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:29 compute-1 sudo[216175]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:30 compute-1 ceph-mon[80926]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:30 compute-1 podman[216254]: 2025-10-02 12:03:30.815396129 +0000 UTC m=+0.063393542 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:03:30 compute-1 sudo[216344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lchhrixaqvjtgbkcxajsflqawveuerqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406610.3335922-2937-43200003364745/AnsiballZ_systemd_service.py'
Oct 02 12:03:30 compute-1 sudo[216344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:31 compute-1 python3.9[216346]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:03:31 compute-1 systemd[1]: Reloading.
Oct 02 12:03:31 compute-1 systemd-rc-local-generator[216371]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:03:31 compute-1 systemd-sysv-generator[216376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:03:31 compute-1 sudo[216344]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:31.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:32 compute-1 ceph-mon[80926]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:32 compute-1 python3.9[216531]: ansible-ansible.builtin.service_facts Invoked
Oct 02 12:03:32 compute-1 network[216548]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 02 12:03:32 compute-1 network[216549]: 'network-scripts' will be removed from distribution in near future.
Oct 02 12:03:32 compute-1 network[216550]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 02 12:03:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:33.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:34 compute-1 ceph-mon[80926]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:35.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:36.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:36 compute-1 ceph-mon[80926]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:37 compute-1 sudo[216826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrbfixtclntyiwfvstupfddozsspfxax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406617.138706-2994-237667597389899/AnsiballZ_systemd_service.py'
Oct 02 12:03:37 compute-1 sudo[216826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:37 compute-1 python3.9[216828]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:37 compute-1 sudo[216826]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:38 compute-1 ceph-mon[80926]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:38 compute-1 sudo[216979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbpacdnfbwmmtvpcerjwdyvyllrhtwnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406617.921007-2994-205084870479473/AnsiballZ_systemd_service.py'
Oct 02 12:03:38 compute-1 sudo[216979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:38 compute-1 python3.9[216981]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:38 compute-1 sudo[216979]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:38 compute-1 sudo[217132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trngmteoclidfvtxamygobylosnneecq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406618.698279-2994-156055297982968/AnsiballZ_systemd_service.py'
Oct 02 12:03:38 compute-1 sudo[217132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:39 compute-1 python3.9[217134]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:39 compute-1 sudo[217132]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:03:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:03:39 compute-1 sudo[217285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyngqhrqwsesjhepbxdngqzzbmxntpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406619.4772646-2994-33102323810669/AnsiballZ_systemd_service.py'
Oct 02 12:03:39 compute-1 sudo[217285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:40 compute-1 python3.9[217287]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:40 compute-1 sudo[217285]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:40 compute-1 ceph-mon[80926]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:40 compute-1 sudo[217438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqjrzvrwooogsggiindbqanriyoawqoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406620.2552145-2994-199544081469317/AnsiballZ_systemd_service.py'
Oct 02 12:03:40 compute-1 sudo[217438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:40 compute-1 python3.9[217440]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:41 compute-1 sudo[217438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:41 compute-1 podman[217442]: 2025-10-02 12:03:41.086006671 +0000 UTC m=+0.067690737 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:03:41 compute-1 sudo[217611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dieanjvggbaesxlgpoidhpzcsdonzehv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406621.162464-2994-220667576077220/AnsiballZ_systemd_service.py'
Oct 02 12:03:41 compute-1 sudo[217611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:41 compute-1 python3.9[217613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:41 compute-1 sudo[217611]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:42.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:42 compute-1 ceph-mon[80926]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:42 compute-1 sudo[217764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnxvzywtngwdnuvshdpsrdgmzydzmwxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406621.942552-2994-123338715975733/AnsiballZ_systemd_service.py'
Oct 02 12:03:42 compute-1 sudo[217764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:42 compute-1 python3.9[217766]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:42 compute-1 sudo[217764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:42 compute-1 podman[217768]: 2025-10-02 12:03:42.687101624 +0000 UTC m=+0.061509634 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:03:43 compute-1 sudo[217937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdavrfcuioqdvbgpybuvqxzpxwzkhcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406622.7993584-2994-131952772027406/AnsiballZ_systemd_service.py'
Oct 02 12:03:43 compute-1 sudo[217937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:43 compute-1 python3.9[217939]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:03:43 compute-1 sudo[217937]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:44 compute-1 ceph-mon[80926]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:44 compute-1 sudo[218090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsvmsamykuxwurtweknnypizbcfjmitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406624.1743498-3171-36814711906408/AnsiballZ_file.py'
Oct 02 12:03:44 compute-1 sudo[218090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:44 compute-1 python3.9[218092]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:44 compute-1 sudo[218090]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.761934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624762036, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1323, "num_deletes": 255, "total_data_size": 3127314, "memory_usage": 3172824, "flush_reason": "Manual Compaction"}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624772174, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2055989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16058, "largest_seqno": 17376, "table_properties": {"data_size": 2050299, "index_size": 3085, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11336, "raw_average_key_size": 18, "raw_value_size": 2038910, "raw_average_value_size": 3353, "num_data_blocks": 140, "num_entries": 608, "num_filter_entries": 608, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406505, "oldest_key_time": 1759406505, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 10282 microseconds, and 5763 cpu microseconds.
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.772226) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2055989 bytes OK
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.772247) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774115) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774131) EVENT_LOG_v1 {"time_micros": 1759406624774126, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3121062, prev total WAL file size 3121062, number of live WAL files 2.
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.775060) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2007KB)], [30(7712KB)]
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624775129, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9953463, "oldest_snapshot_seqno": -1}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4233 keys, 9583961 bytes, temperature: kUnknown
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624839407, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 9583961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9552969, "index_size": 19298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 105447, "raw_average_key_size": 24, "raw_value_size": 9473577, "raw_average_value_size": 2238, "num_data_blocks": 805, "num_entries": 4233, "num_filter_entries": 4233, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.839696) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9583961 bytes
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.841108) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.7 rd, 148.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(9.5) write-amplify(4.7) OK, records in: 4758, records dropped: 525 output_compression: NoCompression
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.841123) EVENT_LOG_v1 {"time_micros": 1759406624841116, "job": 16, "event": "compaction_finished", "compaction_time_micros": 64361, "compaction_time_cpu_micros": 34846, "output_level": 6, "num_output_files": 1, "total_output_size": 9583961, "num_input_records": 4758, "num_output_records": 4233, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624841570, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406624842608, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.774945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:44.842645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:45 compute-1 sudo[218242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnsmxntioqtboaqwvyqygggvxhmopmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406624.7407181-3171-197153796310484/AnsiballZ_file.py'
Oct 02 12:03:45 compute-1 sudo[218242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:45 compute-1 python3.9[218244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:45 compute-1 sudo[218242]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:45 compute-1 sudo[218394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctavvgvkxzjmzqlzvbidvygsifhsqmtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406625.491668-3171-127172864204874/AnsiballZ_file.py'
Oct 02 12:03:45 compute-1 sudo[218394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:45 compute-1 python3.9[218396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:45 compute-1 sudo[218394]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:46.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:46 compute-1 ceph-mon[80926]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:46 compute-1 sudo[218546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqyilrupzqondodymykufqgucbpcdzsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406626.1080675-3171-66108244470285/AnsiballZ_file.py'
Oct 02 12:03:46 compute-1 sudo[218546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:46 compute-1 python3.9[218548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:46 compute-1 sudo[218546]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:47 compute-1 sudo[218698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzlopqzdxqrhrzwxldlbglspafoqjgxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406626.724406-3171-205749523149642/AnsiballZ_file.py'
Oct 02 12:03:47 compute-1 sudo[218698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:47 compute-1 python3.9[218700]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:47 compute-1 sudo[218698]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:47 compute-1 sudo[218850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrregbyaddveboceiybzdaevswgunuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406627.3858976-3171-183251495542846/AnsiballZ_file.py'
Oct 02 12:03:47 compute-1 sudo[218850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:47.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:47 compute-1 python3.9[218852]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:47 compute-1 sudo[218850]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:48 compute-1 sudo[219002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfcihgjkgsjsvvptvsjrzjjieounizer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406627.9850605-3171-235874042399657/AnsiballZ_file.py'
Oct 02 12:03:48 compute-1 sudo[219002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:48 compute-1 ceph-mon[80926]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:48 compute-1 python3.9[219004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:48 compute-1 sudo[219002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:48 compute-1 sudo[219154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fifriaitqudkxojthuzlqrxjhxvrejzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406628.5598068-3171-217379731918699/AnsiballZ_file.py'
Oct 02 12:03:48 compute-1 sudo[219154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:48 compute-1 python3.9[219156]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:49 compute-1 sudo[219154]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:49.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:50 compute-1 ceph-mon[80926]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:50 compute-1 sudo[219306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeinlnydxkqousxuxqjxdupsqewyxmvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406630.0941505-3342-93439045667620/AnsiballZ_file.py'
Oct 02 12:03:50 compute-1 sudo[219306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:50 compute-1 python3.9[219308]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:50 compute-1 sudo[219306]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:51 compute-1 sudo[219458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrxztpezhufpajbphkiadyxmmibmmww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406630.7500095-3342-262841722930721/AnsiballZ_file.py'
Oct 02 12:03:51 compute-1 sudo[219458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:51 compute-1 python3.9[219460]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:51 compute-1 sudo[219458]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:51 compute-1 ceph-mon[80926]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:51 compute-1 sudo[219610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxrxjdvlvyfzockrstcnohsukxuadyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406631.4449306-3342-193868436876855/AnsiballZ_file.py'
Oct 02 12:03:51 compute-1 sudo[219610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:03:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:51.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:51 compute-1 python3.9[219612]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:03:51 compute-1 sudo[219610]: pam_unix(sudo:session): session closed for user root
Oct 02 12:03:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:52.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:53 compute-1 ceph-mon[80926]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:53.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:54.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:55 compute-1 ceph-mon[80926]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.458097) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635458528, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 251, "total_data_size": 310838, "memory_usage": 318808, "flush_reason": "Manual Compaction"}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635462105, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 204944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17381, "largest_seqno": 17730, "table_properties": {"data_size": 202801, "index_size": 307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5354, "raw_average_key_size": 18, "raw_value_size": 198624, "raw_average_value_size": 684, "num_data_blocks": 14, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406625, "oldest_key_time": 1759406625, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 4012 microseconds, and 1511 cpu microseconds.
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462142) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 204944 bytes OK
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.462159) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464068) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464083) EVENT_LOG_v1 {"time_micros": 1759406635464078, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 308433, prev total WAL file size 308433, number of live WAL files 2.
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.465031) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(200KB)], [33(9359KB)]
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635465088, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 9788905, "oldest_snapshot_seqno": -1}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 4013 keys, 7761775 bytes, temperature: kUnknown
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635509518, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7761775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7733892, "index_size": 16765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 101547, "raw_average_key_size": 25, "raw_value_size": 7659904, "raw_average_value_size": 1908, "num_data_blocks": 691, "num_entries": 4013, "num_filter_entries": 4013, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.509763) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7761775 bytes
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511036) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.0 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(85.6) write-amplify(37.9) OK, records in: 4523, records dropped: 510 output_compression: NoCompression
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.511057) EVENT_LOG_v1 {"time_micros": 1759406635511047, "job": 18, "event": "compaction_finished", "compaction_time_micros": 44496, "compaction_time_cpu_micros": 16029, "output_level": 6, "num_output_files": 1, "total_output_size": 7761775, "num_input_records": 4523, "num_output_records": 4013, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635511226, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406635513047, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.464975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:03:55.513128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:03:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:55.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:55 compute-1 podman[219637]: 2025-10-02 12:03:55.852032483 +0000 UTC m=+0.097991260 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:03:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:03:57 compute-1 ceph-mon[80926]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:03:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:57.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:03:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:03:58.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:03:59 compute-1 ceph-mon[80926]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:03:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:03:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:03:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:03:59.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:01 compute-1 anacron[1073]: Job `cron.weekly' started
Oct 02 12:04:01 compute-1 anacron[1073]: Job `cron.weekly' terminated
Oct 02 12:04:01 compute-1 ceph-mon[80926]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:01 compute-1 podman[219665]: 2025-10-02 12:04:01.55502999 +0000 UTC m=+0.063628381 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:04:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:01.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:03 compute-1 sudo[219809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgythroxkiumeoirppvjmvshuasvodr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406642.7489965-3342-31922925086082/AnsiballZ_file.py'
Oct 02 12:04:03 compute-1 sudo[219809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:03 compute-1 python3.9[219811]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:03 compute-1 sudo[219809]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:03 compute-1 ceph-mon[80926]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:03 compute-1 sudo[219961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eykmidvnihpjsgqmcjlrfztbfitafpwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406643.4376988-3342-229827656212869/AnsiballZ_file.py'
Oct 02 12:04:03 compute-1 sudo[219961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:03.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:03 compute-1 python3.9[219963]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:03 compute-1 sudo[219961]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:04 compute-1 sudo[220113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmbcjquymyhkagowxpdipxbjhpdfjpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406644.192067-3342-138652633849906/AnsiballZ_file.py'
Oct 02 12:04:04 compute-1 sudo[220113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:04 compute-1 python3.9[220115]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:04 compute-1 sudo[220113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:05 compute-1 sudo[220265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmwwzzvdzpvmbrionrvkbqrjhuzhdiou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406644.8196216-3342-214605654370312/AnsiballZ_file.py'
Oct 02 12:04:05 compute-1 sudo[220265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:05 compute-1 python3.9[220267]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:05 compute-1 sudo[220265]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:05 compute-1 ceph-mon[80926]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:05.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:05 compute-1 sudo[220417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcnszyifqbmyxymfwpodummunnelzaea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406645.4228473-3342-112548274947905/AnsiballZ_file.py'
Oct 02 12:04:05 compute-1 sudo[220417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:05 compute-1 python3.9[220419]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:06 compute-1 sudo[220417]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:06.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:07 compute-1 sudo[220569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtbmtbmgwcbbpzexpivbzilzsknlfggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406647.009257-3516-61902073148167/AnsiballZ_command.py'
Oct 02 12:04:07 compute-1 sudo[220569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:07 compute-1 python3.9[220571]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:07 compute-1 sudo[220569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:07 compute-1 ceph-mon[80926]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:08 compute-1 python3.9[220723]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 02 12:04:09 compute-1 sudo[220873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbntfjsgvyuyhruushpyobfqcsaiwyrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406648.9245987-3570-68892732452434/AnsiballZ_systemd_service.py'
Oct 02 12:04:09 compute-1 sudo[220873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:09 compute-1 python3.9[220875]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:04:09 compute-1 systemd[1]: Reloading.
Oct 02 12:04:09 compute-1 ceph-mon[80926]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:09 compute-1 systemd-sysv-generator[220906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:04:09 compute-1 systemd-rc-local-generator[220902]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:04:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:09 compute-1 sudo[220873]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:10.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:10 compute-1 sudo[221060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weifdtdaihiaxfpayjnbbouwtirvzjhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406650.2751796-3594-130291524949232/AnsiballZ_command.py'
Oct 02 12:04:10 compute-1 sudo[221060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:10 compute-1 python3.9[221062]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:10 compute-1 sudo[221060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:11 compute-1 sudo[221225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcriomdvztifgsofdvuscrlrnmdjzyvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406650.9304953-3594-164717116630058/AnsiballZ_command.py'
Oct 02 12:04:11 compute-1 sudo[221225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:11 compute-1 podman[221187]: 2025-10-02 12:04:11.219687657 +0000 UTC m=+0.066333675 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:04:11 compute-1 python3.9[221232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:11 compute-1 sudo[221225]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:11 compute-1 ceph-mon[80926]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:11.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:11 compute-1 sudo[221386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgiilepohkrzrdvwnoqlkmufgghfyqcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406651.5597386-3594-215286115086529/AnsiballZ_command.py'
Oct 02 12:04:11 compute-1 sudo[221386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:12 compute-1 python3.9[221388]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:12 compute-1 sudo[221386]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:12.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:12 compute-1 sudo[221539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwnrkzmstjqrbqtjmadiylhbkbnmgxeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406652.1680388-3594-51304021866750/AnsiballZ_command.py'
Oct 02 12:04:12 compute-1 sudo[221539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:12 compute-1 python3.9[221541]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:12 compute-1 sudo[221539]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:12 compute-1 podman[221567]: 2025-10-02 12:04:12.799361084 +0000 UTC m=+0.054465782 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:04:13 compute-1 sudo[221711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioatpvijzksyrwrqtpnwyhmgvfprpcmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406652.7775948-3594-107406160940060/AnsiballZ_command.py'
Oct 02 12:04:13 compute-1 sudo[221711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:13 compute-1 python3.9[221713]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:13 compute-1 sudo[221711]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:13 compute-1 sudo[221864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcifxsaeftsoxctwcqlziuadeimpszvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406653.3538392-3594-242414420942547/AnsiballZ_command.py'
Oct 02 12:04:13 compute-1 sudo[221864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:13 compute-1 ceph-mon[80926]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:13.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:13 compute-1 python3.9[221866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:13 compute-1 sudo[221864]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:14.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:14 compute-1 sudo[222017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwreuakqntskhqvlhhdbuqigrftjmff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406653.948196-3594-48720310641888/AnsiballZ_command.py'
Oct 02 12:04:14 compute-1 sudo[222017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:14 compute-1 python3.9[222019]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:14 compute-1 sudo[222017]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:15 compute-1 sudo[222170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjtexlqlteusipxrurrzvqwephwnzhfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406654.7650473-3594-101324450023244/AnsiballZ_command.py'
Oct 02 12:04:15 compute-1 sudo[222170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:15 compute-1 python3.9[222172]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 02 12:04:15 compute-1 sudo[222170]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:15 compute-1 ceph-mon[80926]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:16.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:17 compute-1 ceph-mon[80926]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:18.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:18 compute-1 sudo[222198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:18 compute-1 sudo[222198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-1 sudo[222198]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-1 sudo[222223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:04:18 compute-1 sudo[222223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-1 sudo[222223]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-1 sudo[222248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:18 compute-1 sudo[222248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-1 sudo[222248]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:18 compute-1 sudo[222294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:04:18 compute-1 sudo[222294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:18 compute-1 sudo[222435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htbeapgwmosnpzxyfalamdthqrcrzwit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406658.6330557-3801-270073654653956/AnsiballZ_file.py'
Oct 02 12:04:18 compute-1 sudo[222435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:19 compute-1 python3.9[222437]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:19 compute-1 sudo[222294]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-1 sudo[222435]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-1 sudo[222604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpimtawavnrvwetzuzgljjfskehkobma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406659.2468984-3801-100135118185868/AnsiballZ_file.py'
Oct 02 12:04:19 compute-1 sudo[222604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:19 compute-1 python3.9[222606]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:19 compute-1 ceph-mon[80926]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:04:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:04:19 compute-1 sudo[222604]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:04:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:20.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:04:20 compute-1 sudo[222756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gizoxtlwincjjelysowgjxhhncpwqsps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406659.8578362-3801-52844224809024/AnsiballZ_file.py'
Oct 02 12:04:20 compute-1 sudo[222756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:20 compute-1 python3.9[222758]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:20 compute-1 sudo[222756]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-1 sudo[222908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfjrkugkyofwtlwjcqvgehpjyukflhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406661.013486-3867-118705210682510/AnsiballZ_file.py'
Oct 02 12:04:21 compute-1 sudo[222908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:21 compute-1 python3.9[222910]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:21 compute-1 sudo[222908]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:21 compute-1 ceph-mon[80926]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:21.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:21 compute-1 sudo[223060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koyapkxfiuzzxcnjedzyrqkwagntfqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406661.6094785-3867-167521730398856/AnsiballZ_file.py'
Oct 02 12:04:21 compute-1 sudo[223060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:22 compute-1 python3.9[223062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:22 compute-1 sudo[223060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:22 compute-1 sudo[223212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okbbhxnttnmfyhxrasjlmuzubznulapc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406662.2660186-3867-85276484543094/AnsiballZ_file.py'
Oct 02 12:04:22 compute-1 sudo[223212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:22 compute-1 python3.9[223214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:22 compute-1 sudo[223212]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-1 sudo[223364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdywmujbmpelhkpbpjxyirhbjwppibgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406662.903725-3867-133544455680954/AnsiballZ_file.py'
Oct 02 12:04:23 compute-1 sudo[223364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:23 compute-1 python3.9[223366]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:23 compute-1 sudo[223364]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:23 compute-1 sudo[223516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqbejzdmgohnyhtfrkuupmqcckypysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406663.4464407-3867-187258143595578/AnsiballZ_file.py'
Oct 02 12:04:23 compute-1 sudo[223516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:23 compute-1 ceph-mon[80926]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:04:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:04:23 compute-1 python3.9[223518]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:23 compute-1 sudo[223516]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:24 compute-1 sudo[223668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqejxykpzuqbschemegfctzshnatkqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406664.0253298-3867-105618145369856/AnsiballZ_file.py'
Oct 02 12:04:24 compute-1 sudo[223668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:24 compute-1 python3.9[223670]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:24 compute-1 sudo[223668]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:24 compute-1 sudo[223820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haonrcppgyhhtvbqxombqsaplmjevrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406664.7104802-3867-71007320806275/AnsiballZ_file.py'
Oct 02 12:04:24 compute-1 sudo[223820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:25 compute-1 python3.9[223822]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:25 compute-1 sudo[223820]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:25 compute-1 sudo[223972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipmjfcpyircgxteadpknpdliidpsiwtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406665.3276598-3867-139067415971199/AnsiballZ_file.py'
Oct 02 12:04:25 compute-1 sudo[223972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:25 compute-1 python3.9[223974]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:25 compute-1 sudo[223972]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:25 compute-1 ceph-mon[80926]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:25.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.904 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:04:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:04:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:26 compute-1 sudo[224081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:04:26 compute-1 sudo[224081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:26 compute-1 sudo[224081]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-1 sudo[224182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlezkaoqzfwcmvckimbmsgzloqgobzwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406665.9089203-3867-154407326933100/AnsiballZ_file.py'
Oct 02 12:04:26 compute-1 sudo[224182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:26 compute-1 sudo[224136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:04:26 compute-1 sudo[224136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:04:26 compute-1 sudo[224136]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-1 podman[224121]: 2025-10-02 12:04:26.201247261 +0000 UTC m=+0.097917018 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:04:26 compute-1 python3.9[224193]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:26 compute-1 sudo[224182]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:04:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:27.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:27 compute-1 ceph-mon[80926]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:28.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:29.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:29 compute-1 ceph-mon[80926]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:31 compute-1 podman[224226]: 2025-10-02 12:04:31.799510457 +0000 UTC m=+0.057814697 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:04:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:31 compute-1 ceph-mon[80926]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:32.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:32 compute-1 sudo[224370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfkvamigdkqdjsuxahkaezjxlnrgfjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406672.546562-4234-70482851084317/AnsiballZ_getent.py'
Oct 02 12:04:32 compute-1 sudo[224370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:33 compute-1 python3.9[224372]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 02 12:04:33 compute-1 sudo[224370]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:33.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:33 compute-1 sudo[224523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oduphmjwagauquozeaojewgnngjtddih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406673.3236904-4258-148348408206211/AnsiballZ_group.py'
Oct 02 12:04:33 compute-1 sudo[224523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:34 compute-1 ceph-mon[80926]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:34 compute-1 python3.9[224525]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 02 12:04:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:34 compute-1 groupadd[224526]: group added to /etc/group: name=nova, GID=42436
Oct 02 12:04:34 compute-1 groupadd[224526]: group added to /etc/gshadow: name=nova
Oct 02 12:04:34 compute-1 groupadd[224526]: new group: name=nova, GID=42436
Oct 02 12:04:34 compute-1 sudo[224523]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:34 compute-1 sudo[224681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkzwkilrvzmhrbmzvbbshbqvtipbmwal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406674.434208-4282-36290944440582/AnsiballZ_user.py'
Oct 02 12:04:34 compute-1 sudo[224681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:35 compute-1 python3.9[224683]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 02 12:04:35 compute-1 useradd[224685]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 02 12:04:35 compute-1 useradd[224685]: add 'nova' to group 'libvirt'
Oct 02 12:04:35 compute-1 useradd[224685]: add 'nova' to shadow group 'libvirt'
Oct 02 12:04:35 compute-1 sudo[224681]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:35.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:36 compute-1 ceph-mon[80926]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:36.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:36 compute-1 sshd-session[224716]: Accepted publickey for zuul from 192.168.122.30 port 36144 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 12:04:36 compute-1 systemd-logind[795]: New session 52 of user zuul.
Oct 02 12:04:36 compute-1 systemd[1]: Started Session 52 of User zuul.
Oct 02 12:04:36 compute-1 sshd-session[224716]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 12:04:36 compute-1 sshd-session[224719]: Received disconnect from 192.168.122.30 port 36144:11: disconnected by user
Oct 02 12:04:36 compute-1 sshd-session[224719]: Disconnected from user zuul 192.168.122.30 port 36144
Oct 02 12:04:36 compute-1 sshd-session[224716]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:04:36 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Oct 02 12:04:36 compute-1 systemd-logind[795]: Session 52 logged out. Waiting for processes to exit.
Oct 02 12:04:36 compute-1 systemd-logind[795]: Removed session 52.
Oct 02 12:04:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:36 compute-1 python3.9[224869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:37 compute-1 python3.9[224990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406676.5179965-4357-183356514533463/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:37.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:38 compute-1 ceph-mon[80926]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:38 compute-1 python3.9[225140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:38 compute-1 python3.9[225216]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:39 compute-1 python3.9[225366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:39 compute-1 python3.9[225487]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406678.7337186-4357-233347205498610/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:39.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:40 compute-1 ceph-mon[80926]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:40 compute-1 python3.9[225637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:40 compute-1 python3.9[225758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406679.926514-4357-243549155125228/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:41 compute-1 podman[225882]: 2025-10-02 12:04:41.455225594 +0000 UTC m=+0.078337272 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:04:41 compute-1 python3.9[225919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:41.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:42 compute-1 ceph-mon[80926]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:42.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:42 compute-1 python3.9[226048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406681.1348033-4357-192622436252957/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:43 compute-1 sudo[226211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgowyvtamemwknxrhmmqjwfotnylfqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406683.0923676-4564-261355372984843/AnsiballZ_file.py'
Oct 02 12:04:43 compute-1 sudo[226211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:43 compute-1 podman[226172]: 2025-10-02 12:04:43.351236769 +0000 UTC m=+0.053562234 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:04:43 compute-1 python3.9[226218]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:43 compute-1 sudo[226211]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:43.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:44 compute-1 ceph-mon[80926]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:44 compute-1 sudo[226368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uroqjpzwrrvxmituqmactcdkdnouncut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406683.9448519-4588-243438508916911/AnsiballZ_copy.py'
Oct 02 12:04:44 compute-1 sudo[226368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:44 compute-1 python3.9[226370]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:04:44 compute-1 sudo[226368]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:44 compute-1 sudo[226520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqawjrrydsurulcfydeqasusjuduimgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406684.7000015-4612-229661179173691/AnsiballZ_stat.py'
Oct 02 12:04:44 compute-1 sudo[226520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:45 compute-1 python3.9[226522]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:04:45 compute-1 sudo[226520]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:45 compute-1 sudo[226672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txbavrtsnhuaoherrbwroiueeczqyyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406685.4838111-4636-31294943551518/AnsiballZ_stat.py'
Oct 02 12:04:45 compute-1 sudo[226672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:45.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:46 compute-1 python3.9[226674]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:46 compute-1 sudo[226672]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:46 compute-1 ceph-mon[80926]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:46 compute-1 sudo[226795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyhastcsxfbhimogreyhcopshttotwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406685.4838111-4636-31294943551518/AnsiballZ_copy.py'
Oct 02 12:04:46 compute-1 sudo[226795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:46 compute-1 python3.9[226797]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759406685.4838111-4636-31294943551518/.source _original_basename=.84wwjf3m follow=False checksum=62bef941a60b6885f43fe43facda790502381dd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 02 12:04:46 compute-1 sudo[226795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:47 compute-1 python3.9[226949]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:04:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:47.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:48.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:48 compute-1 ceph-mon[80926]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:48 compute-1 python3.9[227101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:48 compute-1 python3.9[227222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406687.763834-4714-245288903738854/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=f022386746472553146d29f689b545df70fa8a60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:49 compute-1 python3.9[227372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 02 12:04:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:49.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:50.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:50 compute-1 ceph-mon[80926]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:50 compute-1 python3.9[227493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759406689.2430182-4759-239984502605595/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 02 12:04:50 compute-1 sudo[227643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsxtujahmnuryljwousdtjgblfvisbdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406690.701407-4810-58548963393611/AnsiballZ_container_config_data.py'
Oct 02 12:04:50 compute-1 sudo[227643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:51 compute-1 python3.9[227645]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 02 12:04:51 compute-1 sudo[227643]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:51 compute-1 sudo[227795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcavievtrvwidroxyycstcqbbzteqtke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406691.6018505-4837-278713232825016/AnsiballZ_container_config_hash.py'
Oct 02 12:04:51 compute-1 sudo[227795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:52 compute-1 python3.9[227797]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:04:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:52 compute-1 sudo[227795]: pam_unix(sudo:session): session closed for user root
Oct 02 12:04:52 compute-1 ceph-mon[80926]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:52 compute-1 sudo[227947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvjagajttphiyeliavwtdjbascgwelg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406692.4781191-4867-91479685490831/AnsiballZ_edpm_container_manage.py'
Oct 02 12:04:52 compute-1 sudo[227947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:04:52 compute-1 python3[227949]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:04:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:54.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:54 compute-1 ceph-mon[80926]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:56.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:56 compute-1 ceph-mon[80926]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:04:57 compute-1 ceph-mon[80926]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:04:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:57.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:04:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:04:58.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:04:58 compute-1 podman[228002]: 2025-10-02 12:04:58.615286729 +0000 UTC m=+1.863103263 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:04:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:04:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:04:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:04:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:04:59 compute-1 ceph-mon[80926]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:02.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:02 compute-1 ceph-mon[80926]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:02 compute-1 podman[228050]: 2025-10-02 12:05:02.948065747 +0000 UTC m=+0.205637670 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:05:03 compute-1 podman[227963]: 2025-10-02 12:05:03.135753964 +0000 UTC m=+10.076548458 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:03 compute-1 podman[228092]: 2025-10-02 12:05:03.265196391 +0000 UTC m=+0.022581140 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:03 compute-1 podman[228092]: 2025-10-02 12:05:03.941732665 +0000 UTC m=+0.699117414 container create 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:05:03 compute-1 python3[227949]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 02 12:05:04 compute-1 sudo[227947]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:04.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:04 compute-1 ceph-mon[80926]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:04 compute-1 sudo[228280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahzefwnincmbhnujellaghzyozoywnni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406704.2727616-4891-99376304480815/AnsiballZ_stat.py'
Oct 02 12:05:04 compute-1 sudo[228280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:04 compute-1 python3.9[228282]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:04 compute-1 sudo[228280]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:05 compute-1 ceph-mon[80926]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:05 compute-1 sudo[228434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwqjsttsjuojnhwxzkdpodclrycrvtei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406705.417821-4928-95491799515129/AnsiballZ_container_config_data.py'
Oct 02 12:05:05 compute-1 sudo[228434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:05 compute-1 python3.9[228436]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 02 12:05:05 compute-1 sudo[228434]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:06.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:06 compute-1 sudo[228586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezlcnlvzkmmbnvcrcallqrvyylxdiajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406706.3498213-4954-207037580920657/AnsiballZ_container_config_hash.py'
Oct 02 12:05:06 compute-1 sudo[228586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:06 compute-1 python3.9[228588]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 02 12:05:06 compute-1 sudo[228586]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:07 compute-1 ceph-mon[80926]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:07 compute-1 sudo[228738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fomxzcnqrjfjqezgnpgaxfeklchnudlw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1759406707.305628-4984-63675819658892/AnsiballZ_edpm_container_manage.py'
Oct 02 12:05:07 compute-1 sudo[228738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:07 compute-1 python3[228740]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 02 12:05:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:08 compute-1 podman[228778]: 2025-10-02 12:05:08.092197227 +0000 UTC m=+0.065200169 container create 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:05:08 compute-1 podman[228778]: 2025-10-02 12:05:08.056727253 +0000 UTC m=+0.029730195 image pull e36f31143f26011980def9337d375f895bea59b742a3a2b372b996aa8ad58eba quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Oct 02 12:05:08 compute-1 python3[228740]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Oct 02 12:05:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:08.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:08 compute-1 sudo[228738]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:09 compute-1 sudo[228966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhenugtufmczbwcimyhfretgnpjpxikr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406709.7025337-5008-245398112184662/AnsiballZ_stat.py'
Oct 02 12:05:09 compute-1 sudo[228966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:10 compute-1 ceph-mon[80926]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:10 compute-1 python3.9[228968]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:10 compute-1 sudo[228966]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:10 compute-1 sudo[229120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzvkgzhmqueohshhwcezfwtfowclswbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406710.7257555-5035-98246903247251/AnsiballZ_file.py'
Oct 02 12:05:10 compute-1 sudo[229120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:11 compute-1 python3.9[229122]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:05:11 compute-1 sudo[229120]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:11 compute-1 sudo[229284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqagkzuhhwqgtizzsthtgnuyfmjxaukb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.2994633-5035-126091902027174/AnsiballZ_copy.py'
Oct 02 12:05:11 compute-1 sudo[229284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:11 compute-1 podman[229245]: 2025-10-02 12:05:11.735180446 +0000 UTC m=+0.061625547 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:05:11 compute-1 ceph-mon[80926]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:11.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:11 compute-1 python3.9[229292]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759406711.2994633-5035-126091902027174/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 02 12:05:11 compute-1 sudo[229284]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:12 compute-1 sudo[229368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwzzywtajzurqctaiqgjtbpfafgnoxqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.2994633-5035-126091902027174/AnsiballZ_systemd.py'
Oct 02 12:05:12 compute-1 sudo[229368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:12 compute-1 python3.9[229370]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 02 12:05:12 compute-1 systemd[1]: Reloading.
Oct 02 12:05:12 compute-1 systemd-rc-local-generator[229398]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:05:12 compute-1 systemd-sysv-generator[229401]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:05:12 compute-1 sudo[229368]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:13 compute-1 sudo[229479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpqsizbfeonyalughzhjmxrvgojgplmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406711.2994633-5035-126091902027174/AnsiballZ_systemd.py'
Oct 02 12:05:13 compute-1 sudo[229479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:13 compute-1 python3.9[229481]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 02 12:05:13 compute-1 systemd[1]: Reloading.
Oct 02 12:05:13 compute-1 podman[229483]: 2025-10-02 12:05:13.54678091 +0000 UTC m=+0.066707467 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:05:13 compute-1 systemd-rc-local-generator[229528]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 02 12:05:13 compute-1 systemd-sysv-generator[229531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 02 12:05:13 compute-1 systemd[1]: Starting nova_compute container...
Oct 02 12:05:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:13 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:05:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:13 compute-1 podman[229539]: 2025-10-02 12:05:13.959577809 +0000 UTC m=+0.105803485 container init 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:05:13 compute-1 podman[229539]: 2025-10-02 12:05:13.964997888 +0000 UTC m=+0.111223544 container start 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:05:13 compute-1 podman[229539]: nova_compute
Oct 02 12:05:13 compute-1 nova_compute[229555]: + sudo -E kolla_set_configs
Oct 02 12:05:13 compute-1 systemd[1]: Started nova_compute container.
Oct 02 12:05:14 compute-1 sudo[229479]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Validating config file
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying service configuration files
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Deleting /etc/ceph
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Creating directory /etc/ceph
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Writing out command to execute
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:14 compute-1 nova_compute[229555]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:14 compute-1 nova_compute[229555]: ++ cat /run_command
Oct 02 12:05:14 compute-1 nova_compute[229555]: + CMD=nova-compute
Oct 02 12:05:14 compute-1 nova_compute[229555]: + ARGS=
Oct 02 12:05:14 compute-1 nova_compute[229555]: + sudo kolla_copy_cacerts
Oct 02 12:05:14 compute-1 nova_compute[229555]: + [[ ! -n '' ]]
Oct 02 12:05:14 compute-1 nova_compute[229555]: + . kolla_extend_start
Oct 02 12:05:14 compute-1 nova_compute[229555]: Running command: 'nova-compute'
Oct 02 12:05:14 compute-1 nova_compute[229555]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 12:05:14 compute-1 nova_compute[229555]: + umask 0022
Oct 02 12:05:14 compute-1 nova_compute[229555]: + exec nova-compute
Oct 02 12:05:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:14 compute-1 ceph-mon[80926]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:15 compute-1 python3.9[229717]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:15 compute-1 ceph-mon[80926]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:16.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.480 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.480 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.481 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.481 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 12:05:16 compute-1 python3.9[229869]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.740 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:16 compute-1 nova_compute[229555]: 2025-10-02 12:05:16.759 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.414 2 INFO nova.virt.driver [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.547 2 INFO nova.compute.provider_config [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.563 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.564 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.565 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.566 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.567 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.568 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.569 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.570 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.571 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.572 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.573 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.574 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.575 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.576 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.577 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.577 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.578 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.579 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.580 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.581 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.582 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.583 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.584 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.585 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.586 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.587 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.588 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.589 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.590 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.591 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 python3.9[230021]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.592 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.593 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.594 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.595 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.596 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.597 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.598 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.599 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.600 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.601 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.602 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.603 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.604 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.605 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.606 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.607 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.608 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.609 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.610 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.611 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.612 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.613 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.614 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.615 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.616 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.617 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.618 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.619 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.620 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.621 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.622 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.623 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.624 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.625 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.626 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.627 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.628 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.629 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.630 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.631 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.632 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.633 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.634 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.635 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.636 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.637 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.638 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.639 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.640 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.641 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.642 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.643 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.644 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.645 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.646 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.647 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.648 2 WARNING oslo_config.cfg [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 12:05:17 compute-1 nova_compute[229555]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 12:05:17 compute-1 nova_compute[229555]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 12:05:17 compute-1 nova_compute[229555]: and ``live_migration_inbound_addr`` respectively.
Oct 02 12:05:17 compute-1 nova_compute[229555]: ).  Its value may be silently ignored in the future.
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.648 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.649 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.650 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.651 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.652 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.653 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.654 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.655 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.656 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.657 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.658 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.659 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.660 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.661 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.662 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.663 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.664 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.665 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.666 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.667 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.668 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.669 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.670 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.671 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.672 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.673 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.674 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.675 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.676 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.677 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.678 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.679 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.680 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.681 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.682 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.683 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.684 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.685 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.686 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.687 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.688 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.689 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.690 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.691 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.692 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.693 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.694 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.695 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.696 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.697 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.698 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.699 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.700 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.701 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.702 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.703 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.704 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.705 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.706 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.707 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.708 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.709 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.710 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.711 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.712 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.713 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.714 2 DEBUG oslo_service.service [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.715 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.732 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.733 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 12:05:17 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 02 12:05:17 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.809 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f03552825b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.812 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f03552825b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.812 2 INFO nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Connection event '1' reason 'None'
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.836 2 WARNING nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 02 12:05:17 compute-1 nova_compute[229555]: 2025-10-02 12:05:17.836 2 DEBUG nova.virt.libvirt.volume.mount [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 12:05:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:17 compute-1 ceph-mon[80926]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:18.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:18 compute-1 sudo[230231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzumoiehbvanlsejyxafjenouckziuxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406718.0543756-5215-1559356249132/AnsiballZ_podman_container.py'
Oct 02 12:05:18 compute-1 sudo[230231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:18 compute-1 python3.9[230233]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.592 2 INFO nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]: 
Oct 02 12:05:18 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <host>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <uuid>5d5cabb1-2c53-462b-89f3-16d4280c3e4c</uuid>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <arch>x86_64</arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model>EPYC-Rome-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <vendor>AMD</vendor>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <microcode version='16777317'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <signature family='23' model='49' stepping='0'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='x2apic'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='tsc-deadline'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='osxsave'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='hypervisor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='tsc_adjust'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='spec-ctrl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='stibp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='arch-capabilities'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='cmp_legacy'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='topoext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='virt-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='lbrv'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='tsc-scale'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='vmcb-clean'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='pause-filter'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='pfthreshold'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='svme-addr-chk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='rdctl-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='mds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature name='pschange-mc-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <pages unit='KiB' size='4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <pages unit='KiB' size='2048'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <pages unit='KiB' size='1048576'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <power_management>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <suspend_mem/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </power_management>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <iommu support='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <migration_features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <live/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <uri_transports>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <uri_transport>tcp</uri_transport>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <uri_transport>rdma</uri_transport>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </uri_transports>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </migration_features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <topology>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <cells num='1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <cell id='0'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <memory unit='KiB'>7864104</memory>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <distances>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <sibling id='0' value='10'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           </distances>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           <cpus num='8'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:           </cpus>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         </cell>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </cells>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </topology>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <cache>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </cache>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <secmodel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model>selinux</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <doi>0</doi>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </secmodel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <secmodel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model>dac</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <doi>0</doi>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </secmodel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </host>
Oct 02 12:05:18 compute-1 nova_compute[229555]: 
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <guest>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <os_type>hvm</os_type>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <arch name='i686'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <wordsize>32</wordsize>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <domain type='qemu'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <domain type='kvm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <pae/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <nonpae/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <apic default='on' toggle='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <cpuselection/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <deviceboot/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <externalSnapshot/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </guest>
Oct 02 12:05:18 compute-1 nova_compute[229555]: 
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <guest>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <os_type>hvm</os_type>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <arch name='x86_64'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <wordsize>64</wordsize>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <domain type='qemu'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <domain type='kvm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <apic default='on' toggle='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <cpuselection/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <deviceboot/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <externalSnapshot/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </guest>
Oct 02 12:05:18 compute-1 nova_compute[229555]: 
Oct 02 12:05:18 compute-1 nova_compute[229555]: </capabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]: 
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.600 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.628 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 12:05:18 compute-1 nova_compute[229555]: <domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <domain>kvm</domain>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <arch>i686</arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <vcpu max='240'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <iothreads supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <os supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='firmware'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <loader supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>rom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pflash</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='readonly'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>yes</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='secure'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </loader>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </os>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='maximumMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <vendor>AMD</vendor>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='succor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='custom' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-128'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-256'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-512'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 sudo[230231]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <memoryBacking supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='sourceType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>file</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>anonymous</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>memfd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </memoryBacking>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <disk supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='diskDevice'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>disk</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cdrom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>floppy</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>lun</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ide</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>fdc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>sata</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </disk>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <graphics supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vnc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egl-headless</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>dbus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </graphics>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <video supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='modelType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vga</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cirrus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>none</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>bochs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ramfb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </video>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hostdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='mode'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>subsystem</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='startupPolicy'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>mandatory</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>requisite</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>optional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='subsysType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pci</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='capsType'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='pciBackend'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hostdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <rng supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>random</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </rng>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <filesystem supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='driverType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>path</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>handle</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtiofs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </filesystem>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <tpm supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-tis</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-crb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emulator</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>external</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendVersion'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>2.0</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </tpm>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <redirdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </redirdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <channel supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pty</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>unix</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </channel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <crypto supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>qemu</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </crypto>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <interface supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>passt</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </interface>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <panic supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>isa</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>hyperv</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </panic>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <gic supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <genid supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backup supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <async-teardown supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <ps2 supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sev supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sgx supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hyperv supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='features'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>relaxed</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vapic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>spinlocks</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vpindex</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>runtime</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>synic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>stimer</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reset</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vendor_id</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>frequencies</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reenlightenment</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tlbflush</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ipi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>avic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emsr_bitmap</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>xmm_input</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hyperv>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <launchSecurity supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]: </domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.635 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 12:05:18 compute-1 nova_compute[229555]: <domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <domain>kvm</domain>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <arch>i686</arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <vcpu max='4096'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <iothreads supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <os supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='firmware'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <loader supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>rom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pflash</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='readonly'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>yes</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='secure'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </loader>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </os>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='maximumMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <vendor>AMD</vendor>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='succor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='custom' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-128'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-256'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-512'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <memoryBacking supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='sourceType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>file</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>anonymous</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>memfd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </memoryBacking>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <disk supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='diskDevice'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>disk</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cdrom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>floppy</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>lun</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>fdc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>sata</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </disk>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <graphics supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vnc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egl-headless</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>dbus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </graphics>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <video supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='modelType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vga</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cirrus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>none</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>bochs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ramfb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </video>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hostdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='mode'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>subsystem</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='startupPolicy'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>mandatory</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>requisite</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>optional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='subsysType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pci</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='capsType'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='pciBackend'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hostdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <rng supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>random</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </rng>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <filesystem supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='driverType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>path</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>handle</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtiofs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </filesystem>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <tpm supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-tis</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-crb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emulator</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>external</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendVersion'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>2.0</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </tpm>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <redirdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </redirdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <channel supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pty</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>unix</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </channel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <crypto supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>qemu</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </crypto>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <interface supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>passt</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </interface>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <panic supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>isa</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>hyperv</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </panic>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <gic supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <genid supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backup supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <async-teardown supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <ps2 supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sev supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sgx supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hyperv supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='features'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>relaxed</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vapic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>spinlocks</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vpindex</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>runtime</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>synic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>stimer</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reset</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vendor_id</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>frequencies</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reenlightenment</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tlbflush</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ipi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>avic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emsr_bitmap</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>xmm_input</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hyperv>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <launchSecurity supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]: </domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.662 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.666 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 12:05:18 compute-1 nova_compute[229555]: <domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <domain>kvm</domain>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <arch>x86_64</arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <vcpu max='240'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <iothreads supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <os supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='firmware'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <loader supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>rom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pflash</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='readonly'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>yes</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='secure'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </loader>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </os>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='maximumMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <vendor>AMD</vendor>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='succor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='custom' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-128'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-256'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-512'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <memoryBacking supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='sourceType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>file</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>anonymous</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>memfd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </memoryBacking>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <disk supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='diskDevice'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>disk</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cdrom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>floppy</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>lun</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ide</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>fdc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>sata</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </disk>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <graphics supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vnc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egl-headless</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>dbus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </graphics>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <video supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='modelType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vga</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cirrus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>none</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>bochs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ramfb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </video>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hostdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='mode'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>subsystem</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='startupPolicy'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>mandatory</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>requisite</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>optional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='subsysType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pci</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='capsType'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='pciBackend'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hostdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <rng supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>random</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </rng>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <filesystem supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='driverType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>path</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>handle</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtiofs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </filesystem>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <tpm supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-tis</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-crb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emulator</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>external</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendVersion'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>2.0</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </tpm>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <redirdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </redirdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <channel supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pty</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>unix</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </channel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <crypto supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>qemu</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </crypto>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <interface supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>passt</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </interface>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <panic supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>isa</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>hyperv</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </panic>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <gic supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <genid supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backup supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <async-teardown supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <ps2 supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sev supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sgx supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hyperv supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='features'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>relaxed</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vapic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>spinlocks</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vpindex</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>runtime</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>synic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>stimer</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reset</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vendor_id</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>frequencies</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reenlightenment</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tlbflush</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ipi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>avic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emsr_bitmap</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>xmm_input</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hyperv>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <launchSecurity supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]: </domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.722 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 12:05:18 compute-1 nova_compute[229555]: <domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <domain>kvm</domain>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <arch>x86_64</arch>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <vcpu max='4096'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <iothreads supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <os supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='firmware'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>efi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <loader supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>rom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pflash</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='readonly'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>yes</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='secure'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>yes</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>no</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </loader>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </os>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='maximumMigratable'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>on</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>off</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <vendor>AMD</vendor>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='succor'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <mode name='custom' supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Denverton-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='auto-ibrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amd-psfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='stibp-always-on'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='EPYC-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-128'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-256'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx10-512'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='prefetchiti'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Haswell-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512er'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512pf'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fma4'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tbm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xop'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='amx-tile'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-bf16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-fp16'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bitalg'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrc'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fzrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='la57'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='taa-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xfd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ifma'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cmpccxadd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fbsdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='fsrs'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ibrs-all'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mcdt-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pbrsb-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='psdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='serialize'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vaes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='hle'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='rtm'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512bw'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512cd'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512dq'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512f'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='avx512vl'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='invpcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pcid'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='pku'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='mpx'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='core-capability'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='split-lock-detect'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='cldemote'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='erms'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='gfni'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdir64b'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='movdiri'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='xsaves'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='athlon-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='core2duo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='coreduo-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='n270-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='ss'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <blockers model='phenom-v1'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnow'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <feature name='3dnowext'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </blockers>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </mode>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <memoryBacking supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <enum name='sourceType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>file</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>anonymous</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <value>memfd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </memoryBacking>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <disk supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='diskDevice'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>disk</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cdrom</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>floppy</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>lun</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>fdc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>sata</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </disk>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <graphics supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vnc</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egl-headless</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>dbus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </graphics>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <video supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='modelType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vga</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>cirrus</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>none</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>bochs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ramfb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </video>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hostdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='mode'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>subsystem</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='startupPolicy'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>mandatory</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>requisite</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>optional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='subsysType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pci</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>scsi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='capsType'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='pciBackend'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hostdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <rng supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtio-non-transitional</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>random</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>egd</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </rng>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <filesystem supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='driverType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>path</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>handle</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>virtiofs</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </filesystem>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <tpm supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-tis</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tpm-crb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emulator</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>external</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendVersion'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>2.0</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </tpm>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <redirdev supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='bus'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>usb</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </redirdev>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <channel supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>pty</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>unix</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </channel>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <crypto supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='type'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>qemu</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendModel'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>builtin</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </crypto>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <interface supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='backendType'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>default</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>passt</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </interface>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <panic supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='model'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>isa</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>hyperv</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </panic>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </devices>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <features>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <gic supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <genid supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <backup supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <async-teardown supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <ps2 supported='yes'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sev supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <sgx supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <hyperv supported='yes'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       <enum name='features'>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>relaxed</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vapic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>spinlocks</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vpindex</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>runtime</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>synic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>stimer</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reset</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>vendor_id</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>frequencies</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>reenlightenment</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>tlbflush</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>ipi</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>avic</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>emsr_bitmap</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:         <value>xmm_input</value>
Oct 02 12:05:18 compute-1 nova_compute[229555]:       </enum>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     </hyperv>
Oct 02 12:05:18 compute-1 nova_compute[229555]:     <launchSecurity supported='no'/>
Oct 02 12:05:18 compute-1 nova_compute[229555]:   </features>
Oct 02 12:05:18 compute-1 nova_compute[229555]: </domainCapabilities>
Oct 02 12:05:18 compute-1 nova_compute[229555]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.779 2 DEBUG nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.779 2 INFO nova.virt.libvirt.host [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Secure Boot support detected
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.781 2 INFO nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.792 2 DEBUG nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] cpu compare xml: <cpu match="exact">
Oct 02 12:05:18 compute-1 nova_compute[229555]:   <model>Nehalem</model>
Oct 02 12:05:18 compute-1 nova_compute[229555]: </cpu>
Oct 02 12:05:18 compute-1 nova_compute[229555]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.794 2 DEBUG nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.826 2 INFO nova.virt.node [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.851 2 WARNING nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Compute nodes ['730da6ce-9754-46f0-88e3-0019d056443f'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.885 2 INFO nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.920 2 WARNING nova.compute.manager [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.921 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.922 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:05:18 compute-1 nova_compute[229555]: 2025-10-02 12:05:18.922 2 DEBUG oslo_concurrency.processutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:19 compute-1 sudo[230431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhpnlrzpkdcgyyqminuencsbtdijobza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406719.0561705-5239-127555927909885/AnsiballZ_systemd.py'
Oct 02 12:05:19 compute-1 sudo[230431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3611304547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.371 2 DEBUG oslo_concurrency.processutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:19 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 02 12:05:19 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.715 2 WARNING nova.virt.libvirt.driver [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.716 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5264MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.716 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.717 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:19 compute-1 python3.9[230433]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.735 2 WARNING nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] No compute node record for compute-1.ctlplane.example.com:730da6ce-9754-46f0-88e3-0019d056443f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 730da6ce-9754-46f0-88e3-0019d056443f could not be found.
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.759 2 INFO nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 730da6ce-9754-46f0-88e3-0019d056443f
Oct 02 12:05:19 compute-1 systemd[1]: Stopping nova_compute container...
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.847 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.848 2 DEBUG nova.compute.resource_tracker [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:05:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.981 2 DEBUG oslo_concurrency.lockutils [None req-7e0a333e-fbb1-4f4e-9542-f26cf9588f0e - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:19 compute-1 nova_compute[229555]: 2025-10-02 12:05:19.982 2 DEBUG oslo_concurrency.lockutils [None req-821113e3-9684-482a-8f97-95bd5454b45d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:20.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:20 compute-1 virtqemud[230067]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 02 12:05:20 compute-1 virtqemud[230067]: hostname: compute-1
Oct 02 12:05:20 compute-1 virtqemud[230067]: End of file while reading data: Input/output error
Oct 02 12:05:20 compute-1 systemd[1]: libpod-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10.scope: Deactivated successfully.
Oct 02 12:05:20 compute-1 systemd[1]: libpod-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10.scope: Consumed 3.995s CPU time.
Oct 02 12:05:20 compute-1 podman[230461]: 2025-10-02 12:05:20.402228422 +0000 UTC m=+0.603316495 container died 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Oct 02 12:05:20 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10-userdata-shm.mount: Deactivated successfully.
Oct 02 12:05:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc-merged.mount: Deactivated successfully.
Oct 02 12:05:21 compute-1 ceph-mon[80926]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3611304547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2400835705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:21.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:22.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:23 compute-1 podman[230461]: 2025-10-02 12:05:23.518834255 +0000 UTC m=+3.719922338 container cleanup 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Oct 02 12:05:23 compute-1 podman[230461]: nova_compute
Oct 02 12:05:23 compute-1 podman[230490]: nova_compute
Oct 02 12:05:23 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 02 12:05:23 compute-1 systemd[1]: Stopped nova_compute container.
Oct 02 12:05:23 compute-1 systemd[1]: Starting nova_compute container...
Oct 02 12:05:23 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:05:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/682d5c3edb6e6889b9afc27cf3ef5f355477729e65f8fc9bf09b0c841c7b03cc/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:23 compute-1 podman[230502]: 2025-10-02 12:05:23.716987309 +0000 UTC m=+0.096079929 container init 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:05:23 compute-1 podman[230502]: 2025-10-02 12:05:23.722782801 +0000 UTC m=+0.101875381 container start 57f3e34ca6d5c20ef17d0e389a0c7241db4367c34c2f292790b9ac81a2cc5c10 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Oct 02 12:05:23 compute-1 nova_compute[230518]: + sudo -E kolla_set_configs
Oct 02 12:05:23 compute-1 podman[230502]: nova_compute
Oct 02 12:05:23 compute-1 systemd[1]: Started nova_compute container.
Oct 02 12:05:23 compute-1 sudo[230431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Validating config file
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying service configuration files
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /etc/ceph
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Creating directory /etc/ceph
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Writing out command to execute
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:23 compute-1 nova_compute[230518]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 02 12:05:23 compute-1 nova_compute[230518]: ++ cat /run_command
Oct 02 12:05:23 compute-1 nova_compute[230518]: + CMD=nova-compute
Oct 02 12:05:23 compute-1 nova_compute[230518]: + ARGS=
Oct 02 12:05:23 compute-1 nova_compute[230518]: + sudo kolla_copy_cacerts
Oct 02 12:05:23 compute-1 nova_compute[230518]: + [[ ! -n '' ]]
Oct 02 12:05:23 compute-1 nova_compute[230518]: + . kolla_extend_start
Oct 02 12:05:23 compute-1 nova_compute[230518]: Running command: 'nova-compute'
Oct 02 12:05:23 compute-1 nova_compute[230518]: + echo 'Running command: '\''nova-compute'\'''
Oct 02 12:05:23 compute-1 nova_compute[230518]: + umask 0022
Oct 02 12:05:23 compute-1 nova_compute[230518]: + exec nova-compute
Oct 02 12:05:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:25 compute-1 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:25 compute-1 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:25 compute-1 nova_compute[230518]: 2025-10-02 12:05:25.886 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 02 12:05:25 compute-1 nova_compute[230518]: 2025-10-02 12:05:25.887 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 02 12:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:05:25.905 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.020 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.042 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:26 compute-1 sudo[230559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:26 compute-1 sudo[230559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-1 sudo[230559]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-1 sudo[230584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:26 compute-1 sudo[230584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-1 sudo[230584]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.466 2 INFO nova.virt.driver [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 02 12:05:26 compute-1 sudo[230609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:26 compute-1 sudo[230609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-1 sudo[230609]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-1 sudo[230634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:05:26 compute-1 sudo[230634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.562 2 INFO nova.compute.provider_config [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.569 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.569 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_concurrency.lockutils [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.570 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.571 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.572 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.573 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.574 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.575 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.576 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.577 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.578 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.579 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.580 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.581 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.582 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.583 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.584 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.585 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.586 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.587 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.588 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.589 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.590 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.591 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.592 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.593 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.594 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.595 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.596 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.597 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.598 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.599 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.600 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.601 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.602 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.603 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.604 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.605 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.606 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.607 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.608 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.609 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.610 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.611 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.612 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.613 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.614 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.615 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.616 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.617 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.618 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.619 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.620 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.621 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.622 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.623 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.624 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.625 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.626 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.627 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.628 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.629 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.630 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.631 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.632 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.633 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.634 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.635 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.636 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.637 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.638 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.639 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.640 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.641 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.642 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.643 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.644 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.645 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.646 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.647 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.648 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.649 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.650 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.651 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.652 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.653 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.654 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.655 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.656 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.657 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.658 2 WARNING oslo_config.cfg [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 02 12:05:26 compute-1 nova_compute[230518]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 02 12:05:26 compute-1 nova_compute[230518]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 02 12:05:26 compute-1 nova_compute[230518]: and ``live_migration_inbound_addr`` respectively.
Oct 02 12:05:26 compute-1 nova_compute[230518]: ).  Its value may be silently ignored in the future.
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.658 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.659 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.660 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_secret_uuid        = 20fdc58c-b037-5094-a8ef-d490aa7c36f3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.661 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.662 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.663 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.664 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.665 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.666 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.667 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.668 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.669 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.670 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.671 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.672 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.673 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.674 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.675 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.676 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.677 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.678 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.679 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.680 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.681 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.682 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.683 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.684 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.685 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.686 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.687 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.688 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.689 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.690 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.691 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.692 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.693 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.694 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.695 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.696 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.697 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.698 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.699 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.700 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.701 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.702 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.703 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.704 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.705 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.706 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.707 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.708 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.709 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.710 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.711 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.712 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.713 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.714 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.715 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.716 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.717 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.718 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.719 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.720 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.721 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.722 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.723 2 DEBUG oslo_service.service [None req-5aa6d92d-cd0c-47d1-b8b4-50bccb745a61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.724 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.739 2 INFO nova.virt.node [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.740 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.741 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.751 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0adf050b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.753 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0adf050b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.754 2 INFO nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Connection event '1' reason 'None'
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.766 2 DEBUG nova.virt.libvirt.volume.mount [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.768 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host capabilities <capabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]: 
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <host>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <uuid>5d5cabb1-2c53-462b-89f3-16d4280c3e4c</uuid>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <arch>x86_64</arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model>EPYC-Rome-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <vendor>AMD</vendor>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <microcode version='16777317'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <signature family='23' model='49' stepping='0'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='x2apic'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='tsc-deadline'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='osxsave'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='hypervisor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='tsc_adjust'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='spec-ctrl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='stibp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='arch-capabilities'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='cmp_legacy'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='topoext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='virt-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='lbrv'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='tsc-scale'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='vmcb-clean'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='pause-filter'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='pfthreshold'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='svme-addr-chk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='rdctl-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='skip-l1dfl-vmentry'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='mds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature name='pschange-mc-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <pages unit='KiB' size='4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <pages unit='KiB' size='2048'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <pages unit='KiB' size='1048576'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <power_management>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <suspend_mem/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </power_management>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <iommu support='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <migration_features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <live/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <uri_transports>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <uri_transport>tcp</uri_transport>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <uri_transport>rdma</uri_transport>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </uri_transports>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </migration_features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <topology>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <cells num='1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <cell id='0'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <memory unit='KiB'>7864104</memory>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <pages unit='KiB' size='4'>1966026</pages>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <pages unit='KiB' size='2048'>0</pages>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <distances>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <sibling id='0' value='10'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           </distances>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           <cpus num='8'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:           </cpus>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         </cell>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </cells>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </topology>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <cache>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </cache>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <secmodel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model>selinux</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <doi>0</doi>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </secmodel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <secmodel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model>dac</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <doi>0</doi>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </secmodel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </host>
Oct 02 12:05:26 compute-1 nova_compute[230518]: 
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <guest>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <os_type>hvm</os_type>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <arch name='i686'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <wordsize>32</wordsize>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <domain type='qemu'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <domain type='kvm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <pae/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <nonpae/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <apic default='on' toggle='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <cpuselection/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <deviceboot/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <externalSnapshot/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </guest>
Oct 02 12:05:26 compute-1 nova_compute[230518]: 
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <guest>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <os_type>hvm</os_type>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <arch name='x86_64'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <wordsize>64</wordsize>
Oct 02 12:05:26 compute-1 sudo[230634]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <domain type='qemu'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <domain type='kvm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <acpi default='on' toggle='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <apic default='on' toggle='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <cpuselection/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <deviceboot/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <disksnapshot default='on' toggle='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <externalSnapshot/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </guest>
Oct 02 12:05:26 compute-1 nova_compute[230518]: 
Oct 02 12:05:26 compute-1 nova_compute[230518]: </capabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]: 
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.778 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.781 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 02 12:05:26 compute-1 nova_compute[230518]: <domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <domain>kvm</domain>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <arch>i686</arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <vcpu max='4096'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <iothreads supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <os supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='firmware'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <loader supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>rom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pflash</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='readonly'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>yes</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='secure'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </loader>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </os>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='maximumMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <vendor>AMD</vendor>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='succor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='custom' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-128'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-256'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-512'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <memoryBacking supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='sourceType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>file</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>anonymous</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>memfd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </memoryBacking>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <disk supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='diskDevice'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>disk</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cdrom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>floppy</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>lun</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>fdc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>sata</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <graphics supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vnc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egl-headless</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>dbus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <video supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='modelType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vga</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cirrus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>none</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>bochs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ramfb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </video>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hostdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='mode'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>subsystem</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='startupPolicy'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>mandatory</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>requisite</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>optional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='subsysType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pci</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='capsType'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='pciBackend'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hostdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <rng supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>random</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <filesystem supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='driverType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>path</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>handle</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtiofs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </filesystem>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <tpm supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-tis</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-crb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emulator</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>external</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendVersion'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>2.0</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </tpm>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <redirdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </redirdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <channel supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pty</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>unix</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </channel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <crypto supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>qemu</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </crypto>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <interface supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>passt</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <panic supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>isa</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>hyperv</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </panic>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <gic supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <genid supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backup supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <async-teardown supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <ps2 supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sev supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sgx supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hyperv supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='features'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>relaxed</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vapic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>spinlocks</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vpindex</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>runtime</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>synic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>stimer</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reset</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vendor_id</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>frequencies</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reenlightenment</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tlbflush</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ipi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>avic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emsr_bitmap</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>xmm_input</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hyperv>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <launchSecurity supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </features>
Oct 02 12:05:26 compute-1 nova_compute[230518]: </domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.786 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 02 12:05:26 compute-1 nova_compute[230518]: <domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <domain>kvm</domain>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <arch>i686</arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <vcpu max='240'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <iothreads supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <os supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='firmware'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <loader supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>rom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pflash</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='readonly'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>yes</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='secure'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </loader>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </os>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='maximumMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <vendor>AMD</vendor>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='succor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='custom' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-128'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-256'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-512'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <memoryBacking supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='sourceType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>file</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>anonymous</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>memfd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </memoryBacking>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <disk supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='diskDevice'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>disk</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cdrom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>floppy</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>lun</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ide</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>fdc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>sata</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <graphics supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vnc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egl-headless</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>dbus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <video supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='modelType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vga</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cirrus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>none</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>bochs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ramfb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </video>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hostdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='mode'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>subsystem</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='startupPolicy'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>mandatory</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>requisite</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>optional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='subsysType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pci</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='capsType'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='pciBackend'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hostdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <rng supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>random</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <filesystem supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='driverType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>path</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>handle</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtiofs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </filesystem>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <tpm supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-tis</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-crb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emulator</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>external</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendVersion'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>2.0</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </tpm>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <redirdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </redirdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <channel supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pty</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>unix</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </channel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <crypto supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>qemu</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </crypto>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <interface supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>passt</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <panic supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>isa</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>hyperv</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </panic>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <gic supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <genid supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backup supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <async-teardown supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <ps2 supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sev supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sgx supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hyperv supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='features'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>relaxed</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vapic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>spinlocks</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vpindex</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>runtime</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>synic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>stimer</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reset</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vendor_id</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>frequencies</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reenlightenment</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tlbflush</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ipi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>avic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emsr_bitmap</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>xmm_input</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hyperv>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <launchSecurity supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </features>
Oct 02 12:05:26 compute-1 nova_compute[230518]: </domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.837 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.841 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 02 12:05:26 compute-1 nova_compute[230518]: <domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <domain>kvm</domain>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <arch>x86_64</arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <vcpu max='4096'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <iothreads supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <os supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='firmware'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>efi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <loader supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>rom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pflash</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='readonly'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>yes</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='secure'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>yes</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </loader>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </os>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='maximumMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <vendor>AMD</vendor>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='succor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='custom' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 ceph-mon[80926]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-128'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-256'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx10-512'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Haswell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='athlon-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='core2duo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='coreduo-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='n270-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='phenom-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <memoryBacking supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='sourceType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>file</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>anonymous</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>memfd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </memoryBacking>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <disk supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='diskDevice'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>disk</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cdrom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>floppy</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>lun</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>fdc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>sata</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <graphics supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vnc</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egl-headless</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>dbus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <video supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='modelType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vga</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>cirrus</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>none</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>bochs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ramfb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </video>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hostdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='mode'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>subsystem</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='startupPolicy'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>mandatory</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>requisite</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>optional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='subsysType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pci</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='capsType'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='pciBackend'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hostdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <rng supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>random</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>egd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <filesystem supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='driverType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>path</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>handle</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>virtiofs</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </filesystem>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <tpm supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-tis</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tpm-crb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emulator</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>external</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendVersion'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>2.0</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </tpm>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <redirdev supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </redirdev>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <channel supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pty</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>unix</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </channel>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <crypto supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>qemu</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </crypto>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <interface supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='backendType'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>passt</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <panic supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>isa</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>hyperv</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </panic>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <features>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <gic supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <genid supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <backup supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <async-teardown supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <ps2 supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sev supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <sgx supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <hyperv supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='features'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>relaxed</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vapic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>spinlocks</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vpindex</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>runtime</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>synic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>stimer</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reset</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>vendor_id</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>frequencies</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>reenlightenment</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>tlbflush</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>ipi</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>avic</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>emsr_bitmap</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>xmm_input</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </hyperv>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <launchSecurity supported='no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </features>
Oct 02 12:05:26 compute-1 nova_compute[230518]: </domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:26 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.913 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 02 12:05:26 compute-1 nova_compute[230518]: <domainCapabilities>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <path>/usr/libexec/qemu-kvm</path>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <domain>kvm</domain>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <arch>x86_64</arch>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <vcpu max='240'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <iothreads supported='yes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <os supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <enum name='firmware'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <loader supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>rom</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>pflash</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='readonly'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>yes</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='secure'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>no</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </loader>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   </os>
Oct 02 12:05:26 compute-1 nova_compute[230518]:   <cpu>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-passthrough' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='hostPassthroughMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='maximum' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <enum name='maximumMigratable'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>on</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <value>off</value>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='host-model' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <vendor>AMD</vendor>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='x2apic'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-deadline'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='hypervisor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc_adjust'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='spec-ctrl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='stibp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='arch-capabilities'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='cmp_legacy'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='overflow-recov'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='succor'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='amd-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='virt-ssbd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lbrv'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='tsc-scale'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='vmcb-clean'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='flushbyasid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pause-filter'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pfthreshold'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='svme-addr-chk'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rdctl-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='mds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='pschange-mc-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='gds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='require' name='rfds-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <feature policy='disable' name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:26 compute-1 nova_compute[230518]:     <mode name='custom' supported='yes'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Broadwell-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v4'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cascadelake-Server-v5'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Cooperlake-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Denverton-v3'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='Dhyana-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Genoa-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='auto-ibrs'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Milan-v2'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='amd-psfd'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='no-nested-data-bp'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='null-sel-clr-base'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='stibp-always-on'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v1'>
Oct 02 12:05:26 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:26 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='EPYC-Rome-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='EPYC-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='EPYC-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='GraniteRapids-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx10'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx10-128'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx10-256'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx10-512'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='prefetchiti'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Haswell-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-noTSX'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v5'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v6'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Icelake-Server-v7'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='IvyBridge'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='IvyBridge-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='KnightsMill'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='KnightsMill-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-4fmaps'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-4vnniw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512er'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512pf'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Opteron_G4-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Opteron_G5-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fma4'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tbm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xop'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SapphireRapids-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='amx-tile'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-bf16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-fp16'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512-vpopcntdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bitalg'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vbmi2'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrc'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fzrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='la57'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='taa-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='tsx-ldtrk'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xfd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SierraForest'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='SierraForest-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-ifma'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-ne-convert'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx-vnni-int8'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='bus-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cmpccxadd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fbsdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='fsrs'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ibrs-all'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mcdt-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pbrsb-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='psdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='sbdr-ssdp-no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='serialize'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vaes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='vpclmulqdq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Client-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='hle'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='rtm'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Skylake-Server-v5'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512bw'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512cd'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512dq'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512f'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='avx512vl'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='invpcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pcid'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='pku'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Snowridge'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='mpx'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v2'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v3'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='core-capability'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='split-lock-detect'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='Snowridge-v4'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='cldemote'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='erms'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='gfni'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdir64b'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='movdiri'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='xsaves'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='athlon'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='athlon-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='core2duo'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='core2duo-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='coreduo'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='coreduo-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='n270'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='n270-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='ss'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='phenom'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <blockers model='phenom-v1'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnow'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <feature name='3dnowext'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </blockers>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </mode>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   <memoryBacking supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <enum name='sourceType'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <value>file</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <value>anonymous</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <value>memfd</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   </memoryBacking>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <disk supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='diskDevice'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>disk</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>cdrom</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>floppy</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>lun</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>ide</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>fdc</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>sata</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <graphics supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>vnc</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>egl-headless</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>dbus</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <video supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='modelType'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>vga</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>cirrus</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>none</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>bochs</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>ramfb</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </video>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <hostdev supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='mode'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>subsystem</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='startupPolicy'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>mandatory</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>requisite</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>optional</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='subsysType'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>pci</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>scsi</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='capsType'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='pciBackend'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </hostdev>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <rng supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio-transitional</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtio-non-transitional</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>random</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>egd</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <filesystem supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='driverType'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>path</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>handle</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>virtiofs</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </filesystem>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <tpm supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>tpm-tis</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>tpm-crb</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>emulator</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>external</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='backendVersion'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>2.0</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </tpm>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <redirdev supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='bus'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>usb</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </redirdev>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <channel supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>pty</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>unix</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </channel>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <crypto supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='model'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='type'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>qemu</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='backendModel'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>builtin</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </crypto>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <interface supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='backendType'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>default</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>passt</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <panic supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='model'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>isa</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>hyperv</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </panic>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   <features>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <gic supported='no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <vmcoreinfo supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <genid supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <backingStoreInput supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <backup supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <async-teardown supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <ps2 supported='yes'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <sev supported='no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <sgx supported='no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <hyperv supported='yes'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       <enum name='features'>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>relaxed</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>vapic</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>spinlocks</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>vpindex</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>runtime</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>synic</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>stimer</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>reset</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>vendor_id</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>frequencies</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>reenlightenment</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>tlbflush</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>ipi</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>avic</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>emsr_bitmap</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:         <value>xmm_input</value>
Oct 02 12:05:27 compute-1 nova_compute[230518]:       </enum>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     </hyperv>
Oct 02 12:05:27 compute-1 nova_compute[230518]:     <launchSecurity supported='no'/>
Oct 02 12:05:27 compute-1 nova_compute[230518]:   </features>
Oct 02 12:05:27 compute-1 nova_compute[230518]: </domainCapabilities>
Oct 02 12:05:27 compute-1 nova_compute[230518]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.969 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.970 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Secure Boot support detected
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.971 2 INFO nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.979 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] cpu compare xml: <cpu match="exact">
Oct 02 12:05:27 compute-1 nova_compute[230518]:   <model>Nehalem</model>
Oct 02 12:05:27 compute-1 nova_compute[230518]: </cpu>
Oct 02 12:05:27 compute-1 nova_compute[230518]:  _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:26.981 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.008 2 INFO nova.virt.node [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Determined node identity 730da6ce-9754-46f0-88e3-0019d056443f from /var/lib/nova/compute_id
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.025 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Verified node 730da6ce-9754-46f0-88e3-0019d056443f matches my host compute-1.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.045 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.089 2 ERROR nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Could not retrieve compute node resource provider 730da6ce-9754-46f0-88e3-0019d056443f and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-3b183d19-5264-4020-857c-220b8bdf190b"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-3b183d19-5264-4020-857c-220b8bdf190b"}]}
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.106 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.107 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.108 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:27 compute-1 sudo[230700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:27 compute-1 sudo[230700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-1 sudo[230700]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-1 sudo[230744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:05:27 compute-1 sudo[230744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-1 sudo[230744]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-1 sudo[230769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:27 compute-1 sudo[230769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-1 sudo[230769]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-1 sudo[230821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:05:27 compute-1 sudo[230821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1319389839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.625 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:27 compute-1 sudo[230958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlmcqipzeougdmvlyalmxebvqjqwdjdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1759406727.3706-5267-153418595062810/AnsiballZ_podman_container.py'
Oct 02 12:05:27 compute-1 sudo[230958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.771 2 WARNING nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.772 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.773 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.773 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:05:27 compute-1 ceph-mon[80926]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-1 ceph-mon[80926]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2216606571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-1 ceph-mon[80926]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1319389839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3916071039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:27 compute-1 sudo[230821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.885 2 ERROR nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-1ea64c30-273d-46c5-8122-88de7e5e885a"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '730da6ce-9754-46f0-88e3-0019d056443f' not found: No resource provider with uuid 730da6ce-9754-46f0-88e3-0019d056443f found  ", "request_id": "req-1ea64c30-273d-46c5-8122-88de7e5e885a"}]}
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.885 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.886 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:05:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:27.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:27 compute-1 python3.9[230960]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.970 2 INFO nova.scheduler.client.report [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [req-91e90fcf-724f-4024-b4e9-bedbd1a3f459] Created resource provider record via placement API for resource provider with UUID 730da6ce-9754-46f0-88e3-0019d056443f and name compute-1.ctlplane.example.com.
Oct 02 12:05:27 compute-1 nova_compute[230518]: 2025-10-02 12:05:27.988 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:05:28 compute-1 systemd[1]: Started libpod-conmon-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope.
Oct 02 12:05:28 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:05:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 02 12:05:28 compute-1 podman[231001]: 2025-10-02 12:05:28.145216848 +0000 UTC m=+0.118392830 container init 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute_init)
Oct 02 12:05:28 compute-1 podman[231001]: 2025-10-02 12:05:28.151836206 +0000 UTC m=+0.125012178 container start 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct 02 12:05:28 compute-1 python3.9[230960]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 02 12:05:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Applying nova statedir ownership
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 02 12:05:28 compute-1 nova_compute_init[231041]: INFO:nova_statedir:Nova statedir ownership complete
Oct 02 12:05:28 compute-1 systemd[1]: libpod-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope: Deactivated successfully.
Oct 02 12:05:28 compute-1 podman[231054]: 2025-10-02 12:05:28.238154457 +0000 UTC m=+0.022433325 container died 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251001)
Oct 02 12:05:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc-userdata-shm.mount: Deactivated successfully.
Oct 02 12:05:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-1de5b51067d290be3866bf7d1b9cc8969bb19850fc5ee7f4f47e20f0f809d37f-merged.mount: Deactivated successfully.
Oct 02 12:05:28 compute-1 sudo[230958]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:28 compute-1 podman[231054]: 2025-10-02 12:05:28.307586209 +0000 UTC m=+0.091865067 container cleanup 6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001)
Oct 02 12:05:28 compute-1 systemd[1]: libpod-conmon-6ce21e5703e43134d3d0ff907881606807c49fc0032a12b5ba846274498709dc.scope: Deactivated successfully.
Oct 02 12:05:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:05:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2433992043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.430 2 DEBUG oslo_concurrency.processutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.435 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 02 12:05:28 compute-1 nova_compute[230518]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.436 2 INFO nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] kernel doesn't support AMD SEV
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.437 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.437 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.440 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Libvirt baseline CPU <cpu>
Oct 02 12:05:28 compute-1 nova_compute[230518]:   <arch>x86_64</arch>
Oct 02 12:05:28 compute-1 nova_compute[230518]:   <model>Nehalem</model>
Oct 02 12:05:28 compute-1 nova_compute[230518]:   <vendor>AMD</vendor>
Oct 02 12:05:28 compute-1 nova_compute[230518]:   <topology sockets="8" cores="1" threads="1"/>
Oct 02 12:05:28 compute-1 nova_compute[230518]: </cpu>
Oct 02 12:05:28 compute-1 nova_compute[230518]:  _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.480 2 DEBUG nova.scheduler.client.report [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updated inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.480 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.481 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.592 2 DEBUG nova.compute.provider_tree [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.622 2 DEBUG nova.compute.resource_tracker [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.623 2 DEBUG oslo_concurrency.lockutils [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.623 2 DEBUG nova.service [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.696 2 DEBUG nova.service [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 02 12:05:28 compute-1 nova_compute[230518]: 2025-10-02 12:05:28.697 2 DEBUG nova.servicegroup.drivers.db [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:05:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2433992043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:28 compute-1 sshd-session[195332]: Connection closed by 192.168.122.30 port 35142
Oct 02 12:05:29 compute-1 sshd-session[195329]: pam_unix(sshd:session): session closed for user zuul
Oct 02 12:05:29 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Oct 02 12:05:29 compute-1 systemd[1]: session-50.scope: Consumed 2min 38.290s CPU time.
Oct 02 12:05:29 compute-1 systemd-logind[795]: Session 50 logged out. Waiting for processes to exit.
Oct 02 12:05:29 compute-1 systemd-logind[795]: Removed session 50.
Oct 02 12:05:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:29.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:30.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:30 compute-1 ceph-mon[80926]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:32.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:32 compute-1 ceph-mon[80926]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1051053204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1429792564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:05:33 compute-1 podman[231108]: 2025-10-02 12:05:33.807300909 +0000 UTC m=+0.054615688 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:05:33 compute-1 podman[231107]: 2025-10-02 12:05:33.847720379 +0000 UTC m=+0.094143950 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Oct 02 12:05:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:33.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:34 compute-1 ceph-mon[80926]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:34 compute-1 sudo[231150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:05:34 compute-1 sudo[231150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:34 compute-1 sudo[231150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:35 compute-1 sudo[231175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:05:35 compute-1 sudo[231175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:05:35 compute-1 sudo[231175]: pam_unix(sudo:session): session closed for user root
Oct 02 12:05:35 compute-1 ceph-mon[80926]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:05:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:35.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:37 compute-1 ceph-mon[80926]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:37.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:05:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6377 writes, 26K keys, 6377 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6377 writes, 1129 syncs, 5.65 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 429 writes, 666 keys, 429 commit groups, 1.0 writes per commit group, ingest: 0.21 MB, 0.00 MB/s
                                           Interval WAL: 429 writes, 198 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 12:05:39 compute-1 ceph-mon[80926]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:40.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:41 compute-1 ceph-mon[80926]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:42.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:42 compute-1 podman[231200]: 2025-10-02 12:05:42.808688959 +0000 UTC m=+0.061830804 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0)
Oct 02 12:05:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:43 compute-1 ceph-mon[80926]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:43.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:44.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:44 compute-1 nova_compute[230518]: 2025-10-02 12:05:44.698 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:44 compute-1 nova_compute[230518]: 2025-10-02 12:05:44.719 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:05:44 compute-1 podman[231221]: 2025-10-02 12:05:44.797979395 +0000 UTC m=+0.052947714 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:05:45 compute-1 ceph-mon[80926]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:46.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:47 compute-1 ceph-mon[80926]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:49 compute-1 ceph-mon[80926]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:49.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:51.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:52 compute-1 ceph-mon[80926]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:53.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:54 compute-1 ceph-mon[80926]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:56 compute-1 ceph-mon[80926]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:05:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:57.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:58 compute-1 ceph-mon[80926]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:05:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:05:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:05:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:05:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/346215333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:05:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:05:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:05:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:05:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:05:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:05:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:00 compute-1 ceph-mon[80926]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/391542199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:06:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2459773295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:06:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:01.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:02 compute-1 ceph-mon[80926]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:03.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:04.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:04 compute-1 ceph-mon[80926]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:04 compute-1 podman[231243]: 2025-10-02 12:06:04.812188823 +0000 UTC m=+0.056102181 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:06:04 compute-1 podman[231242]: 2025-10-02 12:06:04.842605101 +0000 UTC m=+0.088663597 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 02 12:06:05 compute-1 ceph-mon[80926]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:06.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:07 compute-1 ceph-mon[80926]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:08.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:09 compute-1 ceph-mon[80926]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:09.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:06:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3502 writes, 18K keys, 3502 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 3502 writes, 3502 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1367 writes, 6899 keys, 1367 commit groups, 1.0 writes per commit group, ingest: 14.87 MB, 0.02 MB/s
                                           Interval WAL: 1368 writes, 1368 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    109.0      0.19              0.06         9    0.021       0      0       0.0       0.0
                                             L6      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    123.2    102.4      0.66              0.19         8    0.082     35K   4308       0.0       0.0
                                            Sum      1/0    7.40 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     95.2    103.9      0.85              0.25        17    0.050     35K   4308       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5     93.5     93.6      0.56              0.17        10    0.056     23K   3040       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    123.2    102.4      0.66              0.19         8    0.082     35K   4308       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.2      0.19              0.06         8    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.021, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.09 GB write, 0.07 MB/s write, 0.08 GB read, 0.07 MB/s read, 0.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 308.00 MB usage: 4.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(256,4.35 MB,1.4116%) FilterBlock(17,106.61 KB,0.0338022%) IndexBlock(17,212.30 KB,0.0673121%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:06:11 compute-1 ceph-mon[80926]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:12.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:13 compute-1 ceph-mon[80926]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:13 compute-1 podman[231285]: 2025-10-02 12:06:13.807440811 +0000 UTC m=+0.058475735 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:06:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:15 compute-1 ceph-mon[80926]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:15 compute-1 podman[231305]: 2025-10-02 12:06:15.802108942 +0000 UTC m=+0.057027439 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:06:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:17 compute-1 ceph-mon[80926]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:18.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:19 compute-1 ceph-mon[80926]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:21 compute-1 ceph-mon[80926]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:21.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:23 compute-1 ceph-mon[80926]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:23.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:25 compute-1 ceph-mon[80926]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3315944606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.906 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:06:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.084 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.153 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:26.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2515429145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.591 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.725 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.726 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5301MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.726 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.727 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.809 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.810 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:06:26 compute-1 nova_compute[230518]: 2025-10-02 12:06:26.833 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:06:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1921985533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2515429145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:06:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1963014453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:27 compute-1 nova_compute[230518]: 2025-10-02 12:06:27.282 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:06:27 compute-1 nova_compute[230518]: 2025-10-02 12:06:27.288 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:06:27 compute-1 nova_compute[230518]: 2025-10-02 12:06:27.303 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:06:27 compute-1 nova_compute[230518]: 2025-10-02 12:06:27.305 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:06:27 compute-1 nova_compute[230518]: 2025-10-02 12:06:27.305 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:06:27 compute-1 ceph-mon[80926]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1963014453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:28.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:29 compute-1 ceph-mon[80926]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/29750166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/6871257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:06:31 compute-1 ceph-mon[80926]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:33 compute-1 ceph-mon[80926]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:33.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:35 compute-1 sudo[231371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-1 sudo[231371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-1 sudo[231371]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-1 sudo[231408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:06:35 compute-1 sudo[231408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-1 sudo[231408]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-1 podman[231396]: 2025-10-02 12:06:35.384714968 +0000 UTC m=+0.087209842 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:06:35 compute-1 podman[231395]: 2025-10-02 12:06:35.393211226 +0000 UTC m=+0.098370594 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:06:35 compute-1 sudo[231465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:35 compute-1 sudo[231465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-1 sudo[231465]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-1 sudo[231490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:06:35 compute-1 sudo[231490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:35 compute-1 sudo[231490]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:35 compute-1 ceph-mon[80926]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:06:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:06:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:38 compute-1 ceph-mon[80926]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:06:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:06:40 compute-1 ceph-mon[80926]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:42 compute-1 ceph-mon[80926]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:44 compute-1 sudo[231547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:06:44 compute-1 ceph-mon[80926]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:06:44 compute-1 sudo[231547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:44 compute-1 sudo[231547]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:44 compute-1 sudo[231578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:06:44 compute-1 sudo[231578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:06:44 compute-1 podman[231571]: 2025-10-02 12:06:44.235098508 +0000 UTC m=+0.056420430 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 12:06:44 compute-1 sudo[231578]: pam_unix(sudo:session): session closed for user root
Oct 02 12:06:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:44.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:46.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:46 compute-1 ceph-mon[80926]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:46 compute-1 podman[231617]: 2025-10-02 12:06:46.803438867 +0000 UTC m=+0.058209316 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:06:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:48.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:48.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:48 compute-1 ceph-mon[80926]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:49 compute-1 ceph-mon[80926]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:06:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:06:51 compute-1 ceph-mon[80926]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:52.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:53 compute-1 ceph-mon[80926]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:55 compute-1 ceph-mon[80926]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:56.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:06:57 compute-1 ceph-mon[80926]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:06:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:06:58.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:06:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:06:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:06:58.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:06:59 compute-1 ceph-mon[80926]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:02 compute-1 ceph-mon[80926]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:02.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:07:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:07:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:04 compute-1 ceph-mon[80926]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:05 compute-1 podman[231638]: 2025-10-02 12:07:05.801346325 +0000 UTC m=+0.054217280 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:07:05 compute-1 podman[231637]: 2025-10-02 12:07:05.832146376 +0000 UTC m=+0.089941806 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:07:06 compute-1 ceph-mon[80926]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:07:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1643989139' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:07:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:08.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:08 compute-1 ceph-mon[80926]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:10.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:10 compute-1 ceph-mon[80926]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:12 compute-1 ceph-mon[80926]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:13 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 12:07:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:14 compute-1 ceph-mon[80926]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:14 compute-1 podman[231680]: 2025-10-02 12:07:14.810427099 +0000 UTC m=+0.065174566 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct 02 12:07:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:16.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:16 compute-1 ceph-mon[80926]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:17 compute-1 podman[231700]: 2025-10-02 12:07:17.799509314 +0000 UTC m=+0.049064208 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:07:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:18.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:18 compute-1 ceph-mon[80926]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 102 op/s
Oct 02 12:07:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:18.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:20.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:20 compute-1 ceph-mon[80926]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 12:07:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:22 compute-1 ceph-mon[80926]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Oct 02 12:07:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:24.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:24 compute-1 ceph-mon[80926]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:07:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:26.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:07:26 compute-1 ceph-mon[80926]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.298 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.316 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.331 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.332 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.332 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1281353787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.373 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.374 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.374 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1801310458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.823 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.959 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5336MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:07:27 compute-1 nova_compute[230518]: 2025-10-02 12:07:27.960 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.025 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.025 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.039 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:07:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:07:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:28.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:07:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:28.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:28 compute-1 ceph-mon[80926]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 103 KiB/s rd, 0 B/s wr, 171 op/s
Oct 02 12:07:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/787876386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1801310458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:07:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/135022925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.491 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.496 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.516 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.518 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:07:28 compute-1 nova_compute[230518]: 2025-10-02 12:07:28.518 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:07:29 compute-1 nova_compute[230518]: 2025-10-02 12:07:29.240 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:29 compute-1 nova_compute[230518]: 2025-10-02 12:07:29.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:29 compute-1 nova_compute[230518]: 2025-10-02 12:07:29.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:07:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.268 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:07:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.269 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:07:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:07:29.270 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:07:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/135022925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:30.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:30 compute-1 ceph-mon[80926]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 68 op/s
Oct 02 12:07:31 compute-1 ceph-mon[80926]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 12:07:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2958073629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1860265621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:07:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:33 compute-1 ceph-mon[80926]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Oct 02 12:07:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:36 compute-1 ceph-mon[80926]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:36 compute-1 podman[231765]: 2025-10-02 12:07:36.79001463 +0000 UTC m=+0.046368533 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:07:36 compute-1 podman[231764]: 2025-10-02 12:07:36.887235555 +0000 UTC m=+0.146621694 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 12:07:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:38 compute-1 ceph-mon[80926]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:38.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:40.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:40 compute-1 ceph-mon[80926]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:41 compute-1 ceph-mon[80926]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:42.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:42.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:43 compute-1 ceph-mon[80926]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:44.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:44 compute-1 sudo[231806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-1 sudo[231806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-1 sudo[231806]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-1 sudo[231831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:07:44 compute-1 sudo[231831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-1 sudo[231831]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.558507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864558588, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2338, "num_deletes": 251, "total_data_size": 5946111, "memory_usage": 6010416, "flush_reason": "Manual Compaction"}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864577263, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3896006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17735, "largest_seqno": 20068, "table_properties": {"data_size": 3886472, "index_size": 6092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18976, "raw_average_key_size": 20, "raw_value_size": 3867534, "raw_average_value_size": 4088, "num_data_blocks": 272, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406636, "oldest_key_time": 1759406636, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 18823 microseconds, and 7941 cpu microseconds.
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.577339) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3896006 bytes OK
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.577359) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579120) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579136) EVENT_LOG_v1 {"time_micros": 1759406864579131, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.579155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 5935794, prev total WAL file size 5935794, number of live WAL files 2.
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3804KB)], [36(7579KB)]
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864580656, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 11657781, "oldest_snapshot_seqno": -1}
Oct 02 12:07:44 compute-1 sudo[231856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:44 compute-1 sudo[231856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:44 compute-1 sudo[231856]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4440 keys, 9637537 bytes, temperature: kUnknown
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864626906, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9637537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605291, "index_size": 20040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 110845, "raw_average_key_size": 24, "raw_value_size": 9522243, "raw_average_value_size": 2144, "num_data_blocks": 832, "num_entries": 4440, "num_filter_entries": 4440, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759406864, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.627116) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9637537 bytes
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628216) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 251.7 rd, 208.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4959, records dropped: 519 output_compression: NoCompression
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.628234) EVENT_LOG_v1 {"time_micros": 1759406864628225, "job": 20, "event": "compaction_finished", "compaction_time_micros": 46320, "compaction_time_cpu_micros": 20910, "output_level": 6, "num_output_files": 1, "total_output_size": 9637537, "num_input_records": 4959, "num_output_records": 4440, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864628964, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759406864630309, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.580452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:07:44.630400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:07:44 compute-1 sudo[231881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:07:44 compute-1 sudo[231881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:45 compute-1 sudo[231881]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:45 compute-1 ceph-mon[80926]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:07:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:07:45 compute-1 podman[231938]: 2025-10-02 12:07:45.79548427 +0000 UTC m=+0.050106761 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:07:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:46.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:46.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:47 compute-1 ceph-mon[80926]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:48.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:48.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:48 compute-1 podman[231958]: 2025-10-02 12:07:48.800037813 +0000 UTC m=+0.054984245 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:07:49 compute-1 ceph-mon[80926]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:50.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:50.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:51 compute-1 ceph-mon[80926]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:52.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:52.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:52 compute-1 sudo[231980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:07:52 compute-1 sudo[231980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-1 sudo[231980]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:52 compute-1 sudo[232005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:07:52 compute-1 sudo[232005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:07:52 compute-1 sudo[232005]: pam_unix(sudo:session): session closed for user root
Oct 02 12:07:53 compute-1 ceph-mon[80926]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:07:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:54.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:55 compute-1 ceph-mon[80926]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:56.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:07:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:56.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:07:57 compute-1 ceph-mon[80926]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:07:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:07:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:07:58.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:07:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:07:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:07:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:07:59 compute-1 ceph-mon[80926]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:00.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:00.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:01 compute-1 ceph-mon[80926]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:02.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:03 compute-1 ceph-mon[80926]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:04.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:04.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:05 compute-1 ceph-mon[80926]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:08:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2705115424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:08:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:06.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:07 compute-1 podman[232031]: 2025-10-02 12:08:07.796150502 +0000 UTC m=+0.049886459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:08:07 compute-1 ceph-mon[80926]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:07 compute-1 podman[232030]: 2025-10-02 12:08:07.822649882 +0000 UTC m=+0.078935490 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:08:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:08.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:08.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:10 compute-1 ceph-mon[80926]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:10.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:12 compute-1 ceph-mon[80926]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:12.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:14.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:14 compute-1 ceph-mon[80926]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:16.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:16 compute-1 ceph-mon[80926]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:16.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:16 compute-1 podman[232075]: 2025-10-02 12:08:16.809188889 +0000 UTC m=+0.060508827 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:08:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:18.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:18 compute-1 ceph-mon[80926]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:18.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:19 compute-1 podman[232096]: 2025-10-02 12:08:19.811429937 +0000 UTC m=+0.063546533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:08:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:20.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:20 compute-1 ceph-mon[80926]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:21 compute-1 ceph-mon[80926]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:22.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:24 compute-1 ceph-mon[80926]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:24.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.907 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:08:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:26.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:26 compute-1 ceph-mon[80926]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:26.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:27 compute-1 nova_compute[230518]: 2025-10-02 12:08:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:27 compute-1 nova_compute[230518]: 2025-10-02 12:08:27.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:08:27 compute-1 ceph-mon[80926]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.187 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.187 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.188 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:28.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.224 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.225 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.226 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3330560456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.704 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.890 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.892 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5333MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.893 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:08:28 compute-1 nova_compute[230518]: 2025-10-02 12:08:28.893 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:08:29 compute-1 nova_compute[230518]: 2025-10-02 12:08:29.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:08:29 compute-1 nova_compute[230518]: 2025-10-02 12:08:29.073 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:08:29 compute-1 nova_compute[230518]: 2025-10-02 12:08:29.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:08:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2064557662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3308308964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3330560456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:08:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2031141129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:29 compute-1 nova_compute[230518]: 2025-10-02 12:08:29.572 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:08:29 compute-1 nova_compute[230518]: 2025-10-02 12:08:29.580 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:08:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:30.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:30 compute-1 ceph-mon[80926]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2031141129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:30 compute-1 nova_compute[230518]: 2025-10-02 12:08:30.585 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:08:30 compute-1 nova_compute[230518]: 2025-10-02 12:08:30.586 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:08:30 compute-1 nova_compute[230518]: 2025-10-02 12:08:30.586 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:08:31 compute-1 nova_compute[230518]: 2025-10-02 12:08:31.451 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-1 nova_compute[230518]: 2025-10-02 12:08:31.452 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:08:31 compute-1 ceph-mon[80926]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:32.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:32 compute-1 PackageKit[168653]: daemon quit
Oct 02 12:08:32 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 02 12:08:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:32.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:33 compute-1 ceph-mon[80926]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:34.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:34.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/23628137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3641892421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:08:36 compute-1 ceph-mon[80926]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:36.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:38 compute-1 ceph-mon[80926]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:38 compute-1 podman[232162]: 2025-10-02 12:08:38.84222933 +0000 UTC m=+0.083140304 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:08:38 compute-1 podman[232161]: 2025-10-02 12:08:38.84260015 +0000 UTC m=+0.085936021 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:08:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:40.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:40 compute-1 ceph-mon[80926]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:40.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:42 compute-1 ceph-mon[80926]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - - [02/Oct/2025:12:08:42.649 +0000] "GET /swift/info HTTP/1.1" 200 509 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct 02 12:08:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:43 compute-1 ceph-mon[80926]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:45 compute-1 ceph-mon[80926]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:46.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:47 compute-1 podman[232205]: 2025-10-02 12:08:47.809883389 +0000 UTC m=+0.066420023 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:08:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct 02 12:08:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:48.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:48 compute-1 ceph-mon[80926]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct 02 12:08:50 compute-1 ceph-mon[80926]: osdmap e129: 3 total, 3 up, 3 in
Oct 02 12:08:50 compute-1 ceph-mon[80926]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:08:50 compute-1 ceph-mon[80926]: osdmap e130: 3 total, 3 up, 3 in
Oct 02 12:08:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:50.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:50.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:50 compute-1 podman[232225]: 2025-10-02 12:08:50.807279375 +0000 UTC m=+0.060739884 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:08:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:52.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:52 compute-1 ceph-mon[80926]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 255 B/s wr, 0 op/s
Oct 02 12:08:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 02 12:08:53 compute-1 sudo[232245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-1 sudo[232245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-1 sudo[232245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-1 sudo[232270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:08:53 compute-1 sudo[232270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-1 sudo[232270]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-1 sudo[232295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:08:53 compute-1 sudo[232295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-1 sudo[232295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-1 sudo[232320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:08:53 compute-1 sudo[232320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:08:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct 02 12:08:53 compute-1 sudo[232320]: pam_unix(sudo:session): session closed for user root
Oct 02 12:08:53 compute-1 ceph-mon[80926]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 1.1 KiB/s wr, 6 op/s
Oct 02 12:08:53 compute-1 ceph-mon[80926]: osdmap e131: 3 total, 3 up, 3 in
Oct 02 12:08:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:08:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:08:56 compute-1 ceph-mon[80926]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 1.4 KiB/s wr, 8 op/s
Oct 02 12:08:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:56.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct 02 12:08:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:08:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:08:57 compute-1 ceph-mon[80926]: osdmap e132: 3 total, 3 up, 3 in
Oct 02 12:08:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct 02 12:08:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:08:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:08:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:08:58.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:08:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:08:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:08:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:08:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:08:58 compute-1 ceph-mon[80926]: pgmap v859: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.1 MiB/s wr, 22 op/s
Oct 02 12:08:58 compute-1 ceph-mon[80926]: osdmap e133: 3 total, 3 up, 3 in
Oct 02 12:09:00 compute-1 ceph-mon[80926]: pgmap v861: 305 pgs: 305 active+clean; 25 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.1 MiB/s wr, 44 op/s
Oct 02 12:09:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:02.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:02 compute-1 ceph-mon[80926]: pgmap v862: 305 pgs: 305 active+clean; 33 MiB data, 181 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.8 MiB/s wr, 40 op/s
Oct 02 12:09:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:02.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:04 compute-1 ceph-mon[80926]: pgmap v863: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 40 op/s
Oct 02 12:09:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:04.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:09:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2844025141' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:09:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:06.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:06.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:06 compute-1 ceph-mon[80926]: pgmap v864: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 4.0 MiB/s wr, 27 op/s
Oct 02 12:09:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:08 compute-1 ceph-mon[80926]: pgmap v865: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 3.3 MiB/s wr, 22 op/s
Oct 02 12:09:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:09 compute-1 sudo[232378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:09:09 compute-1 sudo[232378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:09 compute-1 sudo[232378]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:09 compute-1 sudo[232415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:09:09 compute-1 sudo[232415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:09:09 compute-1 sudo[232415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:09:09 compute-1 podman[232403]: 2025-10-02 12:09:09.703327472 +0000 UTC m=+0.073847249 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:09:09 compute-1 podman[232402]: 2025-10-02 12:09:09.703356944 +0000 UTC m=+0.078710533 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller)
Oct 02 12:09:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:10 compute-1 ceph-mon[80926]: pgmap v866: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 1.5 MiB/s wr, 5 op/s
Oct 02 12:09:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:09:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:11 compute-1 ceph-mon[80926]: pgmap v867: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 1.3 MiB/s wr, 4 op/s
Oct 02 12:09:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:12.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:13.701 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:09:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:13.702 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:09:14 compute-1 ceph-mon[80926]: pgmap v868: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 683 KiB/s wr, 4 op/s
Oct 02 12:09:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:09:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:09:16 compute-1 ceph-mon[80926]: pgmap v869: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:18 compute-1 ceph-mon[80926]: pgmap v870: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:18.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:18 compute-1 podman[232470]: 2025-10-02 12:09:18.805044727 +0000 UTC m=+0.058381490 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 12:09:20 compute-1 ceph-mon[80926]: pgmap v871: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4267939077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:20.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:20.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:21 compute-1 podman[232490]: 2025-10-02 12:09:21.810948732 +0000 UTC m=+0.062167029 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:09:22 compute-1 ceph-mon[80926]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:09:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:22.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:22.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:23.704 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:09:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct 02 12:09:24 compute-1 ceph-mon[80926]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1 op/s
Oct 02 12:09:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:24.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:24.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:25 compute-1 ceph-mon[80926]: osdmap e134: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct 02 12:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.908 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:09:25.909 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:26.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:26 compute-1 ceph-mon[80926]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1 op/s
Oct 02 12:09:26 compute-1 ceph-mon[80926]: osdmap e135: 3 total, 3 up, 3 in
Oct 02 12:09:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1302507799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:26.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:27 compute-1 nova_compute[230518]: 2025-10-02 12:09:27.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:27 compute-1 nova_compute[230518]: 2025-10-02 12:09:27.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:27 compute-1 nova_compute[230518]: 2025-10-02 12:09:27.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:09:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4019412554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:09:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:28 compute-1 nova_compute[230518]: 2025-10-02 12:09:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:28 compute-1 ceph-mon[80926]: pgmap v877: 305 pgs: 305 active+clean; 51 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 218 KiB/s wr, 14 op/s
Oct 02 12:09:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2150631146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:28.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.093 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.094 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1708688043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1331342296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.657 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.658 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5332MB free_disk=20.986618041992188GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.658 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.659 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:09:29 compute-1 nova_compute[230518]: 2025-10-02 12:09:29.755 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1664810939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-1 nova_compute[230518]: 2025-10-02 12:09:30.175 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:30 compute-1 nova_compute[230518]: 2025-10-02 12:09:30.180 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:30 compute-1 nova_compute[230518]: 2025-10-02 12:09:30.204 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:30 compute-1 nova_compute[230518]: 2025-10-02 12:09:30.206 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:09:30 compute-1 nova_compute[230518]: 2025-10-02 12:09:30.206 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:30.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:30.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:30 compute-1 ceph-mon[80926]: pgmap v878: 305 pgs: 305 active+clean; 59 MiB data, 196 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 551 KiB/s wr, 16 op/s
Oct 02 12:09:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1708688043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/44836204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1664810939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.202 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.202 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.203 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.203 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.221 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-1 nova_compute[230518]: 2025-10-02 12:09:31.222 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:09:31 compute-1 ceph-mon[80926]: pgmap v879: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 68 op/s
Oct 02 12:09:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:09:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:32.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:09:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:34 compute-1 ceph-mon[80926]: pgmap v880: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 84 op/s
Oct 02 12:09:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:34.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2118983961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.579 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.580 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.610 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.685 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.685 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.693 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.694 2 INFO nova.compute.claims [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:09:35 compute-1 nova_compute[230518]: 2025-10-02 12:09:35.881 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1063715503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.304 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.309 2 DEBUG nova.compute.provider_tree [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:09:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:36.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.342 2 DEBUG nova.scheduler.client.report [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:09:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.385 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.386 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.459 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.460 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.499 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:09:36 compute-1 ceph-mon[80926]: pgmap v881: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:09:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2459032079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/791544486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1063715503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1301256854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:36 compute-1 ceph-mon[80926]: osdmap e136: 3 total, 3 up, 3 in
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.541 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:09:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:36.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.695 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.696 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.697 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating image(s)
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.721 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.743 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.766 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.769 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:36 compute-1 nova_compute[230518]: 2025-10-02 12:09:36.770 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:37 compute-1 nova_compute[230518]: 2025-10-02 12:09:37.550 2 DEBUG nova.virt.libvirt.imagebackend [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/423b8b5f-aab8-418b-8fad-d82c90818bdd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:09:37 compute-1 ceph-mon[80926]: pgmap v883: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Oct 02 12:09:37 compute-1 nova_compute[230518]: 2025-10-02 12:09:37.909 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Automatically allocating a network for project fa15236c63df4c43bf19989029fcda0f. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Oct 02 12:09:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:39 compute-1 ceph-mon[80926]: pgmap v884: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Oct 02 12:09:39 compute-1 podman[232631]: 2025-10-02 12:09:39.797259628 +0000 UTC m=+0.045658567 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:09:39 compute-1 podman[232630]: 2025-10-02 12:09:39.82513337 +0000 UTC m=+0.075906834 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 02 12:09:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:40.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:40.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.410 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.463 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.464 2 DEBUG nova.virt.images [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] 423b8b5f-aab8-418b-8fad-d82c90818bdd was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.466 2 DEBUG nova.privsep.utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.466 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.620 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.part /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.625 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:41 compute-1 ceph-mon[80926]: pgmap v885: 305 pgs: 305 active+clean; 129 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 107 op/s
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.694 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.695 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.718 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:41 compute-1 nova_compute[230518]: 2025-10-02 12:09:41.722 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3affd040-669b-4cde-a697-00b991236a6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.116 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3affd040-669b-4cde-a697-00b991236a6c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.209 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] resizing rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:09:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.443 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'migration_context' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.470 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Ensure instance console log exists: /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.471 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:42 compute-1 nova_compute[230518]: 2025-10-02 12:09:42.472 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:43 compute-1 ceph-mon[80926]: pgmap v886: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:44.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:44.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:45 compute-1 ceph-mon[80926]: pgmap v887: 305 pgs: 305 active+clean; 142 MiB data, 242 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Oct 02 12:09:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:46.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:47 compute-1 ceph-mon[80926]: pgmap v888: 305 pgs: 305 active+clean; 226 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.2 MiB/s wr, 161 op/s
Oct 02 12:09:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:48.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.472 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.475 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.535 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:09:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:48.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.656 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.657 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.665 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.666 2 INFO nova.compute.claims [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:09:48 compute-1 nova_compute[230518]: 2025-10-02 12:09:48.901 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3418279144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.402 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.408 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.492 2 ERROR nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [req-4a88711d-a07c-4f50-b567-89b26fb3e9fc] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 730da6ce-9754-46f0-88e3-0019d056443f.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-4a88711d-a07c-4f50-b567-89b26fb3e9fc"}]}
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.528 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.595 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.595 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.651 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.724 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:09:49 compute-1 podman[232821]: 2025-10-02 12:09:49.806200105 +0000 UTC m=+0.058193574 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:09:49 compute-1 nova_compute[230518]: 2025-10-02 12:09:49.896 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:49 compute-1 ceph-mon[80926]: pgmap v889: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.4 MiB/s wr, 156 op/s
Oct 02 12:09:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3418279144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:09:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:09:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:09:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3378946537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.367 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.373 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.490 2 DEBUG nova.scheduler.client.report [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updated inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.490 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating resource provider 730da6ce-9754-46f0-88e3-0019d056443f generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.491 2 DEBUG nova.compute.provider_tree [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.538 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.539 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:09:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:50.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.625 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.626 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.674 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.710 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.868 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.869 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.870 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating image(s)
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.905 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:50 compute-1 nova_compute[230518]: 2025-10-02 12:09:50.939 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3378946537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.006 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.011 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.079 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.080 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.081 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.081 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.117 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.123 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.389 2 WARNING oslo_policy.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.390 2 WARNING oslo_policy.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.393 2 DEBUG nova.policy [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '531ddb9812364f7b9743bd02a8ed797f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2c66662015f74444b15ea4b3d8644714', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.823 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:09:51 compute-1 nova_compute[230518]: 2025-10-02 12:09:51.888 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] resizing rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:09:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:52.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:52 compute-1 ceph-mon[80926]: pgmap v890: 305 pgs: 305 active+clean; 259 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.5 MiB/s wr, 156 op/s
Oct 02 12:09:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:52.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:52 compute-1 podman[233011]: 2025-10-02 12:09:52.805257702 +0000 UTC m=+0.060703583 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.909 2 DEBUG nova.objects.instance [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'migration_context' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.933 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.934 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Ensure instance console log exists: /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.934 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:09:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.935 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:09:52 compute-1 nova_compute[230518]: 2025-10-02 12:09:52.935 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:09:53 compute-1 ceph-mon[80926]: pgmap v891: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 128 op/s
Oct 02 12:09:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:54.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:09:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:09:54 compute-1 nova_compute[230518]: 2025-10-02 12:09:54.942 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Successfully created port: 567aae3a-5019-47d2-84ba-8de1184cf4f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:09:55 compute-1 ceph-mon[80926]: pgmap v892: 305 pgs: 305 active+clean; 267 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 278 KiB/s rd, 5.7 MiB/s wr, 103 op/s
Oct 02 12:09:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:56.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:57 compute-1 ceph-mon[80926]: pgmap v893: 305 pgs: 305 active+clean; 306 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 295 KiB/s rd, 6.9 MiB/s wr, 129 op/s
Oct 02 12:09:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:09:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:09:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:09:58.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:09:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:09:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:09:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:09:58.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:09:58 compute-1 nova_compute[230518]: 2025-10-02 12:09:58.881 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Automatically allocated network: {'id': 'b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'name': 'auto_allocated_network', 'tenant_id': 'fa15236c63df4c43bf19989029fcda0f', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['227aa610-f00f-4cec-b799-1839354b34be', 'c08cc57a-142e-470f-ae51-6b2f21e9e17e'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2025-10-02T12:09:38Z', 'updated_at': '2025-10-02T12:09:57Z', 'revision_number': 4, 'project_id': 'fa15236c63df4c43bf19989029fcda0f'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Oct 02 12:09:58 compute-1 nova_compute[230518]: 2025-10-02 12:09:58.883 2 DEBUG nova.policy [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b81237ef015d48dfa022b6761d706e36', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fa15236c63df4c43bf19989029fcda0f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.272 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Successfully updated port: 567aae3a-5019-47d2-84ba-8de1184cf4f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.345 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.796 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:09:59 compute-1 ceph-mon[80926]: pgmap v894: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 3.2 MiB/s wr, 47 op/s
Oct 02 12:09:59 compute-1 nova_compute[230518]: 2025-10-02 12:09:59.978 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Successfully created port: 7bdea026-3636-4861-a8a9-fcb0a82509ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:10:00 compute-1 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG nova.compute.manager [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:00 compute-1 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG nova.compute.manager [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing instance network info cache due to event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:00 compute-1 nova_compute[230518]: 2025-10-02 12:10:00.001 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:00 compute-1 sshd-session[233052]: banner exchange: Connection from 172.210.81.91 port 57354: invalid format
Oct 02 12:10:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:00.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:10:01 compute-1 nova_compute[230518]: 2025-10-02 12:10:01.719 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Successfully updated port: 7bdea026-3636-4861-a8a9-fcb0a82509ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:10:01 compute-1 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:01 compute-1 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquired lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:01 compute-1 nova_compute[230518]: 2025-10-02 12:10:01.759 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.164 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:10:02 compute-1 ceph-mon[80926]: pgmap v895: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.248 2 DEBUG nova.network.neutron [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.281 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.282 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance network_info: |[{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.283 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.283 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.286 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start _get_guest_xml network_info=[{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.291 2 WARNING nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.297 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.299 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.302 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.303 2 DEBUG nova.virt.libvirt.host [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.304 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.304 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:09:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='718594175',id=15,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1977279530',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.305 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.306 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.307 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.308 2 DEBUG nova.virt.hardware [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.311 2 DEBUG nova.privsep.utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.312 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:02.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.505 2 DEBUG nova.compute.manager [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-changed-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.505 2 DEBUG nova.compute.manager [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Refreshing instance network info cache due to event network-changed-7bdea026-3636-4861-a8a9-fcb0a82509ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.506 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1286063853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.812 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.843 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:02 compute-1 nova_compute[230518]: 2025-10-02 12:10:02.848 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3848430570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.304 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.306 2 DEBUG nova.virt.libvirt.vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.306 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.307 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.309 2 DEBUG nova.objects.instance [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.341 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <uuid>c6cef7fd-49cb-4781-97ad-027e835dcc5c</uuid>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <name>instance-00000005</name>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-870505644</nova:name>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:10:02</nova:creationTime>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-1977279530">
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:user uuid="531ddb9812364f7b9743bd02a8ed797f">tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member</nova:user>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:project uuid="2c66662015f74444b15ea4b3d8644714">tempest-ServersWithSpecificFlavorTestJSON-957372394</nova:project>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <nova:port uuid="567aae3a-5019-47d2-84ba-8de1184cf4f0">
Oct 02 12:10:03 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <system>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="serial">c6cef7fd-49cb-4781-97ad-027e835dcc5c</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="uuid">c6cef7fd-49cb-4781-97ad-027e835dcc5c</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </system>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <os>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </os>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <features>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </features>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk">
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config">
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:37:e0:d8"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <target dev="tap567aae3a-50"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/console.log" append="off"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <video>
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </video>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:10:03 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:10:03 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:10:03 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:10:03 compute-1 nova_compute[230518]: </domain>
Oct 02 12:10:03 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Preparing to wait for external event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.343 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG nova.virt.libvirt.vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.344 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.345 2 DEBUG nova.network.os_vif_util [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.346 2 DEBUG os_vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.419 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:03 compute-1 nova_compute[230518]: 2025-10-02 12:10:03.440 2 INFO oslo.privsep.daemon [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprcfvmz_2/privsep.sock']
Oct 02 12:10:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1286063853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.161 2 INFO oslo.privsep.daemon [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.028 732 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.032 732 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.034 732 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.034 732 INFO oslo.privsep.daemon [-] privsep daemon running as pid 732
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.322 2 DEBUG nova.network.neutron [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.363 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Releasing lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.364 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance network_info: |[{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.364 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.365 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Refreshing network info cache for port 7bdea026-3636-4861-a8a9-fcb0a82509ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.369 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start _get_guest_xml network_info=[{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.373 2 WARNING nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.378 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.379 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.382 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.383 2 DEBUG nova.virt.libvirt.host [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.385 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.385 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.386 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.387 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.388 2 DEBUG nova.virt.hardware [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.391 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:04 compute-1 ceph-mon[80926]: pgmap v896: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 12:10:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3848430570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap567aae3a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.518 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap567aae3a-50, col_values=(('external_ids', {'iface-id': '567aae3a-5019-47d2-84ba-8de1184cf4f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:e0:d8', 'vm-uuid': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:04 compute-1 NetworkManager[44960]: <info>  [1759407004.5205] manager: (tap567aae3a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.530 2 INFO os_vif [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50')
Oct 02 12:10:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:04.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.618 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No VIF found with MAC fa:16:3e:37:e0:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.619 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Using config drive
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.650 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3716205795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.890 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.923 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:04 compute-1 nova_compute[230518]: 2025-10-02 12:10:04.928 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:10:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:10:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2617089515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.417 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.419 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.419 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.420 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.422 2 DEBUG nova.objects.instance [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.450 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <uuid>3affd040-669b-4cde-a697-00b991236a6c</uuid>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <name>instance-00000003</name>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:name>tempest-tempest.common.compute-instance-1858146006-2</nova:name>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:10:04</nova:creationTime>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:user uuid="b81237ef015d48dfa022b6761d706e36">tempest-AutoAllocateNetworkTest-1017519520-project-member</nova:user>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:project uuid="fa15236c63df4c43bf19989029fcda0f">tempest-AutoAllocateNetworkTest-1017519520</nova:project>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <nova:port uuid="7bdea026-3636-4861-a8a9-fcb0a82509ad">
Oct 02 12:10:05 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="fdfe:381f:8400:1::7b" ipVersion="6"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.1.0.87" ipVersion="4"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <system>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="serial">3affd040-669b-4cde-a697-00b991236a6c</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="uuid">3affd040-669b-4cde-a697-00b991236a6c</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </system>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <os>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </os>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <features>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </features>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3affd040-669b-4cde-a697-00b991236a6c_disk">
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3affd040-669b-4cde-a697-00b991236a6c_disk.config">
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:05 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:46:60:3e"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <target dev="tap7bdea026-36"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/console.log" append="off"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <video>
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </video>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:10:05 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:10:05 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:10:05 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:10:05 compute-1 nova_compute[230518]: </domain>
Oct 02 12:10:05 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Preparing to wait for external event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.452 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.453 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.453 2 DEBUG nova.virt.libvirt.vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:09:36Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.454 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.454 2 DEBUG nova.network.os_vif_util [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.455 2 DEBUG os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bdea026-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7bdea026-36, col_values=(('external_ids', {'iface-id': '7bdea026-3636-4861-a8a9-fcb0a82509ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:60:3e', 'vm-uuid': '3affd040-669b-4cde-a697-00b991236a6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:05 compute-1 NetworkManager[44960]: <info>  [1759407005.4628] manager: (tap7bdea026-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.470 2 INFO os_vif [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36')
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.476 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updated VIF entry in instance network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.477 2 DEBUG nova.network.neutron [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:05 compute-1 ceph-mon[80926]: pgmap v897: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3716205795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2184049641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:10:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2617089515' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.530 2 DEBUG oslo_concurrency.lockutils [req-bc60192d-97bf-4db1-8e6a-a2e0aa59ad1f req-7847210c-0bfd-4c3d-914c-29293e72431e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.548 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.549 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.549 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] No VIF found with MAC fa:16:3e:46:60:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.549 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Using config drive
Oct 02 12:10:05 compute-1 nova_compute[230518]: 2025-10-02 12:10:05.572 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:06.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.684 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Creating config drive at /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.689 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zxw_yai execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.707 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Creating config drive at /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.713 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog__2ued execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.812 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zxw_yai" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.844 2 DEBUG nova.storage.rbd_utils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] rbd image 3affd040-669b-4cde-a697-00b991236a6c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.851 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config 3affd040-669b-4cde-a697-00b991236a6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.871 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpog__2ued" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.909 2 DEBUG nova.storage.rbd_utils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:06 compute-1 nova_compute[230518]: 2025-10-02 12:10:06.914 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.013 2 DEBUG oslo_concurrency.processutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config 3affd040-669b-4cde-a697-00b991236a6c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.014 2 INFO nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deleting local config drive /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c/disk.config because it was imported into RBD.
Oct 02 12:10:07 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 02 12:10:07 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.091 2 DEBUG oslo_concurrency.processutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config c6cef7fd-49cb-4781-97ad-027e835dcc5c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.091 2 INFO nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deleting local config drive /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c/disk.config because it was imported into RBD.
Oct 02 12:10:07 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 02 12:10:07 compute-1 kernel: tap7bdea026-36: entered promiscuous mode
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.1449] manager: (tap7bdea026-36): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00027|binding|INFO|Claiming lport 7bdea026-3636-4861-a8a9-fcb0a82509ad for this chassis.
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00028|binding|INFO|7bdea026-3636-4861-a8a9-fcb0a82509ad: Claiming fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:07 compute-1 kernel: tap567aae3a-50: entered promiscuous mode
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.1546] manager: (tap567aae3a-50): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00029|if_status|INFO|Not updating pb chassis for 567aae3a-5019-47d2-84ba-8de1184cf4f0 now as sb is readonly
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.167 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], port_security=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.87/26 fdfe:381f:8400:1::7b/64', 'neutron:device_id': '3affd040-669b-4cde-a697-00b991236a6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7bdea026-3636-4861-a8a9-fcb0a82509ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.168 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdea026-3636-4861-a8a9-fcb0a82509ad in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 bound to our chassis
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.170 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.171 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpaltl5tih/privsep.sock']
Oct 02 12:10:07 compute-1 systemd-udevd[233349]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:07 compute-1 systemd-udevd[233350]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.2085] device (tap567aae3a-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.2095] device (tap7bdea026-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.2102] device (tap567aae3a-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:07 compute-1 NetworkManager[44960]: <info>  [1759407007.2108] device (tap7bdea026-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:07 compute-1 systemd-machined[188247]: New machine qemu-1-instance-00000003.
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00030|binding|INFO|Claiming lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 for this chassis.
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00031|binding|INFO|567aae3a-5019-47d2-84ba-8de1184cf4f0: Claiming fa:16:3e:37:e0:d8 10.100.0.9
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00032|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad ovn-installed in OVS
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00033|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad up in Southbound
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.259 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:e0:d8 10.100.0.9'], port_security=['fa:16:3e:37:e0:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=567aae3a-5019-47d2-84ba-8de1184cf4f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00034|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 up in Southbound
Oct 02 12:10:07 compute-1 ovn_controller[129257]: 2025-10-02T12:10:07Z|00035|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 ovn-installed in OVS
Oct 02 12:10:07 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000003.
Oct 02 12:10:07 compute-1 nova_compute[230518]: 2025-10-02 12:10:07.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:07 compute-1 systemd-machined[188247]: New machine qemu-2-instance-00000005.
Oct 02 12:10:07 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Oct 02 12:10:07 compute-1 ceph-mon[80926]: pgmap v898: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 26 op/s
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.911 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.912 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpaltl5tih/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.772 233418 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.776 233418 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.778 233418 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.778 233418 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233418
Oct 02 12:10:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:07.914 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d201059d-8230-49cb-ae85-289c974a3f6e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.211 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.2105086, 3affd040-669b-4cde-a697-00b991236a6c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.213 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Started (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.250 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.255 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.2121518, 3affd040-669b-4cde-a697-00b991236a6c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.255 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Paused (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.288 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.291 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.308 2 DEBUG nova.compute.manager [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.308 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.309 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.309 2 DEBUG oslo_concurrency.lockutils [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.310 2 DEBUG nova.compute.manager [req-8813b66d-677c-41e5-a7ed-61c5d7fc13d9 req-3e861485-999f-4c01-9924-a01e4daad94e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Processing event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.325 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:08.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.439 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.440 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.4391465, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.440 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Started (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.451 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.455 2 INFO nova.virt.libvirt.driver [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance spawned successfully.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.455 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.490 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.494 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.550 233418 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.550 233418 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:08.551 233418 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.601 2 DEBUG nova.compute.manager [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.601 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.602 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.603 2 DEBUG oslo_concurrency.lockutils [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.603 2 DEBUG nova.compute.manager [req-1cd820f4-72ba-4ea8-998f-073c354b7897 req-6fda91be-b0b0-4ed3-a5c4-d8bd72df44c4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Processing event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.604 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:10:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.609 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.615 2 INFO nova.virt.libvirt.driver [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance spawned successfully.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.616 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.685 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.439394, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.685 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Paused (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.690 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.691 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.692 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.692 2 DEBUG nova.virt.libvirt.driver [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.697 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.697 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.699 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.699 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.700 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.700 2 DEBUG nova.virt.libvirt.driver [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.739 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.743 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.4514744, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.743 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Resumed (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.831 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.834 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.852 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 32.16 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.853 2 DEBUG nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.902 2 INFO nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 18.03 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.903 2 DEBUG nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.903 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.904 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407008.6078126, 3affd040-669b-4cde-a697-00b991236a6c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.904 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Resumed (Lifecycle Event)
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.964 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.969 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:08 compute-1 nova_compute[230518]: 2025-10-02 12:10:08.985 2 INFO nova.compute.manager [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 33.32 seconds to build instance.
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.000 2 INFO nova.compute.manager [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 20.39 seconds to build instance.
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.002 2 DEBUG oslo_concurrency.lockutils [None req-a10f83da-d3e2-4543-a145-ad9895f50f84 b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 33.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.016 2 DEBUG oslo_concurrency.lockutils [None req-864cb455-1781-46c6-82d9-e6681e67d23e 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.127 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updated VIF entry in instance network info cache for port 7bdea026-3636-4861-a8a9-fcb0a82509ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.128 2 DEBUG nova.network.neutron [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [{"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:09 compute-1 nova_compute[230518]: 2025-10-02 12:10:09.149 2 DEBUG oslo_concurrency.lockutils [req-094a2ef0-262a-487f-9573-b72c94519ff9 req-af799a0b-67e1-4e69-a33c-0a2a76945d14 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3affd040-669b-4cde-a697-00b991236a6c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de39ce20-631b-4da3-8c1f-f1e2ebbb20ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.351 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb4aadb38-81 in ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.354 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb4aadb38-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.354 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2087b854-4568-4537-96ed-4ee0e1359e58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.359 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6f6a2c-d476-49e2-a0bd-49a5bf7c9f5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.386 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b13d56-7c83-4c21-8c42-4361acafd90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.406 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2586f1e-4d1b-4a16-8827-d0405fe0f11e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:09.409 138374 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpvfaek5j7/privsep.sock']
Oct 02 12:10:09 compute-1 ceph-mon[80926]: pgmap v899: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 0 op/s
Oct 02 12:10:09 compute-1 sshd-session[233050]: Connection closed by 172.210.81.91 port 57352 [preauth]
Oct 02 12:10:09 compute-1 sudo[233474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:09 compute-1 sudo[233474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:09 compute-1 sudo[233474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-1 podman[233499]: 2025-10-02 12:10:10.02818553 +0000 UTC m=+0.063607815 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:10:10 compute-1 sudo[233521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:10 compute-1 sudo[233521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-1 sudo[233521]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-1 podman[233498]: 2025-10-02 12:10:10.049338929 +0000 UTC m=+0.094847883 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:10:10 compute-1 sudo[233569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:10 compute-1 sudo[233569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-1 sudo[233569]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:10 compute-1 sudo[233595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:10:10 compute-1 sudo[233595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.201 138374 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.202 138374 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvfaek5j7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.048 233568 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.052 233568 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.055 233568 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.055 233568 INFO oslo.privsep.daemon [-] privsep daemon running as pid 233568
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.207 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34997fdb-7258-45d7-a607-00378d329731]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.448 2 DEBUG nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.449 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.450 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.450 2 DEBUG oslo_concurrency.lockutils [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.452 2 DEBUG nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.452 2 WARNING nova.compute.manager [req-915b3929-4056-4ea1-a951-127a84566b5d req-dd77c89b-1b26-4395-bc95-0046a7e36d3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received unexpected event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with vm_state active and task_state None.
Oct 02 12:10:10 compute-1 nova_compute[230518]: 2025-10-02 12:10:10.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:10:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:10:10 compute-1 podman[233695]: 2025-10-02 12:10:10.761927359 +0000 UTC m=+0.134952683 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.867 233568 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.867 233568 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:10.868 233568 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:10 compute-1 podman[233695]: 2025-10-02 12:10:10.869624459 +0000 UTC m=+0.242649753 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.034 2 DEBUG nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.035 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG oslo_concurrency.lockutils [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.036 2 DEBUG nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.036 2 WARNING nova.compute.manager [req-a1ece650-5f91-48a6-a401-4bcf0f4873b7 req-b2482f8e-40da-4b5f-bb32-c6ed6eb036e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received unexpected event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with vm_state active and task_state None.
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-1 sudo[233595]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.555 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[88fac29d-1a92-4341-87c8-c727163d4d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 NetworkManager[44960]: <info>  [1759407011.5650] manager: (tapb4aadb38-80): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.562 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[03266333-c53d-4547-a6b3-3ea9580cc2f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 systemd-udevd[233812]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.592 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ef223854-ecfc-4ad7-998d-72aa5e66cf8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.599 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[82b5c6e9-9408-450c-85e9-1be284542bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.622119) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011622185, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1714, "num_deletes": 251, "total_data_size": 3948955, "memory_usage": 4012552, "flush_reason": "Manual Compaction"}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 02 12:10:11 compute-1 NetworkManager[44960]: <info>  [1759407011.6257] device (tapb4aadb38-80): carrier: link connected
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011634304, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 1572669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20073, "largest_seqno": 21782, "table_properties": {"data_size": 1567204, "index_size": 2669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14134, "raw_average_key_size": 20, "raw_value_size": 1555080, "raw_average_value_size": 2263, "num_data_blocks": 120, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759406865, "oldest_key_time": 1759406865, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12214 microseconds, and 5974 cpu microseconds.
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.633 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16072fe6-1b28-4952-8ad0-cf5a2e499fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634349) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 1572669 bytes OK
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.634368) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636082) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636096) EVENT_LOG_v1 {"time_micros": 1759407011636092, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636112) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3941074, prev total WAL file size 3941074, number of live WAL files 2.
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.637057) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(1535KB)], [39(9411KB)]
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011637105, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 11210206, "oldest_snapshot_seqno": -1}
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.654 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f74d128e-1d74-4295-b5e3-792d3e6f0353]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487518, 'reachable_time': 43647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 233831, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95f30c-8c52-4872-8780-e9125a9aa6c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:b633'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487518, 'tstamp': 487518}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 233832, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 4675 keys, 8370051 bytes, temperature: kUnknown
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011683308, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8370051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8338763, "index_size": 18506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11717, "raw_key_size": 116178, "raw_average_key_size": 24, "raw_value_size": 8254017, "raw_average_value_size": 1765, "num_data_blocks": 765, "num_entries": 4675, "num_filter_entries": 4675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.683640) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8370051 bytes
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.684980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.6 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(12.5) write-amplify(5.3) OK, records in: 5127, records dropped: 452 output_compression: NoCompression
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.685002) EVENT_LOG_v1 {"time_micros": 1759407011684991, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46393, "compaction_time_cpu_micros": 17752, "output_level": 6, "num_output_files": 1, "total_output_size": 8370051, "num_input_records": 5127, "num_output_records": 4675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011685777, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.685 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0a3049-fbe3-4901-8e55-ca9d8efd5513]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb4aadb38-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:b6:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487518, 'reachable_time': 43647, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 233833, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407011687863, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.636975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:10:11.688053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.721 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa2e05c-0ebe-432a-9d8b-6af1b0cad257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ceph-mon[80926]: pgmap v900: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 852 B/s wr, 77 op/s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd2920d7-2db4-42e7-89c4-b579e223d3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.781 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.782 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.782 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4aadb38-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:11 compute-1 NetworkManager[44960]: <info>  [1759407011.7847] manager: (tapb4aadb38-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct 02 12:10:11 compute-1 kernel: tapb4aadb38-80: entered promiscuous mode
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.795 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb4aadb38-80, col_values=(('external_ids', {'iface-id': 'de74dbb2-fac5-494f-b65c-51300143a2da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:11 compute-1 ovn_controller[129257]: 2025-10-02T12:10:11Z|00036|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-1 nova_compute[230518]: 2025-10-02 12:10:11.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.814 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6382c0-8ae9-418e-9549-6c06e81180f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.816 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.pid.haproxy
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID b4aadb38-89a4-463f-b7b5-8bb4dcce7d32
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:11.819 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'env', 'PROCESS_TAG=haproxy-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b4aadb38-89a4-463f-b7b5-8bb4dcce7d32.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:11 compute-1 sudo[233841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:11 compute-1 sudo[233841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:11 compute-1 sudo[233841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:11 compute-1 sudo[233869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:10:11 compute-1 sudo[233869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:11 compute-1 sudo[233869]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:11 compute-1 sudo[233894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:11 compute-1 sudo[233894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:11 compute-1 sudo[233894]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-1 sudo[233919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:10:12 compute-1 sudo[233919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:12 compute-1 podman[233974]: 2025-10-02 12:10:12.24146317 +0000 UTC m=+0.064707049 container create 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:10:12 compute-1 systemd[1]: Started libpod-conmon-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope.
Oct 02 12:10:12 compute-1 podman[233974]: 2025-10-02 12:10:12.215878291 +0000 UTC m=+0.039122200 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:12 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:10:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887b11986ce41198ec71677adcd74a5ad1696c1e295797b15dba8f945811de2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:12 compute-1 podman[233974]: 2025-10-02 12:10:12.32293178 +0000 UTC m=+0.146175679 container init 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:10:12 compute-1 podman[233974]: 2025-10-02 12:10:12.330032435 +0000 UTC m=+0.153276314 container start 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:10:12 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : New worker (234003) forked
Oct 02 12:10:12 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : Loading success.
Oct 02 12:10:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.420 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 567aae3a-5019-47d2-84ba-8de1184cf4f0 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.447 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31a60102-5bb8-41d8-b66d-4656cf71b8f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.448 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38c94475-c1 in ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.450 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38c94475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c6da9a-efeb-401b-8ca7-d00fc7d5abd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.452 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[35dc54f7-181b-4051-b662-30d8255d985e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.474 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3a8791-8727-43bc-ab97-9da81924ac48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.487 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f052c6b3-791d-49cb-9275-41204cc8d26b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 sudo[233919]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.518 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b2b6782-b9aa-410d-933a-e7b75d4c72e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1a017c-ead3-4169-b704-05864b291bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 NetworkManager[44960]: <info>  [1759407012.5258] manager: (tap38c94475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Oct 02 12:10:12 compute-1 systemd-udevd[233828]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.559 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[eb36d33b-5963-4080-aa20-1e01a30e4e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.563 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d74466f3-468c-4b82-83fc-fc820d6616b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 NetworkManager[44960]: <info>  [1759407012.5865] device (tap38c94475-c0): carrier: link connected
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.594 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[55eb0073-fca5-44ba-95c2-d4f817c82f1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.612 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9dcab5f-9495-4ca2-ae71-b8e3b28fc0c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487614, 'reachable_time': 23804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234037, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.633 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd6cb2b-d2d2-47a5-96b2-45e835c97b86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:d299'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487614, 'tstamp': 487614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 234038, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb05cff-3d82-426e-ab49-28f5f8bbc45c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487614, 'reachable_time': 23804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 234039, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.687 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04100819-96dc-4496-b930-57036079232e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.751 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[872b2cc0-a88c-4d13-bed2-cc39cfb05720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.753 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.753 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.754 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38c94475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:12 compute-1 nova_compute[230518]: 2025-10-02 12:10:12.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:12 compute-1 NetworkManager[44960]: <info>  [1759407012.7570] manager: (tap38c94475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 02 12:10:12 compute-1 kernel: tap38c94475-c0: entered promiscuous mode
Oct 02 12:10:12 compute-1 nova_compute[230518]: 2025-10-02 12:10:12.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.761 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38c94475-c0, col_values=(('external_ids', {'iface-id': 'cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:12 compute-1 nova_compute[230518]: 2025-10-02 12:10:12.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:12 compute-1 ovn_controller[129257]: 2025-10-02T12:10:12Z|00037|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct 02 12:10:12 compute-1 nova_compute[230518]: 2025-10-02 12:10:12.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.778 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4feac2e-412f-4710-85a4-ada2b136e3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.781 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:12.783 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'env', 'PROCESS_TAG=haproxy-38c94475-c52a-421c-9bc8-95fdc649b043', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38c94475-c52a-421c-9bc8-95fdc649b043.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:13 compute-1 podman[234071]: 2025-10-02 12:10:13.245625542 +0000 UTC m=+0.112021017 container create b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:10:13 compute-1 podman[234071]: 2025-10-02 12:10:13.158757892 +0000 UTC m=+0.025153387 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:13 compute-1 systemd[1]: Started libpod-conmon-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope.
Oct 02 12:10:13 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:10:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e673779b1bd623d71ccc62dcbdfdb0383cf4987b9cb72f96d48944541a34d5b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:13 compute-1 podman[234071]: 2025-10-02 12:10:13.403092207 +0000 UTC m=+0.269487682 container init b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:10:13 compute-1 podman[234071]: 2025-10-02 12:10:13.409531831 +0000 UTC m=+0.275927306 container start b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:10:13 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : New worker (234093) forked
Oct 02 12:10:13 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : Loading success.
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4498] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/33)
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4506] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:10:13 compute-1 nova_compute[230518]: 2025-10-02 12:10:13.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4521] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/34)
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4527] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4540] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4549] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4556] device (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:10:13 compute-1 NetworkManager[44960]: <info>  [1759407013.4561] device (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 02 12:10:13 compute-1 nova_compute[230518]: 2025-10-02 12:10:13.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:13 compute-1 ovn_controller[129257]: 2025-10-02T12:10:13Z|00038|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct 02 12:10:13 compute-1 ovn_controller[129257]: 2025-10-02T12:10:13Z|00039|binding|INFO|Releasing lport de74dbb2-fac5-494f-b65c-51300143a2da from this chassis (sb_readonly=0)
Oct 02 12:10:13 compute-1 nova_compute[230518]: 2025-10-02 12:10:13.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:13 compute-1 ceph-mon[80926]: pgmap v901: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:10:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG nova.compute.manager [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG nova.compute.manager [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing instance network info cache due to event network-changed-567aae3a-5019-47d2-84ba-8de1184cf4f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.182 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.183 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:14 compute-1 nova_compute[230518]: 2025-10-02 12:10:14.183 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Refreshing network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:15 compute-1 nova_compute[230518]: 2025-10-02 12:10:15.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:15 compute-1 ceph-mon[80926]: pgmap v902: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 135 op/s
Oct 02 12:10:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2095754142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1511910208' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:16 compute-1 nova_compute[230518]: 2025-10-02 12:10:16.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:16.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:16.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct 02 12:10:17 compute-1 nova_compute[230518]: 2025-10-02 12:10:17.205 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updated VIF entry in instance network info cache for port 567aae3a-5019-47d2-84ba-8de1184cf4f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:17 compute-1 nova_compute[230518]: 2025-10-02 12:10:17.206 2 DEBUG nova.network.neutron [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [{"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:17 compute-1 nova_compute[230518]: 2025-10-02 12:10:17.226 2 DEBUG oslo_concurrency.lockutils [req-ac668f71-ae88-4342-b981-312038978b54 req-3001bb5e-65fd-4ae8-bfa3-0df0e011e6ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c6cef7fd-49cb-4781-97ad-027e835dcc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:18 compute-1 ceph-mon[80926]: pgmap v903: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 147 op/s
Oct 02 12:10:18 compute-1 ceph-mon[80926]: osdmap e137: 3 total, 3 up, 3 in
Oct 02 12:10:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1883701792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct 02 12:10:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2438634577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:19 compute-1 ceph-mon[80926]: osdmap e138: 3 total, 3 up, 3 in
Oct 02 12:10:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:19.423 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:19 compute-1 nova_compute[230518]: 2025-10-02 12:10:19.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:19.427 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:10:20 compute-1 ceph-mon[80926]: pgmap v906: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 54 KiB/s wr, 106 op/s
Oct 02 12:10:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:20 compute-1 nova_compute[230518]: 2025-10-02 12:10:20.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:20 compute-1 podman[234103]: 2025-10-02 12:10:20.816436442 +0000 UTC m=+0.068698241 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:10:20 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 12:10:21 compute-1 nova_compute[230518]: 2025-10-02 12:10:21.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:22 compute-1 sudo[234122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:10:22 compute-1 sudo[234122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:22 compute-1 sudo[234122]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:22 compute-1 sudo[234147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:10:22 compute-1 sudo[234147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:10:22 compute-1 sudo[234147]: pam_unix(sudo:session): session closed for user root
Oct 02 12:10:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:22.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:22 compute-1 ceph-mon[80926]: pgmap v907: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 602 KiB/s rd, 33 KiB/s wr, 37 op/s
Oct 02 12:10:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:10:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:23 compute-1 ceph-mon[80926]: pgmap v908: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 90 op/s
Oct 02 12:10:23 compute-1 podman[234173]: 2025-10-02 12:10:23.805089057 +0000 UTC m=+0.058988866 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:10:23 compute-1 ovn_controller[129257]: 2025-10-02T12:10:23Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:60:3e 10.1.0.87
Oct 02 12:10:24 compute-1 ovn_controller[129257]: 2025-10-02T12:10:24Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:60:3e 10.1.0.87
Oct 02 12:10:24 compute-1 ovn_controller[129257]: 2025-10-02T12:10:24Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:e0:d8 10.100.0.9
Oct 02 12:10:24 compute-1 ovn_controller[129257]: 2025-10-02T12:10:24Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:e0:d8 10.100.0.9
Oct 02 12:10:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:24.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:25 compute-1 nova_compute[230518]: 2025-10-02 12:10:25.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.909 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:26 compute-1 ceph-mon[80926]: pgmap v909: 305 pgs: 305 active+clean; 327 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 653 KiB/s rd, 2.3 MiB/s wr, 72 op/s
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.091 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:26 compute-1 nova_compute[230518]: 2025-10-02 12:10:26.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:26.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct 02 12:10:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:27 compute-1 nova_compute[230518]: 2025-10-02 12:10:27.116 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:27 compute-1 nova_compute[230518]: 2025-10-02 12:10:27.117 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:10:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:27.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:27 compute-1 ceph-mon[80926]: osdmap e139: 3 total, 3 up, 3 in
Oct 02 12:10:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:28.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:28 compute-1 ceph-mon[80926]: pgmap v911: 305 pgs: 305 active+clean; 362 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.3 MiB/s wr, 359 op/s
Oct 02 12:10:29 compute-1 nova_compute[230518]: 2025-10-02 12:10:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:29 compute-1 nova_compute[230518]: 2025-10-02 12:10:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:30 compute-1 ceph-mon[80926]: pgmap v912: 305 pgs: 305 active+clean; 372 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Oct 02 12:10:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/20603787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:30 compute-1 nova_compute[230518]: 2025-10-02 12:10:30.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:30.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.102 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.103 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2922378935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/204562109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/100158935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3606299484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:31 compute-1 nova_compute[230518]: 2025-10-02 12:10:31.589 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.180 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.181 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.186 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.186 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.211 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.212 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.213 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.214 2 INFO nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Terminating instance
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.215 2 DEBUG nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:10:32 compute-1 kernel: tap567aae3a-50 (unregistering): left promiscuous mode
Oct 02 12:10:32 compute-1 NetworkManager[44960]: <info>  [1759407032.2708] device (tap567aae3a-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:10:32 compute-1 ovn_controller[129257]: 2025-10-02T12:10:32Z|00040|binding|INFO|Releasing lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 from this chassis (sb_readonly=0)
Oct 02 12:10:32 compute-1 ovn_controller[129257]: 2025-10-02T12:10:32Z|00041|binding|INFO|Setting lport 567aae3a-5019-47d2-84ba-8de1184cf4f0 down in Southbound
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 ovn_controller[129257]: 2025-10-02T12:10:32Z|00042|binding|INFO|Removing iface tap567aae3a-50 ovn-installed in OVS
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.294 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:e0:d8 10.100.0.9'], port_security=['fa:16:3e:37:e0:d8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'c6cef7fd-49cb-4781-97ad-027e835dcc5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=567aae3a-5019-47d2-84ba-8de1184cf4f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.295 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 567aae3a-5019-47d2-84ba-8de1184cf4f0 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.297 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38c94475-c52a-421c-9bc8-95fdc649b043, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.298 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d95f3c-e5d1-43b8-8025-57c4d5201629]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.298 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace which is not needed anymore
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct 02 12:10:32 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 14.572s CPU time.
Oct 02 12:10:32 compute-1 systemd-machined[188247]: Machine qemu-2-instance-00000005 terminated.
Oct 02 12:10:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:32.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.415 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4640MB free_disk=20.810245513916016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.417 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:32 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : haproxy version is 2.8.14-c23fe91
Oct 02 12:10:32 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [NOTICE]   (234091) : path to executable is /usr/sbin/haproxy
Oct 02 12:10:32 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [ALERT]    (234091) : Current worker (234093) exited with code 143 (Terminated)
Oct 02 12:10:32 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[234087]: [WARNING]  (234091) : All workers exited. Exiting... (0)
Oct 02 12:10:32 compute-1 systemd[1]: libpod-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope: Deactivated successfully.
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.450 2 INFO nova.virt.libvirt.driver [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Instance destroyed successfully.
Oct 02 12:10:32 compute-1 podman[234241]: 2025-10-02 12:10:32.451105124 +0000 UTC m=+0.050670175 container died b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.450 2 DEBUG nova.objects.instance [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'resources' on Instance uuid c6cef7fd-49cb-4781-97ad-027e835dcc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89-userdata-shm.mount: Deactivated successfully.
Oct 02 12:10:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-1e673779b1bd623d71ccc62dcbdfdb0383cf4987b9cb72f96d48944541a34d5b-merged.mount: Deactivated successfully.
Oct 02 12:10:32 compute-1 ceph-mon[80926]: pgmap v913: 305 pgs: 305 active+clean; 341 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 330 op/s
Oct 02 12:10:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3606299484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1286005547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:32 compute-1 podman[234241]: 2025-10-02 12:10:32.51232368 +0000 UTC m=+0.111888731 container cleanup b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:10:32 compute-1 systemd[1]: libpod-conmon-b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89.scope: Deactivated successfully.
Oct 02 12:10:32 compute-1 podman[234281]: 2025-10-02 12:10:32.581331231 +0000 UTC m=+0.046667138 container remove b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[742639d3-1e00-4a25-a705-e5abc4821fa4]: (4, ('Thu Oct  2 12:10:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89)\nb1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89\nThu Oct  2 12:10:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (b1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89)\nb1a69f99a134a660a0d3dcf0dec3b682d2bca01ffe2fbc8351acc8cc93599f89\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.588 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0602919-7938-42d8-9fd6-1758588cd8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.589 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 kernel: tap38c94475-c0: left promiscuous mode
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.613 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae35a6-5992-4706-981b-272e0a96e8f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:32.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.634 2 DEBUG nova.virt.libvirt.vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-870505644',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-870505644',id=5,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-e4cgmrkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=c6cef7fd-49cb-4781-97ad-027e835dcc5c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.635 2 DEBUG nova.network.os_vif_util [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "address": "fa:16:3e:37:e0:d8", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap567aae3a-50", "ovs_interfaceid": "567aae3a-5019-47d2-84ba-8de1184cf4f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.636 2 DEBUG nova.network.os_vif_util [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.636 2 DEBUG os_vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap567aae3a-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.639 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7ffb7a-f4b8-454b-8c63-cba6c2b55020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.640 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c85bb8b-cbe2-4903-a671-1b2225d41b05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:32 compute-1 nova_compute[230518]: 2025-10-02 12:10:32.645 2 INFO os_vif [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:37:e0:d8,bridge_name='br-int',has_traffic_filtering=True,id=567aae3a-5019-47d2-84ba-8de1184cf4f0,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap567aae3a-50')
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.656 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3956a173-a554-4de9-b9e9-648d23ec495a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487607, 'reachable_time': 32466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234299, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 systemd[1]: run-netns-ovnmeta\x2d38c94475\x2dc52a\x2d421c\x2d9bc8\x2d95fdc649b043.mount: Deactivated successfully.
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.671 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:10:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:32.671 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[35e1e3e9-93ab-47ec-91fd-6dffa0ab3269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.114 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG oslo_concurrency.lockutils [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.115 2 DEBUG nova.compute.manager [req-ab1e64b6-eb26-4e40-a948-3d3f91b6da17 req-664e81fb-f7ae-4044-b825-89206e625c63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-unplugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3affd040-669b-4cde-a697-00b991236a6c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c6cef7fd-49cb-4781-97ad-027e835dcc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.228 2 INFO nova.virt.libvirt.driver [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deleting instance files /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c_del
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.228 2 INFO nova.virt.libvirt.driver [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deletion of /var/lib/nova/instances/c6cef7fd-49cb-4781-97ad-027e835dcc5c_del complete
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.315 2 DEBUG nova.virt.libvirt.host [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.316 2 INFO nova.virt.libvirt.host [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] UEFI support detected
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.317 2 INFO nova.compute.manager [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 1.10 seconds to destroy the instance on the hypervisor.
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG oslo.service.loopingcall [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.318 2 DEBUG nova.network.neutron [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.610 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.712 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.713 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.714 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.715 2 INFO nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Terminating instance
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.716 2 DEBUG nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:10:33 compute-1 ceph-mon[80926]: pgmap v914: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:33 compute-1 kernel: tap7bdea026-36 (unregistering): left promiscuous mode
Oct 02 12:10:33 compute-1 NetworkManager[44960]: <info>  [1759407033.8278] device (tap7bdea026-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:10:33 compute-1 ovn_controller[129257]: 2025-10-02T12:10:33Z|00043|binding|INFO|Releasing lport 7bdea026-3636-4861-a8a9-fcb0a82509ad from this chassis (sb_readonly=0)
Oct 02 12:10:33 compute-1 ovn_controller[129257]: 2025-10-02T12:10:33Z|00044|binding|INFO|Setting lport 7bdea026-3636-4861-a8a9-fcb0a82509ad down in Southbound
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-1 ovn_controller[129257]: 2025-10-02T12:10:33Z|00045|binding|INFO|Removing iface tap7bdea026-36 ovn-installed in OVS
Oct 02 12:10:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.850 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], port_security=['fa:16:3e:46:60:3e 10.1.0.87 fdfe:381f:8400:1::7b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.87/26 fdfe:381f:8400:1::7b/64', 'neutron:device_id': '3affd040-669b-4cde-a697-00b991236a6c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa15236c63df4c43bf19989029fcda0f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e3feb76-9212-430e-bcfa-0b85f7aedc4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1382d266-669c-46c5-981d-23fbe67f9508, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7bdea026-3636-4861-a8a9-fcb0a82509ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7bdea026-3636-4861-a8a9-fcb0a82509ad in datapath b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 unbound from our chassis
Oct 02 12:10:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.866 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:10:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[669a53f3-0eec-456f-9cae-0eba1d88c11b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:33.868 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 namespace which is not needed anymore
Oct 02 12:10:33 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 02 12:10:33 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000003.scope: Consumed 15.391s CPU time.
Oct 02 12:10:33 compute-1 systemd-machined[188247]: Machine qemu-1-instance-00000003 terminated.
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.951 2 INFO nova.virt.libvirt.driver [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Instance destroyed successfully.
Oct 02 12:10:33 compute-1 nova_compute[230518]: 2025-10-02 12:10:33.952 2 DEBUG nova.objects.instance [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lazy-loading 'resources' on Instance uuid 3affd040-669b-4cde-a697-00b991236a6c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.052 2 DEBUG nova.virt.libvirt.vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:09:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1858146006-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1858146006-2',id=3,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:10:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fa15236c63df4c43bf19989029fcda0f',ramdisk_id='',reservation_id='r-b33bb0il',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project
_name='tempest-AutoAllocateNetworkTest-1017519520',owner_user_name='tempest-AutoAllocateNetworkTest-1017519520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:08Z,user_data=None,user_id='b81237ef015d48dfa022b6761d706e36',uuid=3affd040-669b-4cde-a697-00b991236a6c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.052 2 DEBUG nova.network.os_vif_util [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converting VIF {"id": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "address": "fa:16:3e:46:60:3e", "network": {"id": "b4aadb38-89a4-463f-b7b5-8bb4dcce7d32", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::7b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.87", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa15236c63df4c43bf19989029fcda0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7bdea026-36", "ovs_interfaceid": "7bdea026-3636-4861-a8a9-fcb0a82509ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.053 2 DEBUG nova.network.os_vif_util [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.053 2 DEBUG os_vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bdea026-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2268975107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.061 2 INFO os_vif [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:60:3e,bridge_name='br-int',has_traffic_filtering=True,id=7bdea026-3636-4861-a8a9-fcb0a82509ad,network=Network(b4aadb38-89a4-463f-b7b5-8bb4dcce7d32),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7bdea026-36')
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.082 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.130 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : haproxy version is 2.8.14-c23fe91
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [NOTICE]   (234001) : path to executable is /usr/sbin/haproxy
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : Exiting Master process...
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : Exiting Master process...
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [ALERT]    (234001) : Current worker (234003) exited with code 143 (Terminated)
Oct 02 12:10:34 compute-1 neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32[233994]: [WARNING]  (234001) : All workers exited. Exiting... (0)
Oct 02 12:10:34 compute-1 systemd[1]: libpod-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope: Deactivated successfully.
Oct 02 12:10:34 compute-1 podman[234371]: 2025-10-02 12:10:34.195671864 +0000 UTC m=+0.244852513 container died 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.245 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.245 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.293 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.294 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG oslo_concurrency.lockutils [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.295 2 DEBUG nova.compute.manager [req-b0cd5f73-4992-4f49-a3fd-07dee196fc4b req-1fa4e345-8962-43c6-bf27-bf2c25df3614 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-unplugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:10:34 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8-userdata-shm.mount: Deactivated successfully.
Oct 02 12:10:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-887b11986ce41198ec71677adcd74a5ad1696c1e295797b15dba8f945811de2a-merged.mount: Deactivated successfully.
Oct 02 12:10:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:34.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:34 compute-1 podman[234371]: 2025-10-02 12:10:34.685074409 +0000 UTC m=+0.734255058 container cleanup 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:10:34 compute-1 systemd[1]: libpod-conmon-27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8.scope: Deactivated successfully.
Oct 02 12:10:34 compute-1 podman[234423]: 2025-10-02 12:10:34.794030706 +0000 UTC m=+0.089833977 container remove 27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.801 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2746083-a9df-47dc-aabd-34f56df1a2ea]: (4, ('Thu Oct  2 12:10:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8)\n27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8\nThu Oct  2 12:10:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 (27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8)\n27d9e7ada72816b72b0daa1521e7751bb3bd97af77964b21adcf7d8aaea36be8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e95e319-347f-4f2c-b194-dd850feef900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.804 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4aadb38-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:34 compute-1 kernel: tapb4aadb38-80: left promiscuous mode
Oct 02 12:10:34 compute-1 nova_compute[230518]: 2025-10-02 12:10:34.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.824 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0db048-2212-4b92-bb5d-6b5111fb1391]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.853 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[514383d8-2c0d-4afd-a6b3-5d7100932992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b69311-d288-4cec-9bcb-7b84e36db0da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2268975107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95bd7105-78eb-4235-ab79-60a3472d266f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487510, 'reachable_time': 31960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 234438, 'error': None, 'target': 'ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.874 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b4aadb38-89a4-463f-b7b5-8bb4dcce7d32 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:10:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:34.874 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b15dc31c-457f-456b-8082-62d0ca373ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:34 compute-1 systemd[1]: run-netns-ovnmeta\x2db4aadb38\x2d89a4\x2d463f\x2db7b5\x2d8bb4dcce7d32.mount: Deactivated successfully.
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.240 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.241 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.246 2 INFO nova.virt.libvirt.driver [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deleting instance files /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c_del
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.246 2 INFO nova.virt.libvirt.driver [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deletion of /var/lib/nova/instances/3affd040-669b-4cde-a697-00b991236a6c_del complete
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.293 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.293 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.294 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.294 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.295 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.366 2 INFO nova.compute.manager [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 1.65 seconds to destroy the instance on the hypervisor.
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG oslo.service.loopingcall [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.367 2 DEBUG nova.network.neutron [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.493 2 DEBUG nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.494 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.494 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.495 2 DEBUG oslo_concurrency.lockutils [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.495 2 DEBUG nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] No waiting events found dispatching network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.495 2 WARNING nova.compute.manager [req-39648667-748d-44fb-8639-6cca81fd66dc req-ffa9fd66-d7f9-4a23-900c-9169fb3d2a80 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received unexpected event network-vif-plugged-567aae3a-5019-47d2-84ba-8de1184cf4f0 for instance with vm_state active and task_state deleting.
Oct 02 12:10:35 compute-1 ceph-mon[80926]: pgmap v915: 305 pgs: 305 active+clean; 337 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.922 2 DEBUG nova.network.neutron [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:35 compute-1 nova_compute[230518]: 2025-10-02 12:10:35.946 2 INFO nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Took 2.63 seconds to deallocate network for instance.
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.127 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.128 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.211 2 DEBUG oslo_concurrency.processutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.394 2 DEBUG nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3affd040-669b-4cde-a697-00b991236a6c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG oslo_concurrency.lockutils [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.395 2 DEBUG nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] No waiting events found dispatching network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.396 2 WARNING nova.compute.manager [req-dc408017-3f58-4004-a1fa-fd10e5f4902d req-722f1468-5120-45de-a931-9c5bc1f7788f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received unexpected event network-vif-plugged-7bdea026-3636-4861-a8a9-fcb0a82509ad for instance with vm_state active and task_state deleting.
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:36.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2230665049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.705 2 DEBUG oslo_concurrency.processutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.711 2 DEBUG nova.compute.provider_tree [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.748 2 DEBUG nova.scheduler.client.report [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.900 2 DEBUG nova.network.neutron [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.902 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2230665049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/46767875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.931 2 INFO nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Took 1.56 seconds to deallocate network for instance.
Oct 02 12:10:36 compute-1 nova_compute[230518]: 2025-10-02 12:10:36.934 2 INFO nova.scheduler.client.report [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Deleted allocations for instance c6cef7fd-49cb-4781-97ad-027e835dcc5c
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.380 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.380 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.424 2 DEBUG oslo_concurrency.processutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.615 2 DEBUG oslo_concurrency.lockutils [None req-86e0b819-15b6-426d-9d64-00e88d9eb100 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "c6cef7fd-49cb-4781-97ad-027e835dcc5c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1891091328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.841 2 DEBUG oslo_concurrency.processutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:37 compute-1 nova_compute[230518]: 2025-10-02 12:10:37.847 2 DEBUG nova.compute.provider_tree [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:38 compute-1 ceph-mon[80926]: pgmap v916: 305 pgs: 305 active+clean; 312 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 749 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 12:10:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1891091328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:38 compute-1 nova_compute[230518]: 2025-10-02 12:10:38.402 2 DEBUG nova.compute.manager [req-95957d16-960a-4120-8313-bb921247d74f req-c1e2d8e5-5f34-4a75-91e0-754905d6bb9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Received event network-vif-deleted-7bdea026-3636-4861-a8a9-fcb0a82509ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:10:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:10:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:38.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:38 compute-1 nova_compute[230518]: 2025-10-02 12:10:38.893 2 DEBUG nova.scheduler.client.report [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:38 compute-1 nova_compute[230518]: 2025-10-02 12:10:38.922 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:38 compute-1 nova_compute[230518]: 2025-10-02 12:10:38.927 2 DEBUG nova.compute.manager [req-25178c5a-f41f-4eda-b873-664d9a62c9cf req-1945fc10-f65e-4ad0-9106-a678caa455b2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Received event network-vif-deleted-567aae3a-5019-47d2-84ba-8de1184cf4f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:38 compute-1 nova_compute[230518]: 2025-10-02 12:10:38.970 2 INFO nova.scheduler.client.report [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Deleted allocations for instance 3affd040-669b-4cde-a697-00b991236a6c
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.056 2 DEBUG oslo_concurrency.lockutils [None req-36d51abe-a1c4-4909-a34b-02d41aac3e6a b81237ef015d48dfa022b6761d706e36 fa15236c63df4c43bf19989029fcda0f - - default default] Lock "3affd040-669b-4cde-a697-00b991236a6c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.730 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.731 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.753 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.845 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.846 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.853 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.854 2 INFO nova.compute.claims [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:10:39 compute-1 nova_compute[230518]: 2025-10-02 12:10:39.940 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:40 compute-1 ceph-mon[80926]: pgmap v917: 305 pgs: 305 active+clean; 260 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 663 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Oct 02 12:10:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3584573527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/103885911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4193053227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:10:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/764252114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.410 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.418 2 DEBUG nova.compute.provider_tree [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:10:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.439 2 DEBUG nova.scheduler.client.report [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.467 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.468 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.541 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.542 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.581 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:10:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:40.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.679 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:10:40 compute-1 podman[234507]: 2025-10-02 12:10:40.806671186 +0000 UTC m=+0.051681677 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:10:40 compute-1 podman[234506]: 2025-10-02 12:10:40.839469707 +0000 UTC m=+0.086618015 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.878 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.879 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.880 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating image(s)
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.906 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.931 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.961 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.967 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:40 compute-1 nova_compute[230518]: 2025-10-02 12:10:40.995 2 DEBUG nova.policy [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '531ddb9812364f7b9743bd02a8ed797f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2c66662015f74444b15ea4b3d8644714', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.037 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.038 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.039 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.039 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.069 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.074 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e582fd0b-cd3a-4903-9ed3-024359954c81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/764252114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:41 compute-1 nova_compute[230518]: 2025-10-02 12:10:41.796 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Successfully created port: 2887bff5-92aa-4c83-9902-292a59e0add5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.281 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e582fd0b-cd3a-4903-9ed3-024359954c81_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.342 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] resizing rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:10:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:42.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.500 2 DEBUG nova.objects.instance [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'migration_context' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:42 compute-1 ceph-mon[80926]: pgmap v918: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 174 op/s
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.573 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.599 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.605 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.606 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.606 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.629 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.630 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.650 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Successfully updated port: 2887bff5-92aa-4c83-9902-292a59e0add5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.666 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.682 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.683 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.714 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.718 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:42 compute-1 nova_compute[230518]: 2025-10-02 12:10:42.944 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.210 2 DEBUG nova.compute.manager [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.210 2 DEBUG nova.compute.manager [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing instance network info cache due to event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.211 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:10:43 compute-1 ceph-mon[80926]: pgmap v919: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 394 KiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.733 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.823 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Ensure instance console log exists: /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:43 compute-1 nova_compute[230518]: 2025-10-02 12:10:43.824 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:44 compute-1 nova_compute[230518]: 2025-10-02 12:10:44.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:44.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.457 2 DEBUG nova.network.neutron [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:45 compute-1 ceph-mon[80926]: pgmap v920: 305 pgs: 305 active+clean; 246 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 384 KiB/s rd, 3.4 MiB/s wr, 149 op/s
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.599 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance network_info: |[{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.600 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.604 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start _get_guest_xml network_info=[{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'size': 1, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.609 2 WARNING nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.615 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.616 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.620 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.621 2 DEBUG nova.virt.libvirt.host [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.622 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.623 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:09:32Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1268098770',id=14,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-2032995918',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.624 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.625 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.626 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.627 2 DEBUG nova.virt.hardware [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:10:45 compute-1 nova_compute[230518]: 2025-10-02 12:10:45.630 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2705683967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.079 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.080 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/946783354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.513 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.535 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:46 compute-1 nova_compute[230518]: 2025-10-02 12:10:46.539 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:46.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2705683967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/946783354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:10:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087921985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.139 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.142 2 DEBUG nova.virt.libvirt.vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.143 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.144 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.146 2 DEBUG nova.objects.instance [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'pci_devices' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.326 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <uuid>e582fd0b-cd3a-4903-9ed3-024359954c81</uuid>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <name>instance-00000007</name>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-425689073</nova:name>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:10:45</nova:creationTime>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-2032995918">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:ephemeral>1</nova:ephemeral>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:user uuid="531ddb9812364f7b9743bd02a8ed797f">tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member</nova:user>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:project uuid="2c66662015f74444b15ea4b3d8644714">tempest-ServersWithSpecificFlavorTestJSON-957372394</nova:project>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <nova:port uuid="2887bff5-92aa-4c83-9902-292a59e0add5">
Oct 02 12:10:47 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <system>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="serial">e582fd0b-cd3a-4903-9ed3-024359954c81</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="uuid">e582fd0b-cd3a-4903-9ed3-024359954c81</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </system>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <os>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </os>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <features>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </features>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk.eph0">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </source>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:10:47 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:53:c6:7c"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <target dev="tap2887bff5-92"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/console.log" append="off"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <video>
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </video>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:10:47 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:10:47 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:10:47 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:10:47 compute-1 nova_compute[230518]: </domain>
Oct 02 12:10:47 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.327 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Preparing to wait for external event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.328 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.328 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.329 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.329 2 DEBUG nova.virt.libvirt.vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:10:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.330 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.330 2 DEBUG nova.network.os_vif_util [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.331 2 DEBUG os_vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2887bff5-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2887bff5-92, col_values=(('external_ids', {'iface-id': '2887bff5-92aa-4c83-9902-292a59e0add5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:c6:7c', 'vm-uuid': 'e582fd0b-cd3a-4903-9ed3-024359954c81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:47 compute-1 NetworkManager[44960]: <info>  [1759407047.3382] manager: (tap2887bff5-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.343 2 INFO os_vif [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92')
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.448 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407032.447705, c6cef7fd-49cb-4781-97ad-027e835dcc5c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.449 2 INFO nova.compute.manager [-] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] VM Stopped (Lifecycle Event)
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.669 2 DEBUG nova.compute.manager [None req-aa432e24-adcf-4944-befb-c8a0a8a61f45 - - - - - -] [instance: c6cef7fd-49cb-4781-97ad-027e835dcc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.720 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.720 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.721 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.721 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] No VIF found with MAC fa:16:3e:53:c6:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.722 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Using config drive
Oct 02 12:10:47 compute-1 nova_compute[230518]: 2025-10-02 12:10:47.747 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:47 compute-1 ceph-mon[80926]: pgmap v921: 305 pgs: 305 active+clean; 220 MiB data, 342 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct 02 12:10:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2087921985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:10:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:48.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:48.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:48 compute-1 nova_compute[230518]: 2025-10-02 12:10:48.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407033.9453151, 3affd040-669b-4cde-a697-00b991236a6c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:48 compute-1 nova_compute[230518]: 2025-10-02 12:10:48.946 2 INFO nova.compute.manager [-] [instance: 3affd040-669b-4cde-a697-00b991236a6c] VM Stopped (Lifecycle Event)
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.037 2 DEBUG nova.compute.manager [None req-f2bc264d-d69f-4a90-bbab-9c0e78b11138 - - - - - -] [instance: 3affd040-669b-4cde-a697-00b991236a6c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.334 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Creating config drive at /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.339 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g0b7w4w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.475 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9g0b7w4w" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.501 2 DEBUG nova.storage.rbd_utils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] rbd image e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.505 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.615 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updated VIF entry in instance network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.615 2 DEBUG nova.network.neutron [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:10:49 compute-1 nova_compute[230518]: 2025-10-02 12:10:49.782 2 DEBUG oslo_concurrency.lockutils [req-843ebe39-17f6-4269-b786-ba89007fa53b req-201f5071-1357-49f8-b2da-82929bc362f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.169 2 DEBUG oslo_concurrency.processutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config e582fd0b-cd3a-4903-9ed3-024359954c81_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.169 2 INFO nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deleting local config drive /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81/disk.config because it was imported into RBD.
Oct 02 12:10:50 compute-1 kernel: tap2887bff5-92: entered promiscuous mode
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.2346] manager: (tap2887bff5-92): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct 02 12:10:50 compute-1 ovn_controller[129257]: 2025-10-02T12:10:50Z|00046|binding|INFO|Claiming lport 2887bff5-92aa-4c83-9902-292a59e0add5 for this chassis.
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 ovn_controller[129257]: 2025-10-02T12:10:50Z|00047|binding|INFO|2887bff5-92aa-4c83-9902-292a59e0add5: Claiming fa:16:3e:53:c6:7c 10.100.0.4
Oct 02 12:10:50 compute-1 ovn_controller[129257]: 2025-10-02T12:10:50Z|00048|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 ovn-installed in OVS
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 systemd-udevd[235004]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:10:50 compute-1 systemd-machined[188247]: New machine qemu-3-instance-00000007.
Oct 02 12:10:50 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.2854] device (tap2887bff5-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.2863] device (tap2887bff5-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:10:50 compute-1 ovn_controller[129257]: 2025-10-02T12:10:50Z|00049|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 up in Southbound
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.290 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c6:7c 10.100.0.4'], port_security=['fa:16:3e:53:c6:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e582fd0b-cd3a-4903-9ed3-024359954c81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2887bff5-92aa-4c83-9902-292a59e0add5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.291 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2887bff5-92aa-4c83-9902-292a59e0add5 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 bound to our chassis
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.294 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b13af214-6715-4855-a5d4-5215d7f44b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.307 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38c94475-c1 in ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.309 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38c94475-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4de73cd-4682-4182-b197-8add4c0130a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[051dcb55-dc4c-44b1-affa-1fd0586e31f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.323 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[65d073cb-93ca-4fba-b1d1-b81fe2c4e8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c28f62a-a1fc-4df3-8aae-3335085b33d3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ceph-mon[80926]: pgmap v922: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.414 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9a69ad3d-071f-4694-811b-500b4adec33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1217883-cfe4-42ce-93de-014e2262c26b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.4243] manager: (tap38c94475-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct 02 12:10:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:50.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.471 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8066e40a-f335-488e-8fc2-49e78341ff46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.478 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5451055f-a35b-4b98-a656-48c0fb371d00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.5096] device (tap38c94475-c0): carrier: link connected
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.524 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[016cbe2e-4037-43f0-a6f0-ed7ad87d5fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.543 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca519e1-6135-4603-9785-347509379b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491406, 'reachable_time': 40938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235039, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.556 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[94a43f48-b708-4c24-ab01-07f5b7dffbb1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe76:d299'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491406, 'tstamp': 491406}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 235040, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.573 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[21a1027d-0080-40ce-889d-3c4f2f8790bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38c94475-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:76:d2:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491406, 'reachable_time': 40938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 235041, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.602 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89dfd6f5-7e14-4304-8802-b2cf58495f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:50.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.674 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5eba45c1-e266-4438-ab07-54048abb2633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.677 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38c94475-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:50 compute-1 kernel: tap38c94475-c0: entered promiscuous mode
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 NetworkManager[44960]: <info>  [1759407050.6814] manager: (tap38c94475-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.683 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38c94475-c0, col_values=(('external_ids', {'iface-id': 'cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 ovn_controller[129257]: 2025-10-02T12:10:50Z|00050|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct 02 12:10:50 compute-1 nova_compute[230518]: 2025-10-02 12:10:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.700 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.701 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68287536-4eaf-45fc-b18c-25654070630c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.702 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/38c94475-c52a-421c-9bc8-95fdc649b043.pid.haproxy
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 38c94475-c52a-421c-9bc8-95fdc649b043
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:10:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:10:50.703 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'env', 'PROCESS_TAG=haproxy-38c94475-c52a-421c-9bc8-95fdc649b043', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38c94475-c52a-421c-9bc8-95fdc649b043.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:10:51 compute-1 podman[235105]: 2025-10-02 12:10:51.1539104 +0000 UTC m=+0.070123897 container create 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:10:51 compute-1 systemd[1]: Started libpod-conmon-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope.
Oct 02 12:10:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:10:51 compute-1 podman[235105]: 2025-10-02 12:10:51.11099904 +0000 UTC m=+0.027212557 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:10:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a0e6f1bf3ec86e5bc2215d1c862c93a70af47fd512dbd4ba72e6130027c198/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:10:51 compute-1 podman[235105]: 2025-10-02 12:10:51.297451124 +0000 UTC m=+0.213664651 container init 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:51 compute-1 podman[235143]: 2025-10-02 12:10:51.300464769 +0000 UTC m=+0.120228523 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:10:51 compute-1 podman[235105]: 2025-10-02 12:10:51.303760543 +0000 UTC m=+0.219974040 container start 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:10:51 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : New worker (235175) forked
Oct 02 12:10:51 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : Loading success.
Oct 02 12:10:51 compute-1 nova_compute[230518]: 2025-10-02 12:10:51.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:51 compute-1 nova_compute[230518]: 2025-10-02 12:10:51.680 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407051.6799974, e582fd0b-cd3a-4903-9ed3-024359954c81 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:51 compute-1 nova_compute[230518]: 2025-10-02 12:10:51.681 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Started (Lifecycle Event)
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.033 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.038 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407051.6800942, e582fd0b-cd3a-4903-9ed3-024359954c81 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.038 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Paused (Lifecycle Event)
Oct 02 12:10:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3951825793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.313 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.316 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:52 compute-1 nova_compute[230518]: 2025-10-02 12:10:52.569 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:52.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:53 compute-1 ceph-mon[80926]: pgmap v923: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 173 op/s
Oct 02 12:10:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:54 compute-1 ceph-mon[80926]: pgmap v924: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct 02 12:10:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:54.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:54 compute-1 podman[235185]: 2025-10-02 12:10:54.798320693 +0000 UTC m=+0.051336706 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.881 2 DEBUG nova.compute.manager [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.881 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG oslo_concurrency.lockutils [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG nova.compute.manager [req-41223464-719d-4364-8d05-6af6b2b23afe req-25b473ef-8960-4c4a-82b8-5acdb25827b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Processing event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.882 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.885 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407055.885355, e582fd0b-cd3a-4903-9ed3-024359954c81 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.885 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Resumed (Lifecycle Event)
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.887 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.890 2 INFO nova.virt.libvirt.driver [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance spawned successfully.
Oct 02 12:10:55 compute-1 nova_compute[230518]: 2025-10-02 12:10:55.890 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:10:55 compute-1 ceph-mon[80926]: pgmap v925: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.211 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.216 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.216 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.217 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.218 2 DEBUG nova.virt.libvirt.driver [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.222 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:10:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:56.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.454 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:10:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:56.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.857 2 INFO nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 15.98 seconds to spawn the instance on the hypervisor.
Oct 02 12:10:56 compute-1 nova_compute[230518]: 2025-10-02 12:10:56.858 2 DEBUG nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:10:57 compute-1 nova_compute[230518]: 2025-10-02 12:10:57.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:10:57 compute-1 nova_compute[230518]: 2025-10-02 12:10:57.426 2 INFO nova.compute.manager [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 17.60 seconds to build instance.
Oct 02 12:10:57 compute-1 nova_compute[230518]: 2025-10-02 12:10:57.505 2 DEBUG oslo_concurrency.lockutils [None req-56885e6d-7c1c-4952-b24f-d36822ff720f 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.031 2 DEBUG nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.031 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG oslo_concurrency.lockutils [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.032 2 DEBUG nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:10:58 compute-1 nova_compute[230518]: 2025-10-02 12:10:58.032 2 WARNING nova.compute.manager [req-e9fcbb3b-4e2f-4412-ad23-ecabe21b205d req-60bedcc5-e184-48b6-a476-3cd30e7f1136 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received unexpected event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with vm_state active and task_state None.
Oct 02 12:10:58 compute-1 ceph-mon[80926]: pgmap v926: 305 pgs: 305 active+clean; 234 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 168 op/s
Oct 02 12:10:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:10:58.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:10:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:10:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:10:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:10:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:00.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:11:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:11:00 compute-1 ceph-mon[80926]: pgmap v927: 305 pgs: 305 active+clean; 237 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 771 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Oct 02 12:11:00 compute-1 nova_compute[230518]: 2025-10-02 12:11:00.728 2 DEBUG nova.compute.manager [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:00 compute-1 nova_compute[230518]: 2025-10-02 12:11:00.729 2 DEBUG nova.compute.manager [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing instance network info cache due to event network-changed-2887bff5-92aa-4c83-9902-292a59e0add5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:11:00 compute-1 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:00 compute-1 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:00 compute-1 nova_compute[230518]: 2025-10-02 12:11:00.730 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Refreshing network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:11:01 compute-1 nova_compute[230518]: 2025-10-02 12:11:01.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:02 compute-1 ovn_controller[129257]: 2025-10-02T12:11:02Z|00051|binding|INFO|Releasing lport cb8aa481-1d5c-4f65-bc0c-1f1aa2cac89a from this chassis (sb_readonly=0)
Oct 02 12:11:02 compute-1 ceph-mon[80926]: pgmap v928: 305 pgs: 305 active+clean; 242 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 02 12:11:02 compute-1 nova_compute[230518]: 2025-10-02 12:11:02.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:02 compute-1 nova_compute[230518]: 2025-10-02 12:11:02.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:11:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:11:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:03 compute-1 nova_compute[230518]: 2025-10-02 12:11:03.674 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updated VIF entry in instance network info cache for port 2887bff5-92aa-4c83-9902-292a59e0add5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:11:03 compute-1 nova_compute[230518]: 2025-10-02 12:11:03.675 2 DEBUG nova.network.neutron [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [{"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:03 compute-1 nova_compute[230518]: 2025-10-02 12:11:03.694 2 DEBUG oslo_concurrency.lockutils [req-f00623cd-d2f3-4b00-9b0f-0eacaf82ddde req-ad6e3154-8566-41d3-bf45-19f60df228e8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e582fd0b-cd3a-4903-9ed3-024359954c81" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:04.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:04 compute-1 ceph-mon[80926]: pgmap v929: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:11:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:11:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:11:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1832698462' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:06 compute-1 nova_compute[230518]: 2025-10-02 12:11:06.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:06 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 02 12:11:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:06.891070) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:11:06 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 02 12:11:06 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407066891137, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 902, "num_deletes": 255, "total_data_size": 1648262, "memory_usage": 1666648, "flush_reason": "Manual Compaction"}
Oct 02 12:11:06 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067027090, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1077109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21787, "largest_seqno": 22684, "table_properties": {"data_size": 1072939, "index_size": 1822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9373, "raw_average_key_size": 18, "raw_value_size": 1064325, "raw_average_value_size": 2150, "num_data_blocks": 80, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407011, "oldest_key_time": 1759407011, "file_creation_time": 1759407066, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 136063 microseconds, and 3615 cpu microseconds.
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.027140) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1077109 bytes OK
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.027162) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081672) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081744) EVENT_LOG_v1 {"time_micros": 1759407067081731, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.081777) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1643601, prev total WAL file size 1643601, number of live WAL files 2.
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.083092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353032' seq:0, type:0; will stop at (end)
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1051KB)], [42(8173KB)]
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067083186, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 9447160, "oldest_snapshot_seqno": -1}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 4641 keys, 9303050 bytes, temperature: kUnknown
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067160659, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 9303050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9270616, "index_size": 19716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116700, "raw_average_key_size": 25, "raw_value_size": 9185165, "raw_average_value_size": 1979, "num_data_blocks": 814, "num_entries": 4641, "num_filter_entries": 4641, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407067, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.161039) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9303050 bytes
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.164529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.1 rd, 120.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.0 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(17.4) write-amplify(8.6) OK, records in: 5170, records dropped: 529 output_compression: NoCompression
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.164568) EVENT_LOG_v1 {"time_micros": 1759407067164553, "job": 24, "event": "compaction_finished", "compaction_time_micros": 77377, "compaction_time_cpu_micros": 24169, "output_level": 6, "num_output_files": 1, "total_output_size": 9303050, "num_input_records": 5170, "num_output_records": 4641, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067164938, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407067166144, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.082652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:07.166234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:07 compute-1 ceph-mon[80926]: pgmap v930: 305 pgs: 305 active+clean; 248 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Oct 02 12:11:07 compute-1 nova_compute[230518]: 2025-10-02 12:11:07.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct 02 12:11:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:08.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:08 compute-1 ceph-mon[80926]: pgmap v931: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:11:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.769708) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068769745, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 292, "num_deletes": 251, "total_data_size": 94560, "memory_usage": 100888, "flush_reason": "Manual Compaction"}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068778535, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 61781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22689, "largest_seqno": 22976, "table_properties": {"data_size": 59852, "index_size": 157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5051, "raw_average_key_size": 18, "raw_value_size": 56007, "raw_average_value_size": 204, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407067, "oldest_key_time": 1759407067, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 8881 microseconds, and 1116 cpu microseconds.
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.778585) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 61781 bytes OK
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.778605) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782485) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782508) EVENT_LOG_v1 {"time_micros": 1759407068782501, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 92397, prev total WAL file size 92397, number of live WAL files 2.
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782946) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(60KB)], [45(9085KB)]
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068783002, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 9364831, "oldest_snapshot_seqno": -1}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 4402 keys, 7343207 bytes, temperature: kUnknown
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068827981, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 7343207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7314085, "index_size": 17044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 112424, "raw_average_key_size": 25, "raw_value_size": 7234423, "raw_average_value_size": 1643, "num_data_blocks": 693, "num_entries": 4402, "num_filter_entries": 4402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.828192) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7343207 bytes
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.834111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.9 rd, 163.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(270.4) write-amplify(118.9) OK, records in: 4915, records dropped: 513 output_compression: NoCompression
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.834155) EVENT_LOG_v1 {"time_micros": 1759407068834139, "job": 26, "event": "compaction_finished", "compaction_time_micros": 45043, "compaction_time_cpu_micros": 16688, "output_level": 6, "num_output_files": 1, "total_output_size": 7343207, "num_input_records": 4915, "num_output_records": 4402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068834357, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407068835671, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.782808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:11:08.835706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:11:10 compute-1 ceph-mon[80926]: osdmap e140: 3 total, 3 up, 3 in
Oct 02 12:11:10 compute-1 ceph-mon[80926]: pgmap v933: 305 pgs: 305 active+clean; 248 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 234 KiB/s wr, 112 op/s
Oct 02 12:11:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:11 compute-1 nova_compute[230518]: 2025-10-02 12:11:11.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:11 compute-1 podman[235205]: 2025-10-02 12:11:11.827631206 +0000 UTC m=+0.075852507 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:11:11 compute-1 podman[235204]: 2025-10-02 12:11:11.865167996 +0000 UTC m=+0.114580885 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:11:12 compute-1 ovn_controller[129257]: 2025-10-02T12:11:12Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:c6:7c 10.100.0.4
Oct 02 12:11:12 compute-1 ovn_controller[129257]: 2025-10-02T12:11:12Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:c6:7c 10.100.0.4
Oct 02 12:11:12 compute-1 ceph-mon[80926]: pgmap v934: 305 pgs: 305 active+clean; 196 MiB data, 352 MiB used, 21 GiB / 21 GiB avail; 200 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Oct 02 12:11:12 compute-1 nova_compute[230518]: 2025-10-02 12:11:12.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct 02 12:11:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:13 compute-1 ceph-mon[80926]: osdmap e141: 3 total, 3 up, 3 in
Oct 02 12:11:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2090179139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1377094611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:14 compute-1 ceph-mon[80926]: pgmap v936: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 142 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Oct 02 12:11:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:15 compute-1 ceph-mon[80926]: pgmap v937: 305 pgs: 305 active+clean; 190 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 3.0 MiB/s wr, 97 op/s
Oct 02 12:11:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:16 compute-1 nova_compute[230518]: 2025-10-02 12:11:16.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.239 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.240 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.241 2 INFO nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Terminating instance
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.242 2 DEBUG nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:11:17 compute-1 kernel: tap2887bff5-92 (unregistering): left promiscuous mode
Oct 02 12:11:17 compute-1 NetworkManager[44960]: <info>  [1759407077.3379] device (tap2887bff5-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:11:17 compute-1 ovn_controller[129257]: 2025-10-02T12:11:17Z|00052|binding|INFO|Releasing lport 2887bff5-92aa-4c83-9902-292a59e0add5 from this chassis (sb_readonly=0)
Oct 02 12:11:17 compute-1 ovn_controller[129257]: 2025-10-02T12:11:17Z|00053|binding|INFO|Setting lport 2887bff5-92aa-4c83-9902-292a59e0add5 down in Southbound
Oct 02 12:11:17 compute-1 ovn_controller[129257]: 2025-10-02T12:11:17Z|00054|binding|INFO|Removing iface tap2887bff5-92 ovn-installed in OVS
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.356 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c6:7c 10.100.0.4'], port_security=['fa:16:3e:53:c6:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e582fd0b-cd3a-4903-9ed3-024359954c81', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38c94475-c52a-421c-9bc8-95fdc649b043', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2c66662015f74444b15ea4b3d8644714', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92a0a32f-072c-4975-aa02-ea951ff6e560', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9ca6fa6e-a437-4756-b504-a14a9bfa02d8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2887bff5-92aa-4c83-9902-292a59e0add5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2887bff5-92aa-4c83-9902-292a59e0add5 in datapath 38c94475-c52a-421c-9bc8-95fdc649b043 unbound from our chassis
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.360 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38c94475-c52a-421c-9bc8-95fdc649b043, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.361 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c204c5e-af3e-41ec-b44b-16c38b8bd9db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.361 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 namespace which is not needed anymore
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct 02 12:11:17 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 14.899s CPU time.
Oct 02 12:11:17 compute-1 systemd-machined[188247]: Machine qemu-3-instance-00000007 terminated.
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.476 2 INFO nova.virt.libvirt.driver [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Instance destroyed successfully.
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.477 2 DEBUG nova.objects.instance [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lazy-loading 'resources' on Instance uuid e582fd0b-cd3a-4903-9ed3-024359954c81 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:17 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : haproxy version is 2.8.14-c23fe91
Oct 02 12:11:17 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [NOTICE]   (235173) : path to executable is /usr/sbin/haproxy
Oct 02 12:11:17 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [WARNING]  (235173) : Exiting Master process...
Oct 02 12:11:17 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [ALERT]    (235173) : Current worker (235175) exited with code 143 (Terminated)
Oct 02 12:11:17 compute-1 neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043[235152]: [WARNING]  (235173) : All workers exited. Exiting... (0)
Oct 02 12:11:17 compute-1 systemd[1]: libpod-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope: Deactivated successfully.
Oct 02 12:11:17 compute-1 podman[235272]: 2025-10-02 12:11:17.508596153 +0000 UTC m=+0.065187011 container died 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.537 2 DEBUG nova.virt.libvirt.vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:10:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-425689073',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-425689073',id=7,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZIzbMfwVBTTToNPNrnTuckoO8kg28OkEFKvLyHQzGuKrzHQ5Xu2/PJVR0z9htMcy/llPoN2mM4eTO6OIHrSZOwjPe/taZdTaEhmzjh34Ak2Vyd+nZrFG8VSiYQyffl8g==',key_name='tempest-keypair-1186587753',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:10:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2c66662015f74444b15ea4b3d8644714',ramdisk_id='',reservation_id='r-nm6ufj5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-957372394',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-957372394-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:10:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='531ddb9812364f7b9743bd02a8ed797f',uuid=e582fd0b-cd3a-4903-9ed3-024359954c81,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.539 2 DEBUG nova.network.os_vif_util [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converting VIF {"id": "2887bff5-92aa-4c83-9902-292a59e0add5", "address": "fa:16:3e:53:c6:7c", "network": {"id": "38c94475-c52a-421c-9bc8-95fdc649b043", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1004983344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2c66662015f74444b15ea4b3d8644714", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2887bff5-92", "ovs_interfaceid": "2887bff5-92aa-4c83-9902-292a59e0add5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.539 2 DEBUG nova.network.os_vif_util [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.540 2 DEBUG os_vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:11:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2887bff5-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-f7a0e6f1bf3ec86e5bc2215d1c862c93a70af47fd512dbd4ba72e6130027c198-merged.mount: Deactivated successfully.
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.548 2 INFO os_vif [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:c6:7c,bridge_name='br-int',has_traffic_filtering=True,id=2887bff5-92aa-4c83-9902-292a59e0add5,network=Network(38c94475-c52a-421c-9bc8-95fdc649b043),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2887bff5-92')
Oct 02 12:11:17 compute-1 podman[235272]: 2025-10-02 12:11:17.569101527 +0000 UTC m=+0.125692385 container cleanup 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.575 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.575 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG oslo_concurrency.lockutils [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.576 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.577 2 DEBUG nova.compute.manager [req-3bae73e1-8c83-4d61-bb2e-aa0ea559795f req-7fe8f2c4-00bb-449f-9314-dcef504f97dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-unplugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:11:17 compute-1 systemd[1]: libpod-conmon-5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c.scope: Deactivated successfully.
Oct 02 12:11:17 compute-1 ceph-mon[80926]: pgmap v938: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 696 KiB/s rd, 3.1 MiB/s wr, 190 op/s
Oct 02 12:11:17 compute-1 podman[235326]: 2025-10-02 12:11:17.634227336 +0000 UTC m=+0.042974364 container remove 5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.641 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fce8f11-e16b-40c3-a552-4eb770f3d9f2]: (4, ('Thu Oct  2 12:11:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c)\n5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c\nThu Oct  2 12:11:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 (5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c)\n5130784b2895e15a79e3af9114f26862f812b40d9c1766faa658d5c18d957f2c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.642 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f98c25fc-6bd1-4c7c-afa0-2ccb663e44d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.643 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38c94475-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 kernel: tap38c94475-c0: left promiscuous mode
Oct 02 12:11:17 compute-1 nova_compute[230518]: 2025-10-02 12:11:17.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.661 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b071796-6b47-4ef0-89cf-f8c38245ba7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.693 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44bb9dd9-22a4-4959-b672-ce36eb7611ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.694 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a8275bbf-05de-47b1-b2cb-4a588dc3baaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.709 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[706a8fce-7cdd-4fb2-9f9a-f79a7319577d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491396, 'reachable_time': 21904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 235343, 'error': None, 'target': 'ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.713 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38c94475-c52a-421c-9bc8-95fdc649b043 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:11:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:17.713 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[55c8badf-439b-4dff-a684-c517a1dbb689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:11:17 compute-1 systemd[1]: run-netns-ovnmeta\x2d38c94475\x2dc52a\x2d421c\x2d9bc8\x2d95fdc649b043.mount: Deactivated successfully.
Oct 02 12:11:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:18.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.883 2 INFO nova.virt.libvirt.driver [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deleting instance files /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81_del
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.884 2 INFO nova.virt.libvirt.driver [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deletion of /var/lib/nova/instances/e582fd0b-cd3a-4903-9ed3-024359954c81_del complete
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.963 2 INFO nova.compute.manager [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 1.72 seconds to destroy the instance on the hypervisor.
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG oslo.service.loopingcall [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:11:18 compute-1 nova_compute[230518]: 2025-10-02 12:11:18.964 2 DEBUG nova.network.neutron [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:11:19 compute-1 ceph-mon[80926]: pgmap v939: 305 pgs: 305 active+clean; 202 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 574 KiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.707 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.708 2 DEBUG oslo_concurrency.lockutils [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.708 2 DEBUG nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] No waiting events found dispatching network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.708 2 WARNING nova.compute.manager [req-e65dd699-12f7-41b5-94d3-6c32624c373e req-fa4f49b4-f871-4e81-9d5b-1ef250bde980 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received unexpected event network-vif-plugged-2887bff5-92aa-4c83-9902-292a59e0add5 for instance with vm_state active and task_state deleting.
Oct 02 12:11:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:19.711 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:11:19 compute-1 nova_compute[230518]: 2025-10-02 12:11:19.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:19.712 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.357 2 DEBUG nova.network.neutron [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.376 2 INFO nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Took 1.41 seconds to deallocate network for instance.
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.422 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.423 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.470 2 DEBUG nova.compute.manager [req-dd9c7a07-8051-4ab7-ad8d-75e75569803a req-064c0054-b718-47ad-bab5-34db0ee8242a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Received event network-vif-deleted-2887bff5-92aa-4c83-9902-292a59e0add5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:11:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.485 2 DEBUG oslo_concurrency.processutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:11:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3757002488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:11:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1749779519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.971 2 DEBUG oslo_concurrency.processutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:20 compute-1 nova_compute[230518]: 2025-10-02 12:11:20.976 2 DEBUG nova.compute.provider_tree [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.009 2 DEBUG nova.scheduler.client.report [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.033 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.057 2 INFO nova.scheduler.client.report [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Deleted allocations for instance e582fd0b-cd3a-4903-9ed3-024359954c81
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.117 2 DEBUG oslo_concurrency.lockutils [None req-aaaada52-c82d-430e-9838-9db8f9797b0b 531ddb9812364f7b9743bd02a8ed797f 2c66662015f74444b15ea4b3d8644714 - - default default] Lock "e582fd0b-cd3a-4903-9ed3-024359954c81" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-1 nova_compute[230518]: 2025-10-02 12:11:21.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct 02 12:11:21 compute-1 ceph-mon[80926]: pgmap v940: 305 pgs: 305 active+clean; 140 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 520 KiB/s rd, 1.0 MiB/s wr, 137 op/s
Oct 02 12:11:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1749779519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:21 compute-1 ceph-mon[80926]: osdmap e142: 3 total, 3 up, 3 in
Oct 02 12:11:21 compute-1 podman[235367]: 2025-10-02 12:11:21.819155572 +0000 UTC m=+0.062722895 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:11:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:22.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:22 compute-1 sudo[235388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:22 compute-1 sudo[235388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-1 sudo[235388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-1 nova_compute[230518]: 2025-10-02 12:11:22.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:22 compute-1 sudo[235413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:11:22 compute-1 sudo[235413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-1 sudo[235413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-1 sudo[235438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:22 compute-1 sudo[235438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-1 sudo[235438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:22 compute-1 sudo[235463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:11:22 compute-1 sudo[235463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:23 compute-1 sudo[235463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:23 compute-1 ceph-mon[80926]: pgmap v942: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:11:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:11:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:24.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3691398620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:25 compute-1 nova_compute[230518]: 2025-10-02 12:11:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:25 compute-1 ceph-mon[80926]: pgmap v943: 305 pgs: 305 active+clean; 101 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 521 KiB/s rd, 197 KiB/s wr, 154 op/s
Oct 02 12:11:25 compute-1 podman[235519]: 2025-10-02 12:11:25.80608725 +0000 UTC m=+0.059033667 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible)
Oct 02 12:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.910 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:26.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:26 compute-1 nova_compute[230518]: 2025-10-02 12:11:26.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:11:26.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:11:27 compute-1 nova_compute[230518]: 2025-10-02 12:11:27.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:27 compute-1 ceph-mon[80926]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 63 KiB/s wr, 105 op/s
Oct 02 12:11:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1743318663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:28 compute-1 nova_compute[230518]: 2025-10-02 12:11:28.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:28 compute-1 nova_compute[230518]: 2025-10-02 12:11:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:11:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:28.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:28 compute-1 nova_compute[230518]: 2025-10-02 12:11:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-1 nova_compute[230518]: 2025-10-02 12:11:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:29 compute-1 nova_compute[230518]: 2025-10-02 12:11:29.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:29 compute-1 ceph-mon[80926]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 5.8 KiB/s wr, 98 op/s
Oct 02 12:11:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2720756083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1658119017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3626303909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:30 compute-1 nova_compute[230518]: 2025-10-02 12:11:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:30.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/955745629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:31 compute-1 nova_compute[230518]: 2025-10-02 12:11:31.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:31 compute-1 sudo[235540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:11:31 compute-1 sudo[235540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:31 compute-1 sudo[235540]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:31 compute-1 sudo[235565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:11:31 compute-1 sudo[235565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:11:31 compute-1 sudo[235565]: pam_unix(sudo:session): session closed for user root
Oct 02 12:11:31 compute-1 nova_compute[230518]: 2025-10-02 12:11:31.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:31 compute-1 ceph-mon[80926]: pgmap v946: 305 pgs: 305 active+clean; 91 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 2.4 MiB/s wr, 97 op/s
Oct 02 12:11:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:11:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2081085421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.473 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407077.47298, e582fd0b-cd3a-4903-9ed3-024359954c81 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.474 2 INFO nova.compute.manager [-] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] VM Stopped (Lifecycle Event)
Oct 02 12:11:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:32.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2323166239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.532 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:32.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.701 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5021MB free_disk=20.964763641357422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:32 compute-1 nova_compute[230518]: 2025-10-02 12:11:32.763 2 DEBUG nova.compute.manager [None req-a7c63f2d-ea9b-4bd5-93d1-5d640b3ce89b - - - - - -] [instance: e582fd0b-cd3a-4903-9ed3-024359954c81] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2323166239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.148 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/51653874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.706 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.711 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.765 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.910 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:11:33 compute-1 nova_compute[230518]: 2025-10-02 12:11:33.911 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:34 compute-1 ceph-mon[80926]: pgmap v947: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 3.4 MiB/s wr, 65 op/s
Oct 02 12:11:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/51653874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:34.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:34 compute-1 nova_compute[230518]: 2025-10-02 12:11:34.880 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:34 compute-1 nova_compute[230518]: 2025-10-02 12:11:34.881 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:34 compute-1 nova_compute[230518]: 2025-10-02 12:11:34.881 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:11:36 compute-1 ceph-mon[80926]: pgmap v948: 305 pgs: 305 active+clean; 116 MiB data, 291 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 3.0 MiB/s wr, 59 op/s
Oct 02 12:11:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:36.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:36 compute-1 nova_compute[230518]: 2025-10-02 12:11:36.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1510747912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1548667372' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:37 compute-1 nova_compute[230518]: 2025-10-02 12:11:37.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:38 compute-1 ceph-mon[80926]: pgmap v949: 305 pgs: 305 active+clean; 105 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Oct 02 12:11:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/328603544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:38.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1471206287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:40 compute-1 ceph-mon[80926]: pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4006113779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:40.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:41 compute-1 nova_compute[230518]: 2025-10-02 12:11:41.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:42 compute-1 ceph-mon[80926]: pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.410 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.411 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.484 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:11:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:42.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.697 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.698 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.707 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.707 2 INFO nova.compute.claims [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:11:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:42 compute-1 podman[235636]: 2025-10-02 12:11:42.792078134 +0000 UTC m=+0.048460166 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:11:42 compute-1 podman[235635]: 2025-10-02 12:11:42.816791821 +0000 UTC m=+0.074961119 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:11:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:42 compute-1 nova_compute[230518]: 2025-10-02 12:11:42.997 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3263702750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.403 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.413 2 DEBUG nova.compute.provider_tree [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.445 2 DEBUG nova.scheduler.client.report [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.521 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.522 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.713 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.714 2 DEBUG nova.network.neutron [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.771 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:11:43 compute-1 nova_compute[230518]: 2025-10-02 12:11:43.857 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:11:44 compute-1 ceph-mon[80926]: pgmap v952: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 128 op/s
Oct 02 12:11:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3263702750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.290 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.293 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.294 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating image(s)
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.331 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.358 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.384 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.388 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.442 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.443 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.466 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.470 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:44.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.564 2 DEBUG nova.network.neutron [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.565 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:11:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.739 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.819 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] resizing rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:11:44 compute-1 nova_compute[230518]: 2025-10-02 12:11:44.945 2 DEBUG nova.objects.instance [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'migration_context' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.022 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.023 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Ensure instance console log exists: /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.023 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.024 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.024 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.025 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.029 2 WARNING nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.040 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.041 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.046 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.046 2 DEBUG nova.virt.libvirt.host [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.047 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.047 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.048 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.048 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.049 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.050 2 DEBUG nova.virt.hardware [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.052 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4248775936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.477 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.505 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.510 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:11:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2283155084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.966 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:45 compute-1 nova_compute[230518]: 2025-10-02 12:11:45.968 2 DEBUG nova.objects.instance [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.211 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <uuid>cdf18af7-3741-4849-8c50-edeb7cc6b5b7</uuid>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <name>instance-0000000a</name>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-609215018</nova:name>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:11:45</nova:creationTime>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:user uuid="2cdfea5c8e074c59b963b1fba6b35e1f">tempest-DeleteServersAdminTestJSON-98667439-project-member</nova:user>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <nova:project uuid="14444ba992464a08be0b7dc7a5dd00c2">tempest-DeleteServersAdminTestJSON-98667439</nova:project>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <system>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="serial">cdf18af7-3741-4849-8c50-edeb7cc6b5b7</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="uuid">cdf18af7-3741-4849-8c50-edeb7cc6b5b7</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </system>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <os>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </os>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <features>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </features>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk">
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </source>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config">
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </source>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:11:46 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/console.log" append="off"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <video>
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </video>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:11:46 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:11:46 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:11:46 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:11:46 compute-1 nova_compute[230518]: </domain>
Oct 02 12:11:46 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.263 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.264 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.264 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Using config drive
Oct 02 12:11:46 compute-1 ceph-mon[80926]: pgmap v953: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 547 KiB/s wr, 122 op/s
Oct 02 12:11:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4248775936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2283155084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.312 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:46.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:46 compute-1 nova_compute[230518]: 2025-10-02 12:11:46.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:47 compute-1 nova_compute[230518]: 2025-10-02 12:11:47.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.269 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Creating config drive at /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.273 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_1orimb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:48 compute-1 ceph-mon[80926]: pgmap v954: 305 pgs: 305 active+clean; 102 MiB data, 282 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.0 MiB/s wr, 196 op/s
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.403 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4_1orimb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.434 2 DEBUG nova.storage.rbd_utils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] rbd image cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.438 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.587 2 DEBUG oslo_concurrency.processutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config cdf18af7-3741-4849-8c50-edeb7cc6b5b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:48 compute-1 nova_compute[230518]: 2025-10-02 12:11:48.588 2 INFO nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deleting local config drive /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7/disk.config because it was imported into RBD.
Oct 02 12:11:48 compute-1 systemd-machined[188247]: New machine qemu-4-instance-0000000a.
Oct 02 12:11:48 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Oct 02 12:11:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.767 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407109.7670376, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.769 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Resumed (Lifecycle Event)
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.772 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.772 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.775 2 INFO nova.virt.libvirt.driver [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance spawned successfully.
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.776 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.803 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.810 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.814 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.814 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.815 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.815 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.816 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.816 2 DEBUG nova.virt.libvirt.driver [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.857 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.858 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407109.7685792, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.858 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Started (Lifecycle Event)
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.891 2 INFO nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 5.60 seconds to spawn the instance on the hypervisor.
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.892 2 DEBUG nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.898 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.901 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.933 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.951 2 INFO nova.compute.manager [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 7.32 seconds to build instance.
Oct 02 12:11:49 compute-1 nova_compute[230518]: 2025-10-02 12:11:49.967 2 DEBUG oslo_concurrency.lockutils [None req-8e171c30-4b34-4767-b53a-6cb79d4a2d02 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:50 compute-1 ceph-mon[80926]: pgmap v955: 305 pgs: 305 active+clean; 122 MiB data, 296 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.7 MiB/s wr, 116 op/s
Oct 02 12:11:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:51 compute-1 nova_compute[230518]: 2025-10-02 12:11:51.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-1 ceph-mon[80926]: pgmap v956: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:11:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:52 compute-1 nova_compute[230518]: 2025-10-02 12:11:52.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:52.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:52 compute-1 podman[236045]: 2025-10-02 12:11:52.841078376 +0000 UTC m=+0.091178799 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:11:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.767 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.767 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.768 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.769 2 INFO nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Terminating instance
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquired lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:11:53 compute-1 nova_compute[230518]: 2025-10-02 12:11:53.770 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.347 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:11:54 compute-1 ceph-mon[80926]: pgmap v957: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 158 op/s
Oct 02 12:11:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:54.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.697 2 DEBUG nova.network.neutron [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.722 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Releasing lock "refresh_cache-cdf18af7-3741-4849-8c50-edeb7cc6b5b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.723 2 DEBUG nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:11:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:54.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:54 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct 02 12:11:54 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 6.147s CPU time.
Oct 02 12:11:54 compute-1 systemd-machined[188247]: Machine qemu-4-instance-0000000a terminated.
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.947 2 INFO nova.virt.libvirt.driver [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance destroyed successfully.
Oct 02 12:11:54 compute-1 nova_compute[230518]: 2025-10-02 12:11:54.948 2 DEBUG nova.objects.instance [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lazy-loading 'resources' on Instance uuid cdf18af7-3741-4849-8c50-edeb7cc6b5b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.553 2 INFO nova.virt.libvirt.driver [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deleting instance files /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_del
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.553 2 INFO nova.virt.libvirt.driver [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deletion of /var/lib/nova/instances/cdf18af7-3741-4849-8c50-edeb7cc6b5b7_del complete
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.683 2 INFO nova.compute.manager [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 0.96 seconds to destroy the instance on the hypervisor.
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG oslo.service.loopingcall [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.684 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.929 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:11:55 compute-1 nova_compute[230518]: 2025-10-02 12:11:55.957 2 DEBUG nova.network.neutron [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.008 2 INFO nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Took 0.32 seconds to deallocate network for instance.
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.085 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.086 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.178 2 DEBUG oslo_concurrency.processutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:11:56 compute-1 ceph-mon[80926]: pgmap v958: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct 02 12:11:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:11:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2787303180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.626 2 DEBUG oslo_concurrency.processutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.631 2 DEBUG nova.compute.provider_tree [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.651 2 DEBUG nova.scheduler.client.report [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.681 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.743 2 INFO nova.scheduler.client.report [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Deleted allocations for instance cdf18af7-3741-4849-8c50-edeb7cc6b5b7
Oct 02 12:11:56 compute-1 podman[236109]: 2025-10-02 12:11:56.78933762 +0000 UTC m=+0.047097374 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:11:56 compute-1 nova_compute[230518]: 2025-10-02 12:11:56.861 2 DEBUG oslo_concurrency.lockutils [None req-adad5291-13cf-44b7-8957-d653f99a7a0d 2cdfea5c8e074c59b963b1fba6b35e1f 14444ba992464a08be0b7dc7a5dd00c2 - - default default] Lock "cdf18af7-3741-4849-8c50-edeb7cc6b5b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:11:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2787303180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:11:57 compute-1 nova_compute[230518]: 2025-10-02 12:11:57.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:11:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:11:58 compute-1 ceph-mon[80926]: pgmap v959: 305 pgs: 305 active+clean; 146 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 225 op/s
Oct 02 12:11:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:11:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:11:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:11:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:11:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:11:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:11:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:00 compute-1 ceph-mon[80926]: pgmap v960: 305 pgs: 305 active+clean; 132 MiB data, 309 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 181 op/s
Oct 02 12:12:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:12:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:12:01 compute-1 nova_compute[230518]: 2025-10-02 12:12:01.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:02 compute-1 ceph-mon[80926]: pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 174 op/s
Oct 02 12:12:02 compute-1 nova_compute[230518]: 2025-10-02 12:12:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:02.911 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:02 compute-1 nova_compute[230518]: 2025-10-02 12:12:02.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:02.912 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:04 compute-1 ceph-mon[80926]: pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Oct 02 12:12:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:04.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:12:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2339841313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:12:06 compute-1 nova_compute[230518]: 2025-10-02 12:12:06.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:06.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:06 compute-1 ceph-mon[80926]: pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:07 compute-1 nova_compute[230518]: 2025-10-02 12:12:07.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:08 compute-1 ceph-mon[80926]: pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 808 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:12:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:08.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:08.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:09 compute-1 nova_compute[230518]: 2025-10-02 12:12:09.945 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407114.9434974, cdf18af7-3741-4849-8c50-edeb7cc6b5b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:12:09 compute-1 nova_compute[230518]: 2025-10-02 12:12:09.945 2 INFO nova.compute.manager [-] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] VM Stopped (Lifecycle Event)
Oct 02 12:12:09 compute-1 nova_compute[230518]: 2025-10-02 12:12:09.978 2 DEBUG nova.compute.manager [None req-e948ab94-18b4-45cc-9ee8-ea4f4050fd92 - - - - - -] [instance: cdf18af7-3741-4849-8c50-edeb7cc6b5b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:12:10 compute-1 ceph-mon[80926]: pgmap v965: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 143 KiB/s rd, 542 KiB/s wr, 37 op/s
Oct 02 12:12:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:10.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:10.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:11 compute-1 nova_compute[230518]: 2025-10-02 12:12:11.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:12 compute-1 ceph-mon[80926]: pgmap v966: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 12:12:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:12.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:12 compute-1 nova_compute[230518]: 2025-10-02 12:12:12.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:12.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:12.913 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:13 compute-1 podman[236130]: 2025-10-02 12:12:13.809107501 +0000 UTC m=+0.053680100 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:12:13 compute-1 podman[236129]: 2025-10-02 12:12:13.829039478 +0000 UTC m=+0.083076775 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:12:14 compute-1 ceph-mon[80926]: pgmap v967: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:14.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:16 compute-1 ceph-mon[80926]: pgmap v968: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Oct 02 12:12:16 compute-1 nova_compute[230518]: 2025-10-02 12:12:16.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:16.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:16.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:17 compute-1 nova_compute[230518]: 2025-10-02 12:12:17.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:18 compute-1 ceph-mon[80926]: pgmap v969: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:12:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1934180314' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:20 compute-1 ceph-mon[80926]: pgmap v970: 305 pgs: 305 active+clean; 105 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Oct 02 12:12:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:21 compute-1 nova_compute[230518]: 2025-10-02 12:12:21.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:22 compute-1 ceph-mon[80926]: pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:22.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:22 compute-1 nova_compute[230518]: 2025-10-02 12:12:22.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:22.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:23 compute-1 podman[236176]: 2025-10-02 12:12:23.805967493 +0000 UTC m=+0.059930846 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 02 12:12:24 compute-1 ceph-mon[80926]: pgmap v972: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:24.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:24.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.911 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.912 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:25.912 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:26 compute-1 ceph-mon[80926]: pgmap v973: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:26 compute-1 nova_compute[230518]: 2025-10-02 12:12:26.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:26.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:26.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:27 compute-1 nova_compute[230518]: 2025-10-02 12:12:27.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:27 compute-1 podman[236198]: 2025-10-02 12:12:27.805228549 +0000 UTC m=+0.059863485 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid)
Oct 02 12:12:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:28 compute-1 ceph-mon[80926]: pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct 02 12:12:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:28.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:29 compute-1 nova_compute[230518]: 2025-10-02 12:12:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:29 compute-1 nova_compute[230518]: 2025-10-02 12:12:29.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:12:29 compute-1 ceph-mon[80926]: pgmap v975: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 12:12:30 compute-1 nova_compute[230518]: 2025-10-02 12:12:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:30.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:31 compute-1 nova_compute[230518]: 2025-10-02 12:12:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:31 compute-1 sudo[236218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:31 compute-1 sudo[236218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-1 sudo[236218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-1 nova_compute[230518]: 2025-10-02 12:12:31.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:31 compute-1 sudo[236243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:12:31 compute-1 sudo[236243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-1 sudo[236243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-1 sudo[236268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:31 compute-1 sudo[236268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-1 sudo[236268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:31 compute-1 sudo[236293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:12:31 compute-1 sudo[236293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:31 compute-1 ceph-mon[80926]: pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 24 op/s
Oct 02 12:12:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2165228842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:32 compute-1 nova_compute[230518]: 2025-10-02 12:12:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:32 compute-1 nova_compute[230518]: 2025-10-02 12:12:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:12:32 compute-1 nova_compute[230518]: 2025-10-02 12:12:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:12:32 compute-1 nova_compute[230518]: 2025-10-02 12:12:32.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:12:32 compute-1 sudo[236293]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:32 compute-1 nova_compute[230518]: 2025-10-02 12:12:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:32.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1978288928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:12:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:33 compute-1 nova_compute[230518]: 2025-10-02 12:12:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.120 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.121 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:34 compute-1 ceph-mon[80926]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/398630855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.525 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.685 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.686 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5003MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.686 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.687 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:12:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:12:34 compute-1 nova_compute[230518]: 2025-10-02 12:12:34.856 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:12:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:12:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1190854149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-1 nova_compute[230518]: 2025-10-02 12:12:35.289 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:12:35 compute-1 nova_compute[230518]: 2025-10-02 12:12:35.293 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:12:35 compute-1 nova_compute[230518]: 2025-10-02 12:12:35.343 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:12:35 compute-1 nova_compute[230518]: 2025-10-02 12:12:35.378 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:12:35 compute-1 nova_compute[230518]: 2025-10-02 12:12:35.378 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:12:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/398630855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1190854149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:36 compute-1 nova_compute[230518]: 2025-10-02 12:12:36.373 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:36 compute-1 nova_compute[230518]: 2025-10-02 12:12:36.374 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:12:36 compute-1 nova_compute[230518]: 2025-10-02 12:12:36.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:36 compute-1 ceph-mon[80926]: pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:37 compute-1 nova_compute[230518]: 2025-10-02 12:12:37.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:37 compute-1 ceph-mon[80926]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:40 compute-1 ceph-mon[80926]: pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2078166621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:40.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:41 compute-1 sudo[236393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:12:41 compute-1 nova_compute[230518]: 2025-10-02 12:12:41.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:41 compute-1 sudo[236393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:41 compute-1 sudo[236393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:41 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:12:41 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:12:41 compute-1 sudo[236419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:12:41 compute-1 sudo[236419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:12:41 compute-1 sudo[236419]: pam_unix(sudo:session): session closed for user root
Oct 02 12:12:42 compute-1 ceph-mon[80926]: pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:12:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:42 compute-1 nova_compute[230518]: 2025-10-02 12:12:42.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:43.365 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:12:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:43.366 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:12:43 compute-1 nova_compute[230518]: 2025-10-02 12:12:43.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1636622389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:12:44.369 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:12:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:44 compute-1 podman[236445]: 2025-10-02 12:12:44.798050053 +0000 UTC m=+0.049349962 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:12:44 compute-1 podman[236444]: 2025-10-02 12:12:44.849026698 +0000 UTC m=+0.100723330 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:12:45 compute-1 ceph-mon[80926]: pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:46 compute-1 ceph-mon[80926]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:46 compute-1 nova_compute[230518]: 2025-10-02 12:12:46.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:46.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:47 compute-1 nova_compute[230518]: 2025-10-02 12:12:47.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:47 compute-1 ceph-mon[80926]: pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:48.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:50 compute-1 ceph-mon[80926]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:12:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:12:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:51 compute-1 nova_compute[230518]: 2025-10-02 12:12:51.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:52 compute-1 ceph-mon[80926]: pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:12:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:52.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:12:52 compute-1 nova_compute[230518]: 2025-10-02 12:12:52.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:12:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:12:54 compute-1 podman[236486]: 2025-10-02 12:12:54.814777241 +0000 UTC m=+0.069019113 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:12:54 compute-1 ceph-mon[80926]: pgmap v987: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:56 compute-1 nova_compute[230518]: 2025-10-02 12:12:56.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:56.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:56 compute-1 ceph-mon[80926]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:57 compute-1 nova_compute[230518]: 2025-10-02 12:12:57.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:12:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:12:58 compute-1 ceph-mon[80926]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:12:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3748525350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:12:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:12:58.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:58 compute-1 podman[236506]: 2025-10-02 12:12:58.789137112 +0000 UTC m=+0.044080268 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:12:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:12:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:12:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:12:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:12:59 compute-1 ceph-mon[80926]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail
Oct 02 12:13:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:00.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.150 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.151 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.172 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.254 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.255 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.264 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.265 2 INFO nova.compute.claims [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.405 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:01 compute-1 ovn_controller[129257]: 2025-10-02T12:13:01Z|00055|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3565044961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.841 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.846 2 DEBUG nova.compute.provider_tree [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.864 2 DEBUG nova.scheduler.client.report [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.891 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.892 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.944 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.945 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.964 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:13:01 compute-1 nova_compute[230518]: 2025-10-02 12:13:01.983 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.027 2 DEBUG oslo_concurrency.processutils [None req-2402c7da-5751-417f-a867-6906b50f0e15 8153b3e00d0b451f90664278a2d2bf88 7aa3898a9d4b490d95d0c8bbd7e4127c - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.053 2 DEBUG oslo_concurrency.processutils [None req-2402c7da-5751-417f-a867-6906b50f0e15 8153b3e00d0b451f90664278a2d2bf88 7aa3898a9d4b490d95d0c8bbd7e4127c - - default default] CMD "env LANG=C uptime" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.089 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.090 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.091 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Creating image(s)
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.124 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.151 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.174 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.177 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.230 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.232 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.232 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.233 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.255 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.259 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f85aa55e-c534-4270-b8bb-d25f8026084c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.278 2 DEBUG nova.policy [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a80f833255046e7b62d34c1c6066073', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39ca581fbb054c959d26096ca39fef05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:13:02 compute-1 ceph-mon[80926]: pgmap v991: 305 pgs: 305 active+clean; 65 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 1003 KiB/s wr, 10 op/s
Oct 02 12:13:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3565044961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:02 compute-1 rsyslogd[1006]: imjournal from <np0005466030:ceph-mon>: begin to drop messages due to rate-limiting
Oct 02 12:13:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:02.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:02 compute-1 nova_compute[230518]: 2025-10-02 12:13:02.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.056 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f85aa55e-c534-4270-b8bb-d25f8026084c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.123 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] resizing rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.233 2 DEBUG nova.objects.instance [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'migration_context' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.245 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Successfully created port: 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.248 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.249 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Ensure instance console log exists: /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.249 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.249 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:03 compute-1 nova_compute[230518]: 2025-10-02 12:13:03.250 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:04 compute-1 ceph-mon[80926]: pgmap v992: 305 pgs: 305 active+clean; 88 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:13:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2868232652' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3995955371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.627 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Successfully updated port: 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:13:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:04.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.645 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.645 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquired lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.645 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.728 2 DEBUG nova.compute.manager [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-changed-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.729 2 DEBUG nova.compute.manager [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Refreshing instance network info cache due to event network-changed-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.729 2 DEBUG oslo_concurrency.lockutils [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:04 compute-1 nova_compute[230518]: 2025-10-02 12:13:04.895 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:05 compute-1 ceph-mon[80926]: pgmap v993: 305 pgs: 305 active+clean; 107 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.054 2 DEBUG nova.network.neutron [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.078 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Releasing lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.078 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance network_info: |[{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.079 2 DEBUG oslo_concurrency.lockutils [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.079 2 DEBUG nova.network.neutron [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Refreshing network info cache for port 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.083 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start _get_guest_xml network_info=[{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.086 2 WARNING nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.102 2 DEBUG nova.virt.libvirt.host [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.103 2 DEBUG nova.virt.libvirt.host [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.109 2 DEBUG nova.virt.libvirt.host [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.109 2 DEBUG nova.virt.libvirt.host [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.110 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.111 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.111 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.111 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.112 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.112 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.112 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.112 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.113 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.113 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.113 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.114 2 DEBUG nova.virt.hardware [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.116 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3095058566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.574 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.604 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:06 compute-1 nova_compute[230518]: 2025-10-02 12:13:06.608 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:06.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3419774605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3095058566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.216 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.218 2 DEBUG nova.virt.libvirt.vif [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1669259740',display_name='tempest-ServersAdminTestJSON-server-1669259740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1669259740',id=12,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-xl0a3qnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:02Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=f85aa55e-c534-4270-b8bb-d25f8026084c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.218 2 DEBUG nova.network.os_vif_util [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.219 2 DEBUG nova.network.os_vif_util [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.220 2 DEBUG nova.objects.instance [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'pci_devices' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.276 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <uuid>f85aa55e-c534-4270-b8bb-d25f8026084c</uuid>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <name>instance-0000000c</name>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAdminTestJSON-server-1669259740</nova:name>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:13:06</nova:creationTime>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:user uuid="7a80f833255046e7b62d34c1c6066073">tempest-ServersAdminTestJSON-1879159697-project-member</nova:user>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:project uuid="39ca581fbb054c959d26096ca39fef05">tempest-ServersAdminTestJSON-1879159697</nova:project>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <nova:port uuid="760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a">
Oct 02 12:13:07 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <system>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="serial">f85aa55e-c534-4270-b8bb-d25f8026084c</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="uuid">f85aa55e-c534-4270-b8bb-d25f8026084c</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </system>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <os>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </os>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <features>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </features>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f85aa55e-c534-4270-b8bb-d25f8026084c_disk">
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config">
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:13:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:6f:96:77"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <target dev="tap760df1d8-a2"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/console.log" append="off"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <video>
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </video>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:13:07 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:13:07 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:13:07 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:13:07 compute-1 nova_compute[230518]: </domain>
Oct 02 12:13:07 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.278 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Preparing to wait for external event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.278 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.278 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.279 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.279 2 DEBUG nova.virt.libvirt.vif [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1669259740',display_name='tempest-ServersAdminTestJSON-server-1669259740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1669259740',id=12,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-xl0a3qnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:13:02Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=f85aa55e-c534-4270-b8bb-d25f8026084c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.280 2 DEBUG nova.network.os_vif_util [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.281 2 DEBUG nova.network.os_vif_util [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.282 2 DEBUG os_vif [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap760df1d8-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.287 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap760df1d8-a2, col_values=(('external_ids', {'iface-id': '760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:96:77', 'vm-uuid': 'f85aa55e-c534-4270-b8bb-d25f8026084c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:07 compute-1 NetworkManager[44960]: <info>  [1759407187.3125] manager: (tap760df1d8-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.319 2 INFO os_vif [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2')
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.404 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.404 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.404 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] No VIF found with MAC fa:16:3e:6f:96:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.405 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Using config drive
Oct 02 12:13:07 compute-1 nova_compute[230518]: 2025-10-02 12:13:07.429 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.227 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Creating config drive at /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.231 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl2ie_0e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:08 compute-1 ceph-mon[80926]: pgmap v994: 305 pgs: 305 active+clean; 119 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.8 MiB/s wr, 52 op/s
Oct 02 12:13:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3419774605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.265 2 DEBUG nova.network.neutron [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updated VIF entry in instance network info cache for port 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.266 2 DEBUG nova.network.neutron [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.357 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfl2ie_0e" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.388 2 DEBUG nova.storage.rbd_utils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] rbd image f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.394 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:08 compute-1 nova_compute[230518]: 2025-10-02 12:13:08.418 2 DEBUG oslo_concurrency.lockutils [req-6f3500be-81bf-46cd-ba4e-8639c556f5bf req-9f432b6e-b0a0-4a7f-a091-70cd0f8aa208 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.218 2 DEBUG oslo_concurrency.processutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config f85aa55e-c534-4270-b8bb-d25f8026084c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.219 2 INFO nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deleting local config drive /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c/disk.config because it was imported into RBD.
Oct 02 12:13:10 compute-1 kernel: tap760df1d8-a2: entered promiscuous mode
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.2621] manager: (tap760df1d8-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Oct 02 12:13:10 compute-1 ovn_controller[129257]: 2025-10-02T12:13:10Z|00056|binding|INFO|Claiming lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for this chassis.
Oct 02 12:13:10 compute-1 ovn_controller[129257]: 2025-10-02T12:13:10Z|00057|binding|INFO|760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a: Claiming fa:16:3e:6f:96:77 10.100.0.8
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.276 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:77 10.100.0.8'], port_security=['fa:16:3e:6f:96:77 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f85aa55e-c534-4270-b8bb-d25f8026084c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.277 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a in datapath 85ed78eb-4003-42a7-9312-f47c5830131f bound to our chassis
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.279 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.289 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2196f950-df86-44ef-88f6-b69bf799f363]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.290 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85ed78eb-41 in ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.292 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85ed78eb-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.292 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[55616e65-27d1-45a2-806b-adb2970069be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.293 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2de2a1-ad77-4301-9368-a527bede2e6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 systemd-udevd[236853]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:10 compute-1 systemd-machined[188247]: New machine qemu-5-instance-0000000c.
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.303 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3b100267-f049-4e68-bdc0-6f737df7debc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.3058] device (tap760df1d8-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.3072] device (tap760df1d8-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:10 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.328 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4cdbf678-7077-4b49-b9bd-996a2f2fe2e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_controller[129257]: 2025-10-02T12:13:10Z|00058|binding|INFO|Setting lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a ovn-installed in OVS
Oct 02 12:13:10 compute-1 ovn_controller[129257]: 2025-10-02T12:13:10Z|00059|binding|INFO|Setting lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a up in Southbound
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.354 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[212568ee-ff08-44ae-a57c-addff65f842a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.359 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7593a4-b72a-41c9-9bff-3df536fabb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.3609] manager: (tap85ed78eb-40): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.387 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb3aa64-f3a9-40f3-a90a-46dfa2912f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.390 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f154de94-b247-46ac-bb3a-ba4b0a47f971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.4079] device (tap85ed78eb-40): carrier: link connected
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.412 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c5da4231-9584-4297-8d8f-85fffddb47f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.429 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9ea4d0-1376-4e0c-bbab-8187ec7b9630]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505396, 'reachable_time': 40251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 236885, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.445 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9b5c40-1ea2-4640-bba4-c3533fdaadad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:5ea8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 505396, 'tstamp': 505396}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 236886, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.465 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[145b0f38-807d-4d82-8d3a-0ff7b20dfc54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85ed78eb-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:5e:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505396, 'reachable_time': 40251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 236887, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.495 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a4a703-1578-47c6-bf86-a71405767b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.553 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7ad54b-70f4-4067-b68a-85449622b3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.554 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.554 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.555 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85ed78eb-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:10 compute-1 ceph-mon[80926]: pgmap v995: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct 02 12:13:10 compute-1 NetworkManager[44960]: <info>  [1759407190.5901] manager: (tap85ed78eb-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Oct 02 12:13:10 compute-1 kernel: tap85ed78eb-40: entered promiscuous mode
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.593 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85ed78eb-40, col_values=(('external_ids', {'iface-id': '3021b3c7-b1d0-44e1-b22e-fbf6a4a79654'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:10 compute-1 ovn_controller[129257]: 2025-10-02T12:13:10Z|00060|binding|INFO|Releasing lport 3021b3c7-b1d0-44e1-b22e-fbf6a4a79654 from this chassis (sb_readonly=0)
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.608 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.609 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aeea6cf1-1d31-4c2e-bf07-4d3615587e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.610 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/85ed78eb-4003-42a7-9312-f47c5830131f.pid.haproxy
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 85ed78eb-4003-42a7-9312-f47c5830131f
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:10.610 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'env', 'PROCESS_TAG=haproxy-85ed78eb-4003-42a7-9312-f47c5830131f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85ed78eb-4003-42a7-9312-f47c5830131f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:10.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.660 2 DEBUG nova.compute.manager [req-597079ea-61c7-4eee-986c-a7888263c727 req-fa95f459-92be-4ae8-8cd5-7961aeca7b6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.661 2 DEBUG oslo_concurrency.lockutils [req-597079ea-61c7-4eee-986c-a7888263c727 req-fa95f459-92be-4ae8-8cd5-7961aeca7b6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.661 2 DEBUG oslo_concurrency.lockutils [req-597079ea-61c7-4eee-986c-a7888263c727 req-fa95f459-92be-4ae8-8cd5-7961aeca7b6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.661 2 DEBUG oslo_concurrency.lockutils [req-597079ea-61c7-4eee-986c-a7888263c727 req-fa95f459-92be-4ae8-8cd5-7961aeca7b6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:10 compute-1 nova_compute[230518]: 2025-10-02 12:13:10.661 2 DEBUG nova.compute.manager [req-597079ea-61c7-4eee-986c-a7888263c727 req-fa95f459-92be-4ae8-8cd5-7961aeca7b6c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Processing event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:13:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:10 compute-1 podman[236960]: 2025-10-02 12:13:10.968264132 +0000 UTC m=+0.052874033 container create 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:13:11 compute-1 systemd[1]: Started libpod-conmon-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb.scope.
Oct 02 12:13:11 compute-1 podman[236960]: 2025-10-02 12:13:10.938827236 +0000 UTC m=+0.023437157 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:11 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:13:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/602c0cae69e79696e14e2b1741a65067c00fef81481eae72b85f399a9248bd39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:11 compute-1 podman[236960]: 2025-10-02 12:13:11.05908704 +0000 UTC m=+0.143696941 container init 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:11 compute-1 podman[236960]: 2025-10-02 12:13:11.066330858 +0000 UTC m=+0.150940759 container start 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:11 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : New worker (236981) forked
Oct 02 12:13:11 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : Loading success.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.229 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407191.2294192, f85aa55e-c534-4270-b8bb-d25f8026084c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.230 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] VM Started (Lifecycle Event)
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.232 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.235 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.238 2 INFO nova.virt.libvirt.driver [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance spawned successfully.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.239 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.251 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.255 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.259 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.260 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.260 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.261 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.261 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.261 2 DEBUG nova.virt.libvirt.driver [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.283 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.283 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407191.2303047, f85aa55e-c534-4270-b8bb-d25f8026084c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.283 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] VM Paused (Lifecycle Event)
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.308 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.311 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407191.2348862, f85aa55e-c534-4270-b8bb-d25f8026084c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.312 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] VM Resumed (Lifecycle Event)
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.317 2 INFO nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 9.23 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.318 2 DEBUG nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.325 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.328 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.347 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.375 2 INFO nova.compute.manager [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 10.15 seconds to build instance.
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.390 2 DEBUG oslo_concurrency.lockutils [None req-94e39057-d2aa-40e2-9408-583b263bff3f 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:11 compute-1 nova_compute[230518]: 2025-10-02 12:13:11.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:12 compute-1 ceph-mon[80926]: pgmap v996: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 736 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:12.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.759 2 DEBUG nova.compute.manager [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.759 2 DEBUG oslo_concurrency.lockutils [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.759 2 DEBUG oslo_concurrency.lockutils [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.760 2 DEBUG oslo_concurrency.lockutils [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.760 2 DEBUG nova.compute.manager [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] No waiting events found dispatching network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:13:12 compute-1 nova_compute[230518]: 2025-10-02 12:13:12.760 2 WARNING nova.compute.manager [req-347f088a-79ff-4581-81a9-3bb47987b27f req-e4690af5-93e5-4886-8cfc-0d34f9990b2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received unexpected event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for instance with vm_state active and task_state None.
Oct 02 12:13:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:14 compute-1 ceph-mon[80926]: pgmap v997: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 114 op/s
Oct 02 12:13:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1201019618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:14.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:15 compute-1 ceph-mon[80926]: pgmap v998: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:13:15 compute-1 podman[236991]: 2025-10-02 12:13:15.826412076 +0000 UTC m=+0.078029215 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:13:15 compute-1 podman[236990]: 2025-10-02 12:13:15.82686087 +0000 UTC m=+0.082397032 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:13:16 compute-1 nova_compute[230518]: 2025-10-02 12:13:16.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:16.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Oct 02 12:13:17 compute-1 nova_compute[230518]: 2025-10-02 12:13:17.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:18 compute-1 ceph-mon[80926]: pgmap v999: 305 pgs: 305 active+clean; 147 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct 02 12:13:18 compute-1 ceph-mon[80926]: osdmap e143: 3 total, 3 up, 3 in
Oct 02 12:13:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/261862872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:18.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:18.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/932998566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:20.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:21 compute-1 ceph-mon[80926]: pgmap v1001: 305 pgs: 305 active+clean; 171 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.6 MiB/s wr, 191 op/s
Oct 02 12:13:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3914675815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1702175459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:21 compute-1 nova_compute[230518]: 2025-10-02 12:13:21.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:21.726 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:21 compute-1 nova_compute[230518]: 2025-10-02 12:13:21.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:21.727 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:13:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:21.728 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:22 compute-1 nova_compute[230518]: 2025-10-02 12:13:22.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:22 compute-1 ceph-mon[80926]: pgmap v1002: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 168 op/s
Oct 02 12:13:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2915731104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:22.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:24 compute-1 ceph-mon[80926]: pgmap v1003: 305 pgs: 305 active+clean; 220 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:13:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2071537996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2997894256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:24 compute-1 ovn_controller[129257]: 2025-10-02T12:13:24Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6f:96:77 10.100.0.8
Oct 02 12:13:24 compute-1 ovn_controller[129257]: 2025-10-02T12:13:24Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6f:96:77 10.100.0.8
Oct 02 12:13:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:24.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:24.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3950564086' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:25 compute-1 podman[237035]: 2025-10-02 12:13:25.807416 +0000 UTC m=+0.056606761 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct 02 12:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:25.912 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:25.913 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:25.913 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:26 compute-1 ceph-mon[80926]: pgmap v1004: 305 pgs: 305 active+clean; 292 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 226 op/s
Oct 02 12:13:26 compute-1 nova_compute[230518]: 2025-10-02 12:13:26.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:27 compute-1 nova_compute[230518]: 2025-10-02 12:13:27.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:28 compute-1 ceph-mon[80926]: pgmap v1005: 305 pgs: 305 active+clean; 318 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 9.7 MiB/s wr, 238 op/s
Oct 02 12:13:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:28.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.359 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Creating tmpfile /var/lib/nova/instances/tmp0qr7i040 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.481 2 DEBUG nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0qr7i040',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.509 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.509 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.518 2 INFO nova.compute.rpcapi [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Oct 02 12:13:29 compute-1 nova_compute[230518]: 2025-10-02 12:13:29.518 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:29 compute-1 podman[237055]: 2025-10-02 12:13:29.800397798 +0000 UTC m=+0.055511887 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:30 compute-1 ceph-mon[80926]: pgmap v1006: 305 pgs: 305 active+clean; 331 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 1014 KiB/s rd, 8.6 MiB/s wr, 221 op/s
Oct 02 12:13:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:30.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:30.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.062 2 DEBUG nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0qr7i040',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b86a484-6fc6-4efa-983f-fb93053b0874',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.087 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.087 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.087 2 DEBUG nova.network.neutron [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.101 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "51b9c197-3185-48a3-9986-e579c83abc88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.101 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.115 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.183 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.184 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.194 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.195 2 INFO nova.compute.claims [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.324 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/812512747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.764 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.772 2 DEBUG nova.compute.provider_tree [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.793 2 DEBUG nova.scheduler.client.report [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.823 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.824 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.867 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.867 2 DEBUG nova.network.neutron [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.888 2 INFO nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:13:31 compute-1 nova_compute[230518]: 2025-10-02 12:13:31.912 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:13:32 compute-1 ceph-mon[80926]: pgmap v1007: 305 pgs: 305 active+clean; 337 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.3 MiB/s wr, 292 op/s
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.060 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.061 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.061 2 INFO nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Creating image(s)
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.089 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.117 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.147 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.152 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.179 2 DEBUG nova.network.neutron [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.180 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.213 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.214 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.214 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.215 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.239 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.243 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51b9c197-3185-48a3-9986-e579c83abc88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:32 compute-1 nova_compute[230518]: 2025-10-02 12:13:32.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:32.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.003 2 DEBUG nova.network.neutron [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [{"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.038 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.040 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0qr7i040',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b86a484-6fc6-4efa-983f-fb93053b0874',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.040 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Creating instance directory: /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.041 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Ensure instance console log exists: /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.041 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.042 2 DEBUG nova.virt.libvirt.vif [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-522976997',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-522976997',id=13,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-es3dgd0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:25Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=2b86a484-6fc6-4efa-983f-fb93053b0874,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.042 2 DEBUG nova.network.os_vif_util [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.043 2 DEBUG nova.network.os_vif_util [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.043 2 DEBUG os_vif [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8879d541-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8879d541-11, col_values=(('external_ids', {'iface-id': '8879d541-1199-497a-b096-b45e17e4df04', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:8f:1e', 'vm-uuid': '2b86a484-6fc6-4efa-983f-fb93053b0874'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:33 compute-1 NetworkManager[44960]: <info>  [1759407213.1051] manager: (tap8879d541-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.112 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.114 2 INFO os_vif [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11')
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.115 2 DEBUG nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.115 2 DEBUG nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0qr7i040',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b86a484-6fc6-4efa-983f-fb93053b0874',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.131 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51b9c197-3185-48a3-9986-e579c83abc88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.206 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] resizing rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.312 2 DEBUG nova.objects.instance [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lazy-loading 'migration_context' on Instance uuid 51b9c197-3185-48a3-9986-e579c83abc88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.327 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.328 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Ensure instance console log exists: /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.329 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.329 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.330 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.331 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.336 2 WARNING nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.340 2 DEBUG nova.virt.libvirt.host [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.341 2 DEBUG nova.virt.libvirt.host [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.349 2 DEBUG nova.virt.libvirt.host [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.350 2 DEBUG nova.virt.libvirt.host [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.351 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.352 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.352 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.352 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.352 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.353 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.353 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.353 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.353 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.354 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.354 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.355 2 DEBUG nova.virt.hardware [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:13:33 compute-1 nova_compute[230518]: 2025-10-02 12:13:33.357 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3184662851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/812512747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.070 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.139 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.174 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.178 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.491 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.492 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.492 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.493 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:13:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2647583963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.629 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.632 2 DEBUG nova.objects.instance [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51b9c197-3185-48a3-9986-e579c83abc88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.648 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <uuid>51b9c197-3185-48a3-9986-e579c83abc88</uuid>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <name>instance-00000010</name>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1654989527</nova:name>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:13:33</nova:creationTime>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:user uuid="8da43dcf236343bfa92dff74df42cb79">tempest-LiveMigrationNegativeTest-496070914-project-member</nova:user>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <nova:project uuid="6933706e32a14e3c92fdf8c1df4f90b2">tempest-LiveMigrationNegativeTest-496070914</nova:project>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <system>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="serial">51b9c197-3185-48a3-9986-e579c83abc88</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="uuid">51b9c197-3185-48a3-9986-e579c83abc88</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </system>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <os>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </os>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <features>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </features>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/51b9c197-3185-48a3-9986-e579c83abc88_disk">
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </source>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/51b9c197-3185-48a3-9986-e579c83abc88_disk.config">
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </source>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:13:34 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/console.log" append="off"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <video>
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </video>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:13:34 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:13:34 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:13:34 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:13:34 compute-1 nova_compute[230518]: </domain>
Oct 02 12:13:34 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:13:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:34.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.704 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.704 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.705 2 INFO nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Using config drive
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.739 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:34.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.928 2 INFO nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Creating config drive at /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config
Oct 02 12:13:34 compute-1 nova_compute[230518]: 2025-10-02 12:13:34.933 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mwj2r92 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:35 compute-1 nova_compute[230518]: 2025-10-02 12:13:35.056 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_mwj2r92" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:35 compute-1 nova_compute[230518]: 2025-10-02 12:13:35.087 2 DEBUG nova.storage.rbd_utils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] rbd image 51b9c197-3185-48a3-9986-e579c83abc88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:13:35 compute-1 nova_compute[230518]: 2025-10-02 12:13:35.090 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config 51b9c197-3185-48a3-9986-e579c83abc88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:35 compute-1 ceph-mon[80926]: pgmap v1008: 305 pgs: 305 active+clean; 339 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 355 op/s
Oct 02 12:13:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3184662851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/56871212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2647583963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.520 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.562 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.562 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.563 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.567 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.568 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.588 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.589 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.589 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.589 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.589 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.608 2 DEBUG nova.network.neutron [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Port 8879d541-1199-497a-b096-b45e17e4df04 updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.610 2 DEBUG nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp0qr7i040',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2b86a484-6fc6-4efa-983f-fb93053b0874',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:13:36 compute-1 ceph-mon[80926]: pgmap v1009: 305 pgs: 305 active+clean; 362 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.0 MiB/s wr, 361 op/s
Oct 02 12:13:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1645490694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/103446974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:36.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:36 compute-1 systemd[1]: Starting libvirt proxy daemon...
Oct 02 12:13:36 compute-1 systemd[1]: Started libvirt proxy daemon.
Oct 02 12:13:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:13:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:36.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:13:36 compute-1 kernel: tap8879d541-11: entered promiscuous mode
Oct 02 12:13:36 compute-1 NetworkManager[44960]: <info>  [1759407216.9161] manager: (tap8879d541-11): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Oct 02 12:13:36 compute-1 ovn_controller[129257]: 2025-10-02T12:13:36Z|00061|binding|INFO|Claiming lport 8879d541-1199-497a-b096-b45e17e4df04 for this additional chassis.
Oct 02 12:13:36 compute-1 ovn_controller[129257]: 2025-10-02T12:13:36Z|00062|binding|INFO|8879d541-1199-497a-b096-b45e17e4df04: Claiming fa:16:3e:d1:8f:1e 10.100.0.4
Oct 02 12:13:36 compute-1 ovn_controller[129257]: 2025-10-02T12:13:36Z|00063|binding|INFO|Claiming lport 96e672de-12ad-4022-be24-94113ee6de10 for this additional chassis.
Oct 02 12:13:36 compute-1 ovn_controller[129257]: 2025-10-02T12:13:36Z|00064|binding|INFO|96e672de-12ad-4022-be24-94113ee6de10: Claiming fa:16:3e:bb:a4:98 19.80.0.36
Oct 02 12:13:36 compute-1 nova_compute[230518]: 2025-10-02 12:13:36.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:36 compute-1 systemd-machined[188247]: New machine qemu-6-instance-0000000d.
Oct 02 12:13:36 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Oct 02 12:13:37 compute-1 systemd-udevd[237436]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:37 compute-1 NetworkManager[44960]: <info>  [1759407217.0221] device (tap8879d541-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:13:37 compute-1 NetworkManager[44960]: <info>  [1759407217.0235] device (tap8879d541-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:37 compute-1 ovn_controller[129257]: 2025-10-02T12:13:37Z|00065|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 ovn-installed in OVS
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3594061908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.061 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.140 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.141 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.145 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.146 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.149 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.150 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:13:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.318 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.319 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4758MB free_disk=20.820781707763672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.320 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.320 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.367 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration for instance 2b86a484-6fc6-4efa-983f-fb93053b0874 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.396 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating resource usage from migration 22d4d26f-f69b-44aa-b104-f9e752cdc5d9
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.397 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Starting to track incoming migration 22d4d26f-f69b-44aa-b104-f9e752cdc5d9 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.430 2 DEBUG oslo_concurrency.processutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config 51b9c197-3185-48a3-9986-e579c83abc88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.431 2 INFO nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Deleting local config drive /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88/disk.config because it was imported into RBD.
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.457 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance f85aa55e-c534-4270-b8bb-d25f8026084c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.479 2 WARNING nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 2b86a484-6fc6-4efa-983f-fb93053b0874 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.479 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 51b9c197-3185-48a3-9986-e579c83abc88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.480 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.480 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:13:37 compute-1 systemd-machined[188247]: New machine qemu-7-instance-00000010.
Oct 02 12:13:37 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-00000010.
Oct 02 12:13:37 compute-1 nova_compute[230518]: 2025-10-02 12:13:37.623 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:37 compute-1 ceph-mon[80926]: pgmap v1010: 305 pgs: 305 active+clean; 374 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.5 MiB/s wr, 322 op/s
Oct 02 12:13:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3594061908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:37 compute-1 ceph-mon[80926]: osdmap e144: 3 total, 3 up, 3 in
Oct 02 12:13:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3683659444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.140 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.146 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.293 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.326 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.327 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.329 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407218.3290842, 51b9c197-3185-48a3-9986-e579c83abc88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.329 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] VM Resumed (Lifecycle Event)
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.332 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.332 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.334 2 INFO nova.virt.libvirt.driver [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance spawned successfully.
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.334 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.356 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.360 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.363 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.363 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.364 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.364 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.364 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.365 2 DEBUG nova.virt.libvirt.driver [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.388 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.389 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407218.3316774, 51b9c197-3185-48a3-9986-e579c83abc88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.390 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] VM Started (Lifecycle Event)
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.413 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.416 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.422 2 INFO nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Took 6.36 seconds to spawn the instance on the hypervisor.
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.422 2 DEBUG nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.432 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.492 2 INFO nova.compute.manager [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Took 7.33 seconds to build instance.
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.505 2 DEBUG oslo_concurrency.lockutils [None req-4f31a188-34c1-47e4-a43b-84eb42ef8c42 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.796 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407218.7961817, 2b86a484-6fc6-4efa-983f-fb93053b0874 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.796 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] VM Started (Lifecycle Event)
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.816 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.817 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:13:38 compute-1 nova_compute[230518]: 2025-10-02 12:13:38.828 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:38.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.408 2 DEBUG nova.objects.instance [None req-8ebf177a-fd2b-4493-b144-f0c772aa6fd6 43f721199ae646a59ae360e489d46a57 40ee8077bbf945118456a5e1cecabbda - - default default] Lazy-loading 'pci_devices' on Instance uuid 51b9c197-3185-48a3-9986-e579c83abc88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.432 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407219.4327145, 51b9c197-3185-48a3-9986-e579c83abc88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.433 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] VM Paused (Lifecycle Event)
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.454 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.458 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.479 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] During sync_power_state the instance has a pending task (suspending). Skip.
Oct 02 12:13:39 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct 02 12:13:39 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000010.scope: Consumed 1.819s CPU time.
Oct 02 12:13:39 compute-1 systemd-machined[188247]: Machine qemu-7-instance-00000010 terminated.
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.718 2 DEBUG nova.compute.manager [None req-8ebf177a-fd2b-4493-b144-f0c772aa6fd6 43f721199ae646a59ae360e489d46a57 40ee8077bbf945118456a5e1cecabbda - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3683659444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.850 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407219.850089, 2b86a484-6fc6-4efa-983f-fb93053b0874 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.850 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] VM Resumed (Lifecycle Event)
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.873 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.875 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:13:39 compute-1 nova_compute[230518]: 2025-10-02 12:13:39.903 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:13:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:40.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:40 compute-1 ceph-mon[80926]: pgmap v1012: 305 pgs: 305 active+clean; 386 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.2 MiB/s wr, 296 op/s
Oct 02 12:13:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4182321119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:41 compute-1 nova_compute[230518]: 2025-10-02 12:13:41.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:41 compute-1 sudo[237573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:41 compute-1 sudo[237573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:41 compute-1 sudo[237573]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:41 compute-1 sudo[237598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:13:41 compute-1 sudo[237598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:41 compute-1 sudo[237598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:41 compute-1 sudo[237623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:13:41 compute-1 sudo[237623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:41 compute-1 sudo[237623]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:42 compute-1 sudo[237648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:13:42 compute-1 sudo[237648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:13:42 compute-1 ceph-mon[80926]: pgmap v1013: 305 pgs: 305 active+clean; 416 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:13:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2859817929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2102508875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2057945220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:42 compute-1 sudo[237648]: pam_unix(sudo:session): session closed for user root
Oct 02 12:13:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.736 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "51b9c197-3185-48a3-9986-e579c83abc88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.737 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.737 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "51b9c197-3185-48a3-9986-e579c83abc88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.737 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.738 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.739 2 INFO nova.compute.manager [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Terminating instance
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.739 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "refresh_cache-51b9c197-3185-48a3-9986-e579c83abc88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.740 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquired lock "refresh_cache-51b9c197-3185-48a3-9986-e579c83abc88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:42 compute-1 nova_compute[230518]: 2025-10-02 12:13:42.740 2 DEBUG nova.network.neutron [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00066|binding|INFO|Claiming lport 8879d541-1199-497a-b096-b45e17e4df04 for this chassis.
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00067|binding|INFO|8879d541-1199-497a-b096-b45e17e4df04: Claiming fa:16:3e:d1:8f:1e 10.100.0.4
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00068|binding|INFO|Claiming lport 96e672de-12ad-4022-be24-94113ee6de10 for this chassis.
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00069|binding|INFO|96e672de-12ad-4022-be24-94113ee6de10: Claiming fa:16:3e:bb:a4:98 19.80.0.36
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00070|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 up in Southbound
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00071|binding|INFO|Setting lport 96e672de-12ad-4022-be24-94113ee6de10 up in Southbound
Oct 02 12:13:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.879 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:8f:1e 10.100.0.4'], port_security=['fa:16:3e:d1:8f:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1633959326', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b86a484-6fc6-4efa-983f-fb93053b0874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1633959326', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8879d541-1199-497a-b096-b45e17e4df04) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.881 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a4:98 19.80.0.36'], port_security=['fa:16:3e:bb:a4:98 19.80.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['8879d541-1199-497a-b096-b45e17e4df04'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1987210166', 'neutron:cidrs': '19.80.0.36/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1987210166', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ba88d201-1b94-4e72-bbe3-032bdf9cfc2d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96e672de-12ad-4022-be24-94113ee6de10) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.882 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8879d541-1199-497a-b096-b45e17e4df04 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 bound to our chassis
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.883 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.895 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b646d5d-bd81-4fa4-b822-6ddb7f993bc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.896 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5b610572-01 in ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.899 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5b610572-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.899 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa92982-0c4d-48ec-8315-982579fb5af9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.900 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a745415-5f9e-49ee-9bd1-6722cd0e49d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.911 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f397ab-449f-4bf9-bd2e-328dae69ca4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d09c95b0-4e33-4b2c-b008-cf81d9739606]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.958 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eb519e-dbea-4f76-b042-dfafd6ed9242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 NetworkManager[44960]: <info>  [1759407222.9637] manager: (tap5b610572-00): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.963 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1139e38-275b-43d2-8fe5-8a96ceb947de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:42 compute-1 systemd-udevd[237711]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:42.996 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8237b595-f3fd-40e4-ad8d-b13675c9dac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:42 compute-1 ovn_controller[129257]: 2025-10-02T12:13:42Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:8f:1e 10.100.0.4
Oct 02 12:13:43 compute-1 ovn_controller[129257]: 2025-10-02T12:13:43Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:8f:1e 10.100.0.4
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.000 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[810547e0-8a14-46a2-828f-be3a5941ef88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.006 2 DEBUG nova.network.neutron [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:43 compute-1 NetworkManager[44960]: <info>  [1759407223.0212] device (tap5b610572-00): carrier: link connected
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.026 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb83e59-e1a4-4832-9e0c-ba857cf0243a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.042 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f417b912-b714-46cd-9b1a-6df18d00b0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237730, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.057 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8a81ff07-faa9-48f9-81bf-7b221ade2a03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:ed9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508657, 'tstamp': 508657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237731, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.073 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c3865797-0968-4929-b704-bf398e0375e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237732, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.104 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[15d25526-1444-4439-8373-8e5e8103f860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.166 2 INFO nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Post operation of migration started
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.167 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d444195-ae1e-4b59-b896-4d96d979c8c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.168 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.169 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.169 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-1 NetworkManager[44960]: <info>  [1759407223.1719] manager: (tap5b610572-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 02 12:13:43 compute-1 kernel: tap5b610572-00: entered promiscuous mode
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.174 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-1 ovn_controller[129257]: 2025-10-02T12:13:43Z|00072|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.192 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.193 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba89f7-1496-4ac7-8745-6c28155459af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.194 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/5b610572-0903-4bfb-be0b-9848e0af3ae3.pid.haproxy
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.194 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'env', 'PROCESS_TAG=haproxy-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5b610572-0903-4bfb-be0b-9848e0af3ae3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.390 2 DEBUG nova.network.neutron [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.405 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Releasing lock "refresh_cache-51b9c197-3185-48a3-9986-e579c83abc88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.406 2 DEBUG nova.compute.manager [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.413 2 INFO nova.virt.libvirt.driver [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance destroyed successfully.
Oct 02 12:13:43 compute-1 nova_compute[230518]: 2025-10-02 12:13:43.414 2 DEBUG nova.objects.instance [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lazy-loading 'resources' on Instance uuid 51b9c197-3185-48a3-9986-e579c83abc88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:13:43 compute-1 podman[237782]: 2025-10-02 12:13:43.552033693 +0000 UTC m=+0.045848613 container create 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:13:43 compute-1 systemd[1]: Started libpod-conmon-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope.
Oct 02 12:13:43 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:13:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3b2f9ef863e45e32afb62235a57ad96286386eb759fa04a2bc6eb89ccc840d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:43 compute-1 podman[237782]: 2025-10-02 12:13:43.528005547 +0000 UTC m=+0.021820487 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:43 compute-1 podman[237782]: 2025-10-02 12:13:43.627137276 +0000 UTC m=+0.120952226 container init 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:43 compute-1 podman[237782]: 2025-10-02 12:13:43.632450132 +0000 UTC m=+0.126265052 container start 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:13:43 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : New worker (237803) forked
Oct 02 12:13:43 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : Loading success.
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.725 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 96e672de-12ad-4022-be24-94113ee6de10 in datapath bdc26f36-19a2-41f9-8f78-61503fbb20a7 unbound from our chassis
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.727 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bdc26f36-19a2-41f9-8f78-61503fbb20a7
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.740 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2842a88-9504-4268-a45c-8525d10a1536]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.741 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbdc26f36-11 in ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.743 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbdc26f36-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.743 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf913527-fcfd-4bd2-beee-cb7390f69e4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.744 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da9c1fbf-af4f-41e4-8489-e6cf3bca4e53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.757 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[78d1f3fc-cdd1-4bb8-aaba-e84743c7d0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.782 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[07311cb5-55da-4568-84a2-4e981142de58]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.815 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[51781afe-d569-4304-8e75-c3d3e8c74236]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 NetworkManager[44960]: <info>  [1759407223.8244] manager: (tapbdc26f36-10): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Oct 02 12:13:43 compute-1 systemd-udevd[237717]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.824 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[23dc4b9a-5a6f-4e70-93ef-a02a23f63d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.859 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ae91b873-f30b-4574-8607-1814e3f17a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.862 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fd48f595-11ff-40e7-beba-5cf6c8cac6a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 NetworkManager[44960]: <info>  [1759407223.8825] device (tapbdc26f36-10): carrier: link connected
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.888 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5f88d596-97ad-43b1-995e-d824153bc9ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.905 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a675be8f-a35d-4835-ac36-30a315db05e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbdc26f36-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:4e:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508744, 'reachable_time': 38785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237823, 'error': None, 'target': 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.922 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[94bacba5-a187-47b6-9a36-b6ce9f826944]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:4e93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508744, 'tstamp': 508744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237824, 'error': None, 'target': 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.938 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[abe540e2-cbcb-44e0-87ed-a8676cbbb5f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbdc26f36-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:4e:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508744, 'reachable_time': 38785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237825, 'error': None, 'target': 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:43.964 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5888507-5b82-4b8a-9aaf-28d944eee476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.018 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4db7edd8-b391-42a1-bd0c-eb650c499fc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.019 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc26f36-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.020 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.020 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdc26f36-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:44 compute-1 kernel: tapbdc26f36-10: entered promiscuous mode
Oct 02 12:13:44 compute-1 NetworkManager[44960]: <info>  [1759407224.0237] manager: (tapbdc26f36-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.025 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbdc26f36-10, col_values=(('external_ids', {'iface-id': '2278bdaf-c37b-4127-83d4-ca11f07feaa5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:44 compute-1 ovn_controller[129257]: 2025-10-02T12:13:44Z|00073|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.049 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bdc26f36-19a2-41f9-8f78-61503fbb20a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bdc26f36-19a2-41f9-8f78-61503fbb20a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.050 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76e13227-2d68-4ad0-b0bb-8cb16ba9ad75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.050 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-bdc26f36-19a2-41f9-8f78-61503fbb20a7
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/bdc26f36-19a2-41f9-8f78-61503fbb20a7.pid.haproxy
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID bdc26f36-19a2-41f9-8f78-61503fbb20a7
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:13:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:13:44.051 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'env', 'PROCESS_TAG=haproxy-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bdc26f36-19a2-41f9-8f78-61503fbb20a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:13:44 compute-1 podman[237857]: 2025-10-02 12:13:44.438734706 +0000 UTC m=+0.062090534 container create 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:13:44 compute-1 systemd[1]: Started libpod-conmon-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope.
Oct 02 12:13:44 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:13:44 compute-1 podman[237857]: 2025-10-02 12:13:44.407475833 +0000 UTC m=+0.030831691 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:13:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/542123bb31b236d9dd2f1f394035551e9fafc6bc648a18f686f01cdda5789143/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:13:44 compute-1 podman[237857]: 2025-10-02 12:13:44.515183891 +0000 UTC m=+0.138539729 container init 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:13:44 compute-1 podman[237857]: 2025-10-02 12:13:44.520314622 +0000 UTC m=+0.143670450 container start 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.538 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.539 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:13:44 compute-1 nova_compute[230518]: 2025-10-02 12:13:44.539 2 DEBUG nova.network.neutron [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:13:44 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : New worker (237878) forked
Oct 02 12:13:44 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : Loading success.
Oct 02 12:13:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:44.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:44 compute-1 ceph-mon[80926]: pgmap v1014: 305 pgs: 305 active+clean; 463 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.4 MiB/s wr, 247 op/s
Oct 02 12:13:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:44.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:45 compute-1 ceph-mon[80926]: pgmap v1015: 305 pgs: 305 active+clean; 465 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 8.6 MiB/s wr, 206 op/s
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:13:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:13:46 compute-1 nova_compute[230518]: 2025-10-02 12:13:46.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:46.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:46 compute-1 podman[237888]: 2025-10-02 12:13:46.818322263 +0000 UTC m=+0.061770445 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:13:46 compute-1 podman[237887]: 2025-10-02 12:13:46.848325757 +0000 UTC m=+0.096735885 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 02 12:13:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:46.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.325 2 DEBUG nova.network.neutron [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [{"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.344 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-2b86a484-6fc6-4efa-983f-fb93053b0874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.358 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.359 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.359 2 DEBUG oslo_concurrency.lockutils [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.365 2 INFO nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:13:47 compute-1 virtqemud[230067]: Domain id=6 name='instance-0000000d' uuid=2b86a484-6fc6-4efa-983f-fb93053b0874 is tainted: custom-monitor
Oct 02 12:13:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.666 2 INFO nova.virt.libvirt.driver [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Deleting instance files /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88_del
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.667 2 INFO nova.virt.libvirt.driver [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Deletion of /var/lib/nova/instances/51b9c197-3185-48a3-9986-e579c83abc88_del complete
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.712 2 INFO nova.compute.manager [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Took 4.31 seconds to destroy the instance on the hypervisor.
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.713 2 DEBUG oslo.service.loopingcall [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.713 2 DEBUG nova.compute.manager [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:13:47 compute-1 nova_compute[230518]: 2025-10-02 12:13:47.713 2 DEBUG nova.network.neutron [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:13:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.376 2 INFO nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.469 2 DEBUG nova.network.neutron [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.485 2 DEBUG nova.network.neutron [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.505 2 INFO nova.compute.manager [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Took 0.79 seconds to deallocate network for instance.
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.557 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.558 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:13:48 compute-1 ceph-mon[80926]: pgmap v1016: 305 pgs: 305 active+clean; 462 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.0 MiB/s wr, 245 op/s
Oct 02 12:13:48 compute-1 ceph-mon[80926]: osdmap e145: 3 total, 3 up, 3 in
Oct 02 12:13:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:48.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:48 compute-1 nova_compute[230518]: 2025-10-02 12:13:48.889 2 DEBUG oslo_concurrency.processutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:13:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:13:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2261448251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.331 2 DEBUG oslo_concurrency.processutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.337 2 DEBUG nova.compute.provider_tree [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.361 2 DEBUG nova.scheduler.client.report [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.381 2 INFO nova.virt.libvirt.driver [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.382 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.390 2 DEBUG nova.compute.manager [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.408 2 DEBUG nova.objects.instance [None req-2ffac1ab-a2be-438d-ad11-adbf9586d0a3 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.412 2 INFO nova.scheduler.client.report [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Deleted allocations for instance 51b9c197-3185-48a3-9986-e579c83abc88
Oct 02 12:13:49 compute-1 nova_compute[230518]: 2025-10-02 12:13:49.517 2 DEBUG oslo_concurrency.lockutils [None req-9ae707b0-f662-419d-a8bd-6e1c557e6126 8da43dcf236343bfa92dff74df42cb79 6933706e32a14e3c92fdf8c1df4f90b2 - - default default] Lock "51b9c197-3185-48a3-9986-e579c83abc88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:13:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2261448251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:51 compute-1 ceph-mon[80926]: pgmap v1018: 305 pgs: 305 active+clean; 471 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.7 MiB/s wr, 319 op/s
Oct 02 12:13:51 compute-1 nova_compute[230518]: 2025-10-02 12:13:51.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:52 compute-1 ceph-mon[80926]: pgmap v1019: 305 pgs: 305 active+clean; 481 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:13:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3601294168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:13:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:13:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:52.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:53 compute-1 nova_compute[230518]: 2025-10-02 12:13:53.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3097560303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:54 compute-1 ceph-mon[80926]: pgmap v1020: 305 pgs: 305 active+clean; 484 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:13:54 compute-1 nova_compute[230518]: 2025-10-02 12:13:54.719 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407219.7179334, 51b9c197-3185-48a3-9986-e579c83abc88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:13:54 compute-1 nova_compute[230518]: 2025-10-02 12:13:54.719 2 INFO nova.compute.manager [-] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] VM Stopped (Lifecycle Event)
Oct 02 12:13:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:54 compute-1 nova_compute[230518]: 2025-10-02 12:13:54.744 2 DEBUG nova.compute.manager [None req-29a9c903-b5b1-447d-9c4a-f4a02f1482e4 - - - - - -] [instance: 51b9c197-3185-48a3-9986-e579c83abc88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:13:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:55 compute-1 ceph-mon[80926]: pgmap v1021: 305 pgs: 305 active+clean; 441 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 247 op/s
Oct 02 12:13:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/513098378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:13:56 compute-1 nova_compute[230518]: 2025-10-02 12:13:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:13:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:13:56 compute-1 podman[237952]: 2025-10-02 12:13:56.808474628 +0000 UTC m=+0.060564266 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:13:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:13:58 compute-1 nova_compute[230518]: 2025-10-02 12:13:58.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:13:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:13:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:13:58 compute-1 ceph-mon[80926]: pgmap v1022: 305 pgs: 305 active+clean; 425 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 213 op/s
Oct 02 12:13:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:13:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:13:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:13:58.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:00 compute-1 ceph-mon[80926]: pgmap v1023: 305 pgs: 305 active+clean; 405 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 131 KiB/s wr, 137 op/s
Oct 02 12:14:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:00 compute-1 podman[237973]: 2025-10-02 12:14:00.79488807 +0000 UTC m=+0.046072180 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 02 12:14:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:00.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:01 compute-1 nova_compute[230518]: 2025-10-02 12:14:01.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:01 compute-1 ceph-mon[80926]: pgmap v1024: 305 pgs: 305 active+clean; 417 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Oct 02 12:14:02 compute-1 sudo[237994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:14:02 compute-1 sudo[237994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:02 compute-1 sudo[237994]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:02 compute-1 sudo[238019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:14:02 compute-1 sudo[238019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:14:02 compute-1 sudo[238019]: pam_unix(sudo:session): session closed for user root
Oct 02 12:14:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:02.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:03 compute-1 nova_compute[230518]: 2025-10-02 12:14:03.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:14:04 compute-1 ceph-mon[80926]: pgmap v1025: 305 pgs: 305 active+clean; 430 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:14:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:04 compute-1 nova_compute[230518]: 2025-10-02 12:14:04.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:04.844 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:04.846 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:14:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:04.847 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:14:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1935995377' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:14:06 compute-1 nova_compute[230518]: 2025-10-02 12:14:06.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:06 compute-1 ceph-mon[80926]: pgmap v1026: 305 pgs: 305 active+clean; 408 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Oct 02 12:14:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:07 compute-1 ceph-mon[80926]: pgmap v1027: 305 pgs: 305 active+clean; 382 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 02 12:14:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/333674804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:08 compute-1 nova_compute[230518]: 2025-10-02 12:14:08.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:08.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:10 compute-1 ceph-mon[80926]: pgmap v1028: 305 pgs: 305 active+clean; 358 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 02 12:14:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3576362853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1635128892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1921988349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2250033867' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:11 compute-1 nova_compute[230518]: 2025-10-02 12:14:11.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:12 compute-1 ceph-mon[80926]: pgmap v1029: 305 pgs: 305 active+clean; 401 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 130 op/s
Oct 02 12:14:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:13 compute-1 nova_compute[230518]: 2025-10-02 12:14:13.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:14 compute-1 ceph-mon[80926]: pgmap v1030: 305 pgs: 305 active+clean; 451 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 142 op/s
Oct 02 12:14:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3144243600' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:14.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:15 compute-1 ceph-mon[80926]: pgmap v1031: 305 pgs: 305 active+clean; 462 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct 02 12:14:16 compute-1 nova_compute[230518]: 2025-10-02 12:14:16.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:16.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:17 compute-1 podman[238045]: 2025-10-02 12:14:17.799988409 +0000 UTC m=+0.049373044 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:14:17 compute-1 podman[238044]: 2025-10-02 12:14:17.822183418 +0000 UTC m=+0.072961397 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:14:17 compute-1 ceph-mon[80926]: pgmap v1032: 305 pgs: 305 active+clean; 484 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.1 MiB/s wr, 164 op/s
Oct 02 12:14:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.191 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.191 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.216 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.301 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.302 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.309 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.310 2 INFO nova.compute.claims [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.494 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:14:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:18.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:14:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1261907939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.954 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.961 2 DEBUG nova.compute.provider_tree [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:18 compute-1 nova_compute[230518]: 2025-10-02 12:14:18.985 2 DEBUG nova.scheduler.client.report [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.039 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.040 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.103 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.104 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.142 2 INFO nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.189 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.282 2 INFO nova.virt.block_device [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Booting with volume ff92c1da-c1e7-425c-b20d-f332daad4188 at /dev/vda
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.301 2 DEBUG nova.compute.manager [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.434 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.434 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.472 2 DEBUG nova.policy [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd29391679bd0482aada18c987e4c11ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.482 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'pci_requests' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.511 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.511 2 INFO nova.compute.claims [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.511 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'resources' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.532 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'numa_topology' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.562 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'pci_devices' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.598 2 DEBUG os_brick.utils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.599 2 INFO oslo.privsep.daemon [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp41202kru/privsep.sock']
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.674 2 INFO nova.compute.resource_tracker [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating resource usage from migration 29d56307-e333-4b59-aa08-549a50840cd6
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.675 2 DEBUG nova.compute.resource_tracker [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Starting to track incoming migration 29d56307-e333-4b59-aa08-549a50840cd6 with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:14:19 compute-1 nova_compute[230518]: 2025-10-02 12:14:19.806 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:19 compute-1 ceph-mon[80926]: pgmap v1033: 305 pgs: 305 active+clean; 528 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:14:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1261907939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2639501534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.239 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.246 2 DEBUG nova.compute.provider_tree [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.271 2 DEBUG nova.scheduler.client.report [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.300 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.300 2 INFO nova.compute.manager [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Migrating
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.402 2 INFO oslo.privsep.daemon [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.208 2727 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.212 2727 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.215 2727 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.215 2727 INFO oslo.privsep.daemon [-] privsep daemon running as pid 2727
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.405 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[c98366ed-dad0-4d6e-9164-efe13704523f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.497 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.518 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.519 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a6033555-b82f-4405-a9e8-885a1750113f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.520 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.528 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.529 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdb3ca6-763c-4180-9037-355a62cc9cd9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.531 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.539 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.540 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8ecc8164-9be2-466e-b670-459c32a63fc0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.542 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[2c521e16-3487-427c-8a0a-cdf6fd2c6c2a]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.542 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.569 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.572 2 DEBUG os_brick.initiator.connectors.lightos [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.573 2 DEBUG os_brick.initiator.connectors.lightos [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.573 2 DEBUG os_brick.initiator.connectors.lightos [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.574 2 DEBUG os_brick.utils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] <== get_connector_properties: return (974ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.574 2 DEBUG nova.virt.block_device [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating existing volume attachment record: 22dff84c-2800-4a44-8b74-72aa222f3126 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:14:20 compute-1 nova_compute[230518]: 2025-10-02 12:14:20.630 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Successfully created port: 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:14:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2639501534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:21 compute-1 nova_compute[230518]: 2025-10-02 12:14:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:21 compute-1 ceph-mon[80926]: pgmap v1034: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 250 op/s
Oct 02 12:14:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3593238018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:22.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:22 compute-1 sshd-session[238146]: Accepted publickey for nova from 192.168.122.102 port 54794 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:14:22 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:14:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:22 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:14:22 compute-1 systemd-logind[795]: New session 53 of user nova.
Oct 02 12:14:22 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:14:22 compute-1 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:14:22 compute-1 systemd[238151]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:14:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:22 compute-1 nova_compute[230518]: 2025-10-02 12:14:22.975 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Successfully updated port: 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.013 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.013 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.014 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.023 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.024 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.025 2 INFO nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating image(s)
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.025 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.025 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Ensure instance console log exists: /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.026 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.026 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.026 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:23 compute-1 systemd[238151]: Queued start job for default target Main User Target.
Oct 02 12:14:23 compute-1 systemd[238151]: Created slice User Application Slice.
Oct 02 12:14:23 compute-1 systemd[238151]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:14:23 compute-1 systemd[238151]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:14:23 compute-1 systemd[238151]: Reached target Paths.
Oct 02 12:14:23 compute-1 systemd[238151]: Reached target Timers.
Oct 02 12:14:23 compute-1 systemd[238151]: Starting D-Bus User Message Bus Socket...
Oct 02 12:14:23 compute-1 systemd[238151]: Starting Create User's Volatile Files and Directories...
Oct 02 12:14:23 compute-1 systemd[238151]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:14:23 compute-1 systemd[238151]: Reached target Sockets.
Oct 02 12:14:23 compute-1 systemd[238151]: Finished Create User's Volatile Files and Directories.
Oct 02 12:14:23 compute-1 systemd[238151]: Reached target Basic System.
Oct 02 12:14:23 compute-1 systemd[238151]: Reached target Main User Target.
Oct 02 12:14:23 compute-1 systemd[238151]: Startup finished in 157ms.
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:23 compute-1 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:14:23 compute-1 systemd[1]: Started Session 53 of User nova.
Oct 02 12:14:23 compute-1 sshd-session[238146]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.181 2 DEBUG nova.compute.manager [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.182 2 DEBUG nova.compute.manager [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing instance network info cache due to event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.182 2 DEBUG oslo_concurrency.lockutils [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:23 compute-1 sshd-session[238165]: Received disconnect from 192.168.122.102 port 54794:11: disconnected by user
Oct 02 12:14:23 compute-1 sshd-session[238165]: Disconnected from user nova 192.168.122.102 port 54794
Oct 02 12:14:23 compute-1 sshd-session[238146]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:14:23 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Oct 02 12:14:23 compute-1 systemd-logind[795]: Session 53 logged out. Waiting for processes to exit.
Oct 02 12:14:23 compute-1 systemd-logind[795]: Removed session 53.
Oct 02 12:14:23 compute-1 sshd-session[238167]: Accepted publickey for nova from 192.168.122.102 port 54800 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:14:23 compute-1 systemd-logind[795]: New session 55 of user nova.
Oct 02 12:14:23 compute-1 systemd[1]: Started Session 55 of User nova.
Oct 02 12:14:23 compute-1 sshd-session[238167]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:14:23 compute-1 sshd-session[238170]: Received disconnect from 192.168.122.102 port 54800:11: disconnected by user
Oct 02 12:14:23 compute-1 sshd-session[238170]: Disconnected from user nova 192.168.122.102 port 54800
Oct 02 12:14:23 compute-1 sshd-session[238167]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:14:23 compute-1 systemd[1]: session-55.scope: Deactivated successfully.
Oct 02 12:14:23 compute-1 systemd-logind[795]: Session 55 logged out. Waiting for processes to exit.
Oct 02 12:14:23 compute-1 systemd-logind[795]: Removed session 55.
Oct 02 12:14:23 compute-1 nova_compute[230518]: 2025-10-02 12:14:23.545 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:14:24 compute-1 ceph-mon[80926]: pgmap v1035: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 238 op/s
Oct 02 12:14:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3459332538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2022665077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.767 2 DEBUG nova.network.neutron [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:24.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.819 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.820 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance network_info: |[{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.820 2 DEBUG oslo_concurrency.lockutils [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.820 2 DEBUG nova.network.neutron [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.824 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start _get_guest_xml network_info=[{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ff92c1da-c1e7-425c-b20d-f332daad4188', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ff92c1da-c1e7-425c-b20d-f332daad4188', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'attached_at': '', 'detached_at': '', 'volume_id': 'ff92c1da-c1e7-425c-b20d-f332daad4188', 'serial': 'ff92c1da-c1e7-425c-b20d-f332daad4188'}, 'boot_index': 0, 'attachment_id': '22dff84c-2800-4a44-8b74-72aa222f3126', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.831 2 WARNING nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.838 2 DEBUG nova.virt.libvirt.host [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.840 2 DEBUG nova.virt.libvirt.host [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.847 2 DEBUG nova.virt.libvirt.host [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.848 2 DEBUG nova.virt.libvirt.host [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.849 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.849 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.850 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.850 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.850 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.851 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.851 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.851 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.851 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.851 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.852 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.852 2 DEBUG nova.virt.hardware [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.883 2 DEBUG nova.storage.rbd_utils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:24 compute-1 nova_compute[230518]: 2025-10-02 12:14:24.887 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3948354084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.363 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.363 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.364 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.365 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.409 2 DEBUG nova.virt.libvirt.vif [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:19Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.410 2 DEBUG nova.network.os_vif_util [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.411 2 DEBUG nova.network.os_vif_util [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.412 2 DEBUG nova.objects.instance [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'pci_devices' on Instance uuid b8f8f97e-2823-451c-ab36-7f94ade8be46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.426 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <uuid>b8f8f97e-2823-451c-ab36-7f94ade8be46</uuid>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <name>instance-00000014</name>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-927671937</nova:name>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:14:24</nova:creationTime>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:user uuid="d29391679bd0482aada18c987e4c11ca">tempest-LiveAutoBlockMigrationV225Test-211124371-project-member</nova:user>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:project uuid="4db2957ac1b546178a9f2c0f24807e5b">tempest-LiveAutoBlockMigrationV225Test-211124371</nova:project>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <nova:port uuid="647b79a6-6cf5-4d28-afd1-9e21f2a56e32">
Oct 02 12:14:25 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <system>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="serial">b8f8f97e-2823-451c-ab36-7f94ade8be46</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="uuid">b8f8f97e-2823-451c-ab36-7f94ade8be46</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </system>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <os>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </os>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <features>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </features>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config">
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </source>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-ff92c1da-c1e7-425c-b20d-f332daad4188">
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </source>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:14:25 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <serial>ff92c1da-c1e7-425c-b20d-f332daad4188</serial>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:b9:be:58"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <target dev="tap647b79a6-6c"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/console.log" append="off"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <video>
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </video>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:14:25 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:14:25 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:14:25 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:14:25 compute-1 nova_compute[230518]: </domain>
Oct 02 12:14:25 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.427 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Preparing to wait for external event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.427 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.427 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.427 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.428 2 DEBUG nova.virt.libvirt.vif [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:14:19Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.428 2 DEBUG nova.network.os_vif_util [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.429 2 DEBUG nova.network.os_vif_util [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.429 2 DEBUG os_vif [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.430 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647b79a6-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647b79a6-6c, col_values=(('external_ids', {'iface-id': '647b79a6-6cf5-4d28-afd1-9e21f2a56e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:be:58', 'vm-uuid': 'b8f8f97e-2823-451c-ab36-7f94ade8be46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-1 NetworkManager[44960]: <info>  [1759407265.4359] manager: (tap647b79a6-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.441 2 INFO os_vif [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.505 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.505 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.506 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] No VIF found with MAC fa:16:3e:b9:be:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.506 2 INFO nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Using config drive
Oct 02 12:14:25 compute-1 nova_compute[230518]: 2025-10-02 12:14:25.528 2 DEBUG nova.storage.rbd_utils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:25.913 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:25.914 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:25.914 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.134 2 INFO nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating config drive at /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.139 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi9sm7wbc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.263 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi9sm7wbc" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.288 2 DEBUG nova.storage.rbd_utils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] rbd image b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.292 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:26 compute-1 ceph-mon[80926]: pgmap v1036: 305 pgs: 305 active+clean; 544 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 12:14:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3948354084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.696 2 DEBUG oslo_concurrency.processutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config b8f8f97e-2823-451c-ab36-7f94ade8be46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.697 2 INFO nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting local config drive /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/disk.config because it was imported into RBD.
Oct 02 12:14:26 compute-1 kernel: tap647b79a6-6c: entered promiscuous mode
Oct 02 12:14:26 compute-1 NetworkManager[44960]: <info>  [1759407266.7440] manager: (tap647b79a6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Oct 02 12:14:26 compute-1 ovn_controller[129257]: 2025-10-02T12:14:26Z|00074|binding|INFO|Claiming lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for this chassis.
Oct 02 12:14:26 compute-1 ovn_controller[129257]: 2025-10-02T12:14:26Z|00075|binding|INFO|647b79a6-6cf5-4d28-afd1-9e21f2a56e32: Claiming fa:16:3e:b9:be:58 10.100.0.12
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:26 compute-1 ovn_controller[129257]: 2025-10-02T12:14:26Z|00076|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 ovn-installed in OVS
Oct 02 12:14:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:26.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:26 compute-1 systemd-machined[188247]: New machine qemu-8-instance-00000014.
Oct 02 12:14:26 compute-1 nova_compute[230518]: 2025-10-02 12:14:26.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:26 compute-1 systemd[1]: Started Virtual Machine qemu-8-instance-00000014.
Oct 02 12:14:26 compute-1 systemd-udevd[238286]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:26 compute-1 NetworkManager[44960]: <info>  [1759407266.8041] device (tap647b79a6-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:14:26 compute-1 NetworkManager[44960]: <info>  [1759407266.8049] device (tap647b79a6-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:14:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:26.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.334 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:27 compute-1 ovn_controller[129257]: 2025-10-02T12:14:27Z|00077|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 up in Southbound
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.335 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 bound to our chassis
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.337 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.353 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[335eb307-a61a-4829-90ac-4146bc1b194e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.386 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[00e2541d-90d0-4204-ba39-5d7d840ed0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.389 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0a90555b-5c21-4c54-b06a-a9331fd98d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.415 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8922ced3-7593-433d-b406-61f88663ecf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.432 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[24c4b001-93c3-44c8-a06b-63c1cf59f1d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1336, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1336, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238342, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.447 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5c9574-1845-4ad7-b067-6cd72d4ee508]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238343, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238343, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.448 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.451 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.452 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.452 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:27.452 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.574 2 DEBUG nova.network.neutron [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updated VIF entry in instance network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.575 2 DEBUG nova.network.neutron [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.799 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407267.7989335, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:27 compute-1 nova_compute[230518]: 2025-10-02 12:14:27.799 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Started (Lifecycle Event)
Oct 02 12:14:27 compute-1 podman[238344]: 2025-10-02 12:14:27.809355225 +0000 UTC m=+0.060859845 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:14:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.195 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.198 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407267.7991211, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.198 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Paused (Lifecycle Event)
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.203 2 DEBUG oslo_concurrency.lockutils [req-d0bbe7e3-dc77-4544-9960-a05cea01d6cd req-4957dbd1-0eee-42fa-8e9f-6096e40508b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:28 compute-1 ceph-mon[80926]: pgmap v1037: 305 pgs: 305 active+clean; 545 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.253 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.256 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.303 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.515 2 DEBUG nova.compute.manager [req-137b099f-91a4-4894-b2e4-993a0fe4448c req-bdad9937-f602-4db2-8954-b6843ce2163f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.516 2 DEBUG oslo_concurrency.lockutils [req-137b099f-91a4-4894-b2e4-993a0fe4448c req-bdad9937-f602-4db2-8954-b6843ce2163f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.516 2 DEBUG oslo_concurrency.lockutils [req-137b099f-91a4-4894-b2e4-993a0fe4448c req-bdad9937-f602-4db2-8954-b6843ce2163f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.516 2 DEBUG oslo_concurrency.lockutils [req-137b099f-91a4-4894-b2e4-993a0fe4448c req-bdad9937-f602-4db2-8954-b6843ce2163f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.516 2 DEBUG nova.compute.manager [req-137b099f-91a4-4894-b2e4-993a0fe4448c req-bdad9937-f602-4db2-8954-b6843ce2163f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Processing event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.517 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.520 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407268.5200489, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.521 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Resumed (Lifecycle Event)
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.523 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.526 2 INFO nova.virt.libvirt.driver [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance spawned successfully.
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.527 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.542 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.546 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.562 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.563 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.564 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.565 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.566 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.567 2 DEBUG nova.virt.libvirt.driver [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.573 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.635 2 INFO nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 5.61 seconds to spawn the instance on the hypervisor.
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.636 2 DEBUG nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.699 2 INFO nova.compute.manager [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 10.43 seconds to build instance.
Oct 02 12:14:28 compute-1 nova_compute[230518]: 2025-10-02 12:14:28.723 2 DEBUG oslo_concurrency.lockutils [None req-b9050b57-4bbc-4fdf-b4e6-35ceffd4340a d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:30 compute-1 ceph-mon[80926]: pgmap v1038: 305 pgs: 305 active+clean; 555 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Oct 02 12:14:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:30.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:30.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.984 2 DEBUG nova.compute.manager [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.984 2 DEBUG oslo_concurrency.lockutils [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.984 2 DEBUG oslo_concurrency.lockutils [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.985 2 DEBUG oslo_concurrency.lockutils [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.985 2 DEBUG nova.compute.manager [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:30 compute-1 nova_compute[230518]: 2025-10-02 12:14:30.985 2 WARNING nova.compute.manager [req-f2b51479-6b7b-4910-a096-0426a8b3acdd req-0a275908-9209-4afc-ab22-1d782e5257e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state None.
Oct 02 12:14:31 compute-1 podman[238364]: 2025-10-02 12:14:31.549041255 +0000 UTC m=+0.051911184 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:14:31 compute-1 nova_compute[230518]: 2025-10-02 12:14:31.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:31 compute-1 ceph-mon[80926]: pgmap v1039: 305 pgs: 305 active+clean; 574 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 155 op/s
Oct 02 12:14:32 compute-1 nova_compute[230518]: 2025-10-02 12:14:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:32 compute-1 nova_compute[230518]: 2025-10-02 12:14:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:14:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:32.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:33 compute-1 nova_compute[230518]: 2025-10-02 12:14:33.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:33 compute-1 nova_compute[230518]: 2025-10-02 12:14:33.056 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:33 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:14:33 compute-1 systemd[238151]: Activating special unit Exit the Session...
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped target Main User Target.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped target Basic System.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped target Paths.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped target Sockets.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped target Timers.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:14:33 compute-1 systemd[238151]: Closed D-Bus User Message Bus Socket.
Oct 02 12:14:33 compute-1 systemd[238151]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:14:33 compute-1 systemd[238151]: Removed slice User Application Slice.
Oct 02 12:14:33 compute-1 systemd[238151]: Reached target Shutdown.
Oct 02 12:14:33 compute-1 systemd[238151]: Finished Exit the Session.
Oct 02 12:14:33 compute-1 systemd[238151]: Reached target Exit the Session.
Oct 02 12:14:33 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:14:33 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:14:33 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:14:33 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:14:33 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:14:33 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:14:33 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:14:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1827930705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:33 compute-1 nova_compute[230518]: 2025-10-02 12:14:33.576 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Check if temp file /var/lib/nova/instances/tmpvx44lsjw exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:14:33 compute-1 nova_compute[230518]: 2025-10-02 12:14:33.577 2 DEBUG nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvx44lsjw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:14:34 compute-1 ceph-mon[80926]: pgmap v1040: 305 pgs: 305 active+clean; 577 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 261 op/s
Oct 02 12:14:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4119413179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:34.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:35 compute-1 nova_compute[230518]: 2025-10-02 12:14:35.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:35 compute-1 nova_compute[230518]: 2025-10-02 12:14:35.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:35 compute-1 ceph-mon[80926]: pgmap v1041: 305 pgs: 305 active+clean; 571 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 297 op/s
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.324 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.325 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.325 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.326 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.735116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276735148, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2395, "num_deletes": 252, "total_data_size": 5672497, "memory_usage": 5760208, "flush_reason": "Manual Compaction"}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct 02 12:14:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1965354879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276762667, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 3722828, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22981, "largest_seqno": 25371, "table_properties": {"data_size": 3713161, "index_size": 6033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20530, "raw_average_key_size": 20, "raw_value_size": 3693619, "raw_average_value_size": 3693, "num_data_blocks": 267, "num_entries": 1000, "num_filter_entries": 1000, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407069, "oldest_key_time": 1759407069, "file_creation_time": 1759407276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 27597 microseconds, and 7560 cpu microseconds.
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.762709) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 3722828 bytes OK
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.762728) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.764572) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.764585) EVENT_LOG_v1 {"time_micros": 1759407276764580, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.764601) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 5661816, prev total WAL file size 5661816, number of live WAL files 2.
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.765589) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(3635KB)], [48(7171KB)]
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276765614, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 11066035, "oldest_snapshot_seqno": -1}
Oct 02 12:14:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:36.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 4880 keys, 9054911 bytes, temperature: kUnknown
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276817829, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 9054911, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9021320, "index_size": 20297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 123229, "raw_average_key_size": 25, "raw_value_size": 8932118, "raw_average_value_size": 1830, "num_data_blocks": 829, "num_entries": 4880, "num_filter_entries": 4880, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.818022) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9054911 bytes
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.819334) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 211.7 rd, 173.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 5402, records dropped: 522 output_compression: NoCompression
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.819349) EVENT_LOG_v1 {"time_micros": 1759407276819342, "job": 28, "event": "compaction_finished", "compaction_time_micros": 52275, "compaction_time_cpu_micros": 17602, "output_level": 6, "num_output_files": 1, "total_output_size": 9054911, "num_input_records": 5402, "num_output_records": 4880, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276819927, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407276821033, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.765539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.821059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.821062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.821064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.821065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:14:36.821067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.879 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.880 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquired lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:36 compute-1 nova_compute[230518]: 2025-10-02 12:14:36.882 2 DEBUG nova.network.neutron [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:14:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:37 compute-1 nova_compute[230518]: 2025-10-02 12:14:37.156 2 DEBUG nova.network.neutron [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:14:37 compute-1 ceph-mon[80926]: pgmap v1042: 305 pgs: 305 active+clean; 563 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.3 MiB/s wr, 296 op/s
Oct 02 12:14:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:38.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:14:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:38.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.247 2 DEBUG nova.network.neutron [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.311 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Releasing lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.480 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.482 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.482 2 INFO nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Creating image(s)
Oct 02 12:14:39 compute-1 nova_compute[230518]: 2025-10-02 12:14:39.516 2 DEBUG nova.storage.rbd_utils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] creating snapshot(nova-resize) on rbd image(fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:14:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Oct 02 12:14:39 compute-1 ceph-mon[80926]: pgmap v1043: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 294 op/s
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.052 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.251 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.252 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Ensure instance console log exists: /var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.253 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.253 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.253 2 DEBUG oslo_concurrency.lockutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.255 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.260 2 WARNING nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.266 2 DEBUG nova.virt.libvirt.host [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.267 2 DEBUG nova.virt.libvirt.host [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.272 2 DEBUG nova.virt.libvirt.host [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.273 2 DEBUG nova.virt.libvirt.host [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.274 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.274 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.275 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.275 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.275 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.275 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.276 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.276 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.276 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.276 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.277 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.277 2 DEBUG nova.virt.hardware [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.277 2 DEBUG nova.objects.instance [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.339 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.581 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [{"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.609 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-f85aa55e-c534-4270-b8bb-d25f8026084c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.609 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.610 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.610 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.611 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.636 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.637 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.638 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.638 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.638 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2551709202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.795 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:40 compute-1 nova_compute[230518]: 2025-10-02 12:14:40.835 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:41 compute-1 ceph-mon[80926]: osdmap e146: 3 total, 3 up, 3 in
Oct 02 12:14:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2551709202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2725198995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.206 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.208 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.214 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.214 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.219 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.220 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.283 2 DEBUG oslo_concurrency.processutils [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.285 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <uuid>fcfe251b-73c3-4310-b646-3c6c0a8c7e6e</uuid>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <name>instance-00000012</name>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:name>tempest-MigrationsAdminTest-server-371579099</nova:name>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:14:40</nova:creationTime>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <system>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="serial">fcfe251b-73c3-4310-b646-3c6c0a8c7e6e</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="uuid">fcfe251b-73c3-4310-b646-3c6c0a8c7e6e</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </system>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <os>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </os>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <features>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </features>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_disk">
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </source>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_disk.config">
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </source>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:14:41 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e/console.log" append="off"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <video>
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </video>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:14:41 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:14:41 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:14:41 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:14:41 compute-1 nova_compute[230518]: </domain>
Oct 02 12:14:41 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:14:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:14:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864292701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.342 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.343 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.343 2 INFO nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Using config drive
Oct 02 12:14:41 compute-1 systemd-machined[188247]: New machine qemu-9-instance-00000012.
Oct 02 12:14:41 compute-1 systemd[1]: Started Virtual Machine qemu-9-instance-00000012.
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.480 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.481 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4286MB free_disk=20.715171813964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.481 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.482 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.569 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Applying migration context for instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e as it has an incoming, in-progress migration 29d56307-e333-4b59-aa08-549a50840cd6. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.570 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating resource usage from migration 29d56307-e333-4b59-aa08-549a50840cd6
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.571 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating resource usage from migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.601 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance f85aa55e-c534-4270-b8bb-d25f8026084c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.601 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 2b86a484-6fc6-4efa-983f-fb93053b0874 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.601 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.601 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.602 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.602 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:14:41 compute-1 nova_compute[230518]: 2025-10-02 12:14:41.707 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:14:42 compute-1 ceph-mon[80926]: pgmap v1045: 305 pgs: 305 active+clean; 563 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.6 MiB/s wr, 255 op/s
Oct 02 12:14:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2725198995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2544534418' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2864292701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:14:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2935015047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.159 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.164 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.183 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.216 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.217 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:42 compute-1 ovn_controller[129257]: 2025-10-02T12:14:42Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:be:58 10.100.0.12
Oct 02 12:14:42 compute-1 ovn_controller[129257]: 2025-10-02T12:14:42Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:be:58 10.100.0.12
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.733 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407282.7335937, fcfe251b-73c3-4310-b646-3c6c0a8c7e6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.734 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] VM Resumed (Lifecycle Event)
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.736 2 DEBUG nova.compute.manager [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.739 2 INFO nova.virt.libvirt.driver [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance running successfully.
Oct 02 12:14:42 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.742 2 DEBUG nova.virt.libvirt.guest [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.742 2 DEBUG nova.virt.libvirt.driver [None req-ea512c1f-e00c-42fd-ba39-1da31d8d0939 b48bf9f19611404983b14ed5245fd047 7162d8da66f743e9bb917c9bbfc1e2b3 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.778 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.783 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.825 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.826 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407282.7347407, fcfe251b-73c3-4310-b646-3c6c0a8c7e6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.826 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] VM Started (Lifecycle Event)
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.863 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:42 compute-1 nova_compute[230518]: 2025-10-02 12:14:42.865 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:14:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2935015047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:43 compute-1 nova_compute[230518]: 2025-10-02 12:14:43.213 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:14:44 compute-1 ceph-mon[80926]: pgmap v1046: 305 pgs: 305 active+clean; 583 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 1.8 MiB/s wr, 149 op/s
Oct 02 12:14:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:44.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.022 2 DEBUG nova.compute.manager [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.022 2 DEBUG oslo_concurrency.lockutils [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.022 2 DEBUG oslo_concurrency.lockutils [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.023 2 DEBUG oslo_concurrency.lockutils [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.023 2 DEBUG nova.compute.manager [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.023 2 DEBUG nova.compute.manager [req-2773b6b0-1083-46e6-a616-b4dd51b49bc7 req-8a7c09b8-91d9-4ad6-9c4e-4079efca9dc5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:14:45 compute-1 nova_compute[230518]: 2025-10-02 12:14:45.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-1 ceph-mon[80926]: pgmap v1047: 305 pgs: 305 active+clean; 565 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Oct 02 12:14:46 compute-1 nova_compute[230518]: 2025-10-02 12:14:46.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:46.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:46.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1538923089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/791117221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:14:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:48 compute-1 ceph-mon[80926]: pgmap v1048: 305 pgs: 305 active+clean; 553 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 193 op/s
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.391 2 DEBUG nova.compute.manager [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.391 2 DEBUG oslo_concurrency.lockutils [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.392 2 DEBUG oslo_concurrency.lockutils [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.392 2 DEBUG oslo_concurrency.lockutils [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.392 2 DEBUG nova.compute.manager [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.393 2 WARNING nova.compute.manager [req-4b03d143-3f82-4228-9c55-28365351ff56 req-d653f5cd-15b2-41d4-a202-1e72cfb99266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct 02 12:14:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:48.409 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:48.410 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.651 2 INFO nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 12.57 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.652 2 DEBUG nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.675 2 DEBUG nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvx44lsjw',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1),old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='22dff84c-2800-4a44-8b74-72aa222f3126'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.678 2 DEBUG nova.objects.instance [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lazy-loading 'migration_context' on Instance uuid b8f8f97e-2823-451c-ab36-7f94ade8be46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.679 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.681 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.681 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.706 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Find same serial number: pos=1, serial=ff92c1da-c1e7-425c-b20d-f332daad4188 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.708 2 DEBUG nova.virt.libvirt.vif [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:28Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.708 2 DEBUG nova.network.os_vif_util [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.709 2 DEBUG nova.network.os_vif_util [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.709 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:14:48 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:b9:be:58"/>
Oct 02 12:14:48 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:14:48 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:14:48 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:14:48 compute-1 nova_compute[230518]:   <target dev="tap647b79a6-6c"/>
Oct 02 12:14:48 compute-1 nova_compute[230518]: </interface>
Oct 02 12:14:48 compute-1 nova_compute[230518]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:14:48 compute-1 nova_compute[230518]: 2025-10-02 12:14:48.710 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:14:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:48.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:48 compute-1 podman[238641]: 2025-10-02 12:14:48.840612244 +0000 UTC m=+0.080476492 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 12:14:48 compute-1 podman[238640]: 2025-10-02 12:14:48.858094633 +0000 UTC m=+0.101439281 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 12:14:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:48.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:49 compute-1 nova_compute[230518]: 2025-10-02 12:14:49.184 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:14:49 compute-1 nova_compute[230518]: 2025-10-02 12:14:49.184 2 INFO nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:14:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4165662720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:49 compute-1 nova_compute[230518]: 2025-10-02 12:14:49.564 2 INFO nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.067 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.068 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:14:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.572 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.573 2 DEBUG nova.virt.libvirt.migration [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:14:50 compute-1 ceph-mon[80926]: pgmap v1049: 305 pgs: 305 active+clean; 543 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.8 MiB/s wr, 237 op/s
Oct 02 12:14:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/883025651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.686 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407290.6859066, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.686 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Paused (Lifecycle Event)
Oct 02 12:14:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:50 compute-1 kernel: tap647b79a6-6c (unregistering): left promiscuous mode
Oct 02 12:14:50 compute-1 NetworkManager[44960]: <info>  [1759407290.8675] device (tap647b79a6-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:14:50 compute-1 ovn_controller[129257]: 2025-10-02T12:14:50Z|00078|binding|INFO|Releasing lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 from this chassis (sb_readonly=0)
Oct 02 12:14:50 compute-1 ovn_controller[129257]: 2025-10-02T12:14:50Z|00079|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 down in Southbound
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:50 compute-1 ovn_controller[129257]: 2025-10-02T12:14:50Z|00080|binding|INFO|Removing iface tap647b79a6-6c ovn-installed in OVS
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:50 compute-1 nova_compute[230518]: 2025-10-02 12:14:50.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:50 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 02 12:14:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:50 compute-1 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000014.scope: Consumed 14.123s CPU time.
Oct 02 12:14:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:50 compute-1 systemd-machined[188247]: Machine qemu-8-instance-00000014 terminated.
Oct 02 12:14:51 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-ff92c1da-c1e7-425c-b20d-f332daad4188: No such file or directory
Oct 02 12:14:51 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-ff92c1da-c1e7-425c-b20d-f332daad4188: No such file or directory
Oct 02 12:14:51 compute-1 NetworkManager[44960]: <info>  [1759407291.0247] manager: (tap647b79a6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct 02 12:14:51 compute-1 systemd-udevd[238689]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.042 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.043 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.043 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.075 2 DEBUG nova.virt.libvirt.guest [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'b8f8f97e-2823-451c-ab36-7f94ade8be46' (instance-00000014) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.075 2 INFO nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migration operation has completed
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.075 2 INFO nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] _post_live_migration() is started..
Oct 02 12:14:51 compute-1 ovn_controller[129257]: 2025-10-02T12:14:51Z|00081|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct 02 12:14:51 compute-1 ovn_controller[129257]: 2025-10-02T12:14:51Z|00082|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct 02 12:14:51 compute-1 ovn_controller[129257]: 2025-10-02T12:14:51Z|00083|binding|INFO|Releasing lport 3021b3c7-b1d0-44e1-b22e-fbf6a4a79654 from this chassis (sb_readonly=0)
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.374 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b9588630-ee40-495c-89d2-4219f6b0f0b5'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.376 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.379 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.396 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d9ce77d-e074-46c2-9346-9bc7ad146653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.432 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[faf0d197-df8b-41bb-aa21-a61e3e940b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.434 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b4144b5e-b3d3-4a1d-9a8b-8fe78015dc54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.463 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.482 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e4724571-b8ce-461e-ae56-327b4d54f7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.500 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81138cc1-5322-45b5-bb59-e94eb53b6e3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 7, 'rx_bytes': 1420, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 7, 'rx_bytes': 1420, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238703, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.513 2 DEBUG nova.compute.manager [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.514 2 DEBUG nova.compute.manager [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing instance network info cache due to event network-changed-647b79a6-6cf5-4d28-afd1-9e21f2a56e32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.514 2 DEBUG oslo_concurrency.lockutils [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.514 2 DEBUG oslo_concurrency.lockutils [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.515 2 DEBUG nova.network.neutron [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Refreshing network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.517 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13b12f2f-cc87-4d46-a6c8-6c3bd24445a4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238704, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238704, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.519 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.525 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.525 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.525 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:51.526 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:51 compute-1 nova_compute[230518]: 2025-10-02 12:14:51.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:52 compute-1 ceph-mon[80926]: osdmap e147: 3 total, 3 up, 3 in
Oct 02 12:14:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:53 compute-1 ceph-mon[80926]: pgmap v1051: 305 pgs: 305 active+clean; 563 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 257 op/s
Oct 02 12:14:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/826188560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.734 2 DEBUG nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.735 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.736 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.736 2 DEBUG oslo_concurrency.lockutils [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.737 2 DEBUG nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:53 compute-1 nova_compute[230518]: 2025-10-02 12:14:53.737 2 DEBUG nova.compute.manager [req-38dc08f8-1834-4450-b19d-24afe81bcf36 req-d855d6a0-723a-44f4-a60f-b25a809bfe49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:14:54 compute-1 ceph-mon[80926]: pgmap v1052: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 204 op/s
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.306 2 DEBUG nova.network.neutron [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Activated binding for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.306 2 DEBUG nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.307 2 DEBUG nova.virt.libvirt.vif [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:14:32Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.308 2 DEBUG nova.network.os_vif_util [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.308 2 DEBUG nova.network.os_vif_util [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.309 2 DEBUG os_vif [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.311 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b79a6-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.318 2 INFO os_vif [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.319 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.319 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.320 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.320 2 DEBUG nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.320 2 INFO nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting instance files /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.321 2 INFO nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deletion of /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del complete
Oct 02 12:14:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:14:54.411 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:14:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.929 2 DEBUG nova.network.neutron [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updated VIF entry in instance network info cache for port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.929 2 DEBUG nova.network.neutron [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:14:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:54.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:54 compute-1 nova_compute[230518]: 2025-10-02 12:14:54.962 2 DEBUG oslo_concurrency.lockutils [req-93d3e92c-41ab-41c9-bf4b-fbccbbf3b432 req-e937b3f3-090f-4b08-ae1e-aa30cf7c6de6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.972 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.973 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.973 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.973 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.973 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.973 2 WARNING nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.974 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.974 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.974 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.974 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.974 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.975 2 WARNING nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.975 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.975 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.975 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.975 2 DEBUG oslo_concurrency.lockutils [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.976 2 DEBUG nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:14:55 compute-1 nova_compute[230518]: 2025-10-02 12:14:55.976 2 WARNING nova.compute.manager [req-fe8e6a37-1916-4b2a-92d0-02eb7c5c49ce req-77219d95-762b-4641-8383-50c45c6a0c12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state migrating.
Oct 02 12:14:56 compute-1 ceph-mon[80926]: pgmap v1053: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 176 op/s
Oct 02 12:14:56 compute-1 nova_compute[230518]: 2025-10-02 12:14:56.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:14:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:56.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:14:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:56.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:14:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Oct 02 12:14:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:14:58 compute-1 ceph-mon[80926]: pgmap v1054: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 176 op/s
Oct 02 12:14:58 compute-1 ceph-mon[80926]: osdmap e148: 3 total, 3 up, 3 in
Oct 02 12:14:58 compute-1 podman[238706]: 2025-10-02 12:14:58.816141073 +0000 UTC m=+0.061048391 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:14:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:14:58.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:14:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:14:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:14:58.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:14:59 compute-1 nova_compute[230518]: 2025-10-02 12:14:59.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:00 compute-1 ceph-mon[80926]: pgmap v1056: 305 pgs: 305 active+clean; 563 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 54 KiB/s wr, 155 op/s
Oct 02 12:15:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/118389002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/798080806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:00 compute-1 rsyslogd[1006]: imjournal: 2349 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 02 12:15:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:15:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:15:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:00.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:01 compute-1 nova_compute[230518]: 2025-10-02 12:15:01.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:01 compute-1 podman[238727]: 2025-10-02 12:15:01.798257256 +0000 UTC m=+0.055604950 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:15:02 compute-1 ceph-mon[80926]: pgmap v1057: 305 pgs: 305 active+clean; 530 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 46 KiB/s wr, 157 op/s
Oct 02 12:15:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2851670682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2011153037' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.232 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.261 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.262 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.263 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3998498089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.708 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:02.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.917 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.918 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.921 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.922 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.926 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 nova_compute[230518]: 2025-10-02 12:15:02.926 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:02.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:02 compute-1 sudo[238771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:02 compute-1 sudo[238771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:02 compute-1 sudo[238771]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 sudo[238796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:03 compute-1 sudo[238796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-1 sudo[238796]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 sudo[238821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:03 compute-1 sudo[238821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-1 sudo[238821]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.124 2 WARNING nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.126 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4333MB free_disk=20.763572692871094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.126 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.127 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3998498089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:03 compute-1 sudo[238846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:15:03 compute-1 sudo[238846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.198 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration for instance b8f8f97e-2823-451c-ab36-7f94ade8be46 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.227 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.272 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance f85aa55e-c534-4270-b8bb-d25f8026084c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance 2b86a484-6fc6-4efa-983f-fb93053b0874 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.273 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.274 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.274 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.305 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.328 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.329 2 DEBUG nova.compute.provider_tree [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.354 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.376 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.478 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:03 compute-1 sudo[238846]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 sudo[238922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:03 compute-1 sudo[238922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-1 sudo[238922]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3902082590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:03 compute-1 sudo[238947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.957 2 DEBUG oslo_concurrency.processutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:03 compute-1 sudo[238947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:03 compute-1 sudo[238947]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.963 2 DEBUG nova.compute.provider_tree [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:03 compute-1 nova_compute[230518]: 2025-10-02 12:15:03.997 2 DEBUG nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:04 compute-1 sudo[238974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:04 compute-1 sudo[238974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-1 sudo[238974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.058 2 DEBUG nova.compute.resource_tracker [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.059 2 DEBUG oslo_concurrency.lockutils [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:04 compute-1 sudo[238999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:15:04 compute-1 sudo[238999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.067 2 INFO nova.compute.manager [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.183 2 INFO nova.scheduler.client.report [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Deleted allocation for migration 7b3d6a5a-de37-42b3-bb4b-3c64a2479aa1
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.184 2 DEBUG nova.virt.libvirt.driver [None req-bc357d33-3178-4f40-936f-a0f68ba82d74 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:15:04 compute-1 ceph-mon[80926]: pgmap v1058: 305 pgs: 305 active+clean; 509 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 184 op/s
Oct 02 12:15:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3902082590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:04 compute-1 sudo[238999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:04 compute-1 nova_compute[230518]: 2025-10-02 12:15:04.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:04.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1377956880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2848696870' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:05 compute-1 nova_compute[230518]: 2025-10-02 12:15:05.594 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating tmpfile /var/lib/nova/instances/tmpvz1p5b8e to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:15:05 compute-1 nova_compute[230518]: 2025-10-02 12:15:05.595 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407291.0397005, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.468 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Stopped (Lifecycle Event)
Oct 02 12:15:06 compute-1 ceph-mon[80926]: pgmap v1059: 305 pgs: 305 active+clean; 511 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 214 op/s
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.521 2 DEBUG nova.compute.manager [None req-e439849c-3ec0-4c34-8c0d-d8e694b4ee17 - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.649 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.650 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.651 2 INFO nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Terminating instance
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.652 2 DEBUG nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:06 compute-1 kernel: tap760df1d8-a2 (unregistering): left promiscuous mode
Oct 02 12:15:06 compute-1 NetworkManager[44960]: <info>  [1759407306.7289] device (tap760df1d8-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 ovn_controller[129257]: 2025-10-02T12:15:06Z|00084|binding|INFO|Releasing lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a from this chassis (sb_readonly=0)
Oct 02 12:15:06 compute-1 ovn_controller[129257]: 2025-10-02T12:15:06Z|00085|binding|INFO|Setting lport 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a down in Southbound
Oct 02 12:15:06 compute-1 ovn_controller[129257]: 2025-10-02T12:15:06Z|00086|binding|INFO|Removing iface tap760df1d8-a2 ovn-installed in OVS
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.763 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:96:77 10.100.0.8'], port_security=['fa:16:3e:6f:96:77 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'f85aa55e-c534-4270-b8bb-d25f8026084c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85ed78eb-4003-42a7-9312-f47c5830131f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39ca581fbb054c959d26096ca39fef05', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4ed4a9c-2cdf-4db2-a179-94b54b394a70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d885d496-7533-482b-ad35-d86c4b60006e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.764 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a in datapath 85ed78eb-4003-42a7-9312-f47c5830131f unbound from our chassis
Oct 02 12:15:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.767 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85ed78eb-4003-42a7-9312-f47c5830131f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.768 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a402bde-0c4f-44a6-ad84-d04dd1fd83a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:06.768 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f namespace which is not needed anymore
Oct 02 12:15:06 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 02 12:15:06 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 17.147s CPU time.
Oct 02 12:15:06 compute-1 systemd-machined[188247]: Machine qemu-5-instance-0000000c terminated.
Oct 02 12:15:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.891 2 INFO nova.virt.libvirt.driver [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Instance destroyed successfully.
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.891 2 DEBUG nova.objects.instance [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lazy-loading 'resources' on Instance uuid f85aa55e-c534-4270-b8bb-d25f8026084c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:06 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:06 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [NOTICE]   (236979) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:06 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [WARNING]  (236979) : Exiting Master process...
Oct 02 12:15:06 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [ALERT]    (236979) : Current worker (236981) exited with code 143 (Terminated)
Oct 02 12:15:06 compute-1 neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f[236975]: [WARNING]  (236979) : All workers exited. Exiting... (0)
Oct 02 12:15:06 compute-1 systemd[1]: libpod-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb.scope: Deactivated successfully.
Oct 02 12:15:06 compute-1 podman[239064]: 2025-10-02 12:15:06.917850742 +0000 UTC m=+0.050134409 container died 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:15:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-602c0cae69e79696e14e2b1741a65067c00fef81481eae72b85f399a9248bd39-merged.mount: Deactivated successfully.
Oct 02 12:15:06 compute-1 podman[239064]: 2025-10-02 12:15:06.955209376 +0000 UTC m=+0.087493053 container cleanup 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:15:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:06.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:06 compute-1 systemd[1]: libpod-conmon-35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb.scope: Deactivated successfully.
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.987 2 DEBUG nova.virt.libvirt.vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:13:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1669259740',display_name='tempest-ServersAdminTestJSON-server-1669259740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1669259740',id=12,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39ca581fbb054c959d26096ca39fef05',ramdisk_id='',reservation_id='r-xl0a3qnb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1879159697',owner_user_name='tempest-ServersAdminTestJSON-1879159697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:11Z,user_data=None,user_id='7a80f833255046e7b62d34c1c6066073',uuid=f85aa55e-c534-4270-b8bb-d25f8026084c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.987 2 DEBUG nova.network.os_vif_util [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converting VIF {"id": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "address": "fa:16:3e:6f:96:77", "network": {"id": "85ed78eb-4003-42a7-9312-f47c5830131f", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-905236935-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39ca581fbb054c959d26096ca39fef05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap760df1d8-a2", "ovs_interfaceid": "760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.988 2 DEBUG nova.network.os_vif_util [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.989 2 DEBUG os_vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap760df1d8-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:06 compute-1 nova_compute[230518]: 2025-10-02 12:15:06.998 2 INFO os_vif [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6f:96:77,bridge_name='br-int',has_traffic_filtering=True,id=760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a,network=Network(85ed78eb-4003-42a7-9312-f47c5830131f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap760df1d8-a2')
Oct 02 12:15:07 compute-1 podman[239105]: 2025-10-02 12:15:07.020352635 +0000 UTC m=+0.045213763 container remove 35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.026 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[86464e80-859c-436c-a413-9e38b836d98f]: (4, ('Thu Oct  2 12:15:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb)\n35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb\nThu Oct  2 12:15:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f (35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb)\n35b36a27c6742392e613f47a04ec9933d99d819c04629dbaa795abc3c0b035eb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.028 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1de9e52e-a8f3-497c-be27-d7f24702fbed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.028 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85ed78eb-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:07 compute-1 kernel: tap85ed78eb-40: left promiscuous mode
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.046 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e83db9d4-183d-4e62-847e-d7b6691276b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.080 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cbe24a-140e-4e29-8fb8-8ff746bdc519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffeab7e1-2050-4281-acc7-7d4ab913dc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.096 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[590c1a92-d089-4921-9872-dc4c7e7e66c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 505390, 'reachable_time': 33646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239138, 'error': None, 'target': 'ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.098 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85ed78eb-4003-42a7-9312-f47c5830131f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:07.098 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[cfd9c74e-2598-4969-9f98-1cc67c42b6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:07 compute-1 systemd[1]: run-netns-ovnmeta\x2d85ed78eb\x2d4003\x2d42a7\x2d9312\x2df47c5830131f.mount: Deactivated successfully.
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.303 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.304 2 DEBUG oslo_concurrency.lockutils [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.305 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] No waiting events found dispatching network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.305 2 DEBUG nova.compute.manager [req-ce24f0d3-6d4e-4f88-b4bb-839bcf0c2668 req-76871c5b-bc3b-45ba-ad5d-bab944ec23e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-unplugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.440 2 INFO nova.virt.libvirt.driver [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deleting instance files /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c_del
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.441 2 INFO nova.virt.libvirt.driver [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deletion of /var/lib/nova/instances/f85aa55e-c534-4270-b8bb-d25f8026084c_del complete
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.468 2 DEBUG nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.662 2 INFO nova.compute.manager [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 1.01 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG oslo.service.loopingcall [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.663 2 DEBUG nova.network.neutron [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.687 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.688 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.711 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.719 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.746 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.747 2 INFO nova.compute.claims [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.747 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.765 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.766 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.766 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.769 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.825 2 INFO nova.compute.resource_tracker [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating resource usage from migration f91708f3-2f55-4a30-9404-01310d275f98
Oct 02 12:15:07 compute-1 nova_compute[230518]: 2025-10-02 12:15:07.826 2 DEBUG nova.compute.resource_tracker [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting to track incoming migration f91708f3-2f55-4a30-9404-01310d275f98 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:15:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.090 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2752457260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.538 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.545 2 DEBUG nova.compute.provider_tree [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:08 compute-1 ceph-mon[80926]: pgmap v1060: 305 pgs: 305 active+clean; 508 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 228 op/s
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.578 2 DEBUG nova.scheduler.client.report [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.612 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:08 compute-1 nova_compute[230518]: 2025-10-02 12:15:08.612 2 INFO nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Migrating
Oct 02 12:15:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:08.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.089 2 DEBUG nova.network.neutron [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.118 2 INFO nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Took 1.45 seconds to deallocate network for instance.
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.193 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.194 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.371 2 DEBUG oslo_concurrency.processutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.458 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.459 2 DEBUG oslo_concurrency.lockutils [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.460 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] No waiting events found dispatching network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.460 2 WARNING nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received unexpected event network-vif-plugged-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a for instance with vm_state deleted and task_state None.
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.460 2 DEBUG nova.compute.manager [req-2834744c-7da8-4346-b66b-4e7a1b407c11 req-945b4ffb-6d72-4d56-99d3-e1d5b684fb5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Received event network-vif-deleted-760df1d8-a2d6-41cc-8df5-90f0f8f5ac1a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2752457260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1446795426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.903 2 DEBUG oslo_concurrency.processutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:09 compute-1 nova_compute[230518]: 2025-10-02 12:15:09.912 2 DEBUG nova.compute.provider_tree [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.017 2 DEBUG nova.scheduler.client.report [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.063 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:10 compute-1 sshd-session[239184]: Accepted publickey for nova from 192.168.122.100 port 51534 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:15:10 compute-1 systemd-logind[795]: New session 56 of user nova.
Oct 02 12:15:10 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.141 2 INFO nova.scheduler.client.report [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Deleted allocations for instance f85aa55e-c534-4270-b8bb-d25f8026084c
Oct 02 12:15:10 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:15:10 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:15:10 compute-1 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:15:10 compute-1 systemd[239188]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.313 2 DEBUG oslo_concurrency.lockutils [None req-dbe6a93c-8f81-49ef-820d-edbd471672c7 7a80f833255046e7b62d34c1c6066073 39ca581fbb054c959d26096ca39fef05 - - default default] Lock "f85aa55e-c534-4270-b8bb-d25f8026084c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:10 compute-1 systemd[239188]: Queued start job for default target Main User Target.
Oct 02 12:15:10 compute-1 systemd[239188]: Created slice User Application Slice.
Oct 02 12:15:10 compute-1 systemd[239188]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:15:10 compute-1 systemd[239188]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:15:10 compute-1 systemd[239188]: Reached target Paths.
Oct 02 12:15:10 compute-1 systemd[239188]: Reached target Timers.
Oct 02 12:15:10 compute-1 systemd[239188]: Starting D-Bus User Message Bus Socket...
Oct 02 12:15:10 compute-1 systemd[239188]: Starting Create User's Volatile Files and Directories...
Oct 02 12:15:10 compute-1 systemd[239188]: Finished Create User's Volatile Files and Directories.
Oct 02 12:15:10 compute-1 systemd[239188]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:15:10 compute-1 systemd[239188]: Reached target Sockets.
Oct 02 12:15:10 compute-1 systemd[239188]: Reached target Basic System.
Oct 02 12:15:10 compute-1 systemd[239188]: Reached target Main User Target.
Oct 02 12:15:10 compute-1 systemd[239188]: Startup finished in 150ms.
Oct 02 12:15:10 compute-1 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:15:10 compute-1 systemd[1]: Started Session 56 of User nova.
Oct 02 12:15:10 compute-1 sshd-session[239184]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:10 compute-1 sshd-session[239204]: Received disconnect from 192.168.122.100 port 51534:11: disconnected by user
Oct 02 12:15:10 compute-1 sshd-session[239204]: Disconnected from user nova 192.168.122.100 port 51534
Oct 02 12:15:10 compute-1 sshd-session[239184]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:15:10 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Oct 02 12:15:10 compute-1 systemd-logind[795]: Session 56 logged out. Waiting for processes to exit.
Oct 02 12:15:10 compute-1 systemd-logind[795]: Removed session 56.
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.538 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.561 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.562 2 DEBUG os_brick.utils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.563 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.572 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.573 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[bdbd5fdd-b87e-4de3-b5e4-2a2806577c7f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.573 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.581 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.582 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6dd8634-71e9-4bf7-8caa-b578e6df019e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.582 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:10 compute-1 sshd-session[239206]: Accepted publickey for nova from 192.168.122.100 port 51544 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.591 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.591 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[9334ef9a-e38a-43ed-a176-3433146fb773]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.592 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c55d6c-4c12-4b02-8f10-40320155ce81]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.593 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:10 compute-1 systemd-logind[795]: New session 58 of user nova.
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.612 2 DEBUG oslo_concurrency.processutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:10 compute-1 systemd[1]: Started Session 58 of User nova.
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.614 2 DEBUG os_brick.initiator.connectors.lightos [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:15:10 compute-1 nova_compute[230518]: 2025-10-02 12:15:10.615 2 DEBUG os_brick.utils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:15:10 compute-1 sshd-session[239206]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:15:10 compute-1 sshd-session[239216]: Received disconnect from 192.168.122.100 port 51544:11: disconnected by user
Oct 02 12:15:10 compute-1 sshd-session[239216]: Disconnected from user nova 192.168.122.100 port 51544
Oct 02 12:15:10 compute-1 sshd-session[239206]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:15:10 compute-1 systemd-logind[795]: Session 58 logged out. Waiting for processes to exit.
Oct 02 12:15:10 compute-1 systemd[1]: session-58.scope: Deactivated successfully.
Oct 02 12:15:10 compute-1 systemd-logind[795]: Removed session 58.
Oct 02 12:15:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:10.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:10 compute-1 ceph-mon[80926]: pgmap v1061: 305 pgs: 305 active+clean; 486 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.3 MiB/s wr, 234 op/s
Oct 02 12:15:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1446795426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:10.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:11 compute-1 nova_compute[230518]: 2025-10-02 12:15:11.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/879596293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:11 compute-1 nova_compute[230518]: 2025-10-02 12:15:11.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-1 ceph-mon[80926]: pgmap v1062: 305 pgs: 305 active+clean; 455 MiB data, 562 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/879596293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.311 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.312 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Creating instance directory: /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.312 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Ensure instance console log exists: /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.313 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.318 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.320 2 DEBUG nova.virt.libvirt.vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:01Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.320 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.321 2 DEBUG nova.network.os_vif_util [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.321 2 DEBUG os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.323 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.327 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647b79a6-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.328 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647b79a6-6c, col_values=(('external_ids', {'iface-id': '647b79a6-6cf5-4d28-afd1-9e21f2a56e32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:be:58', 'vm-uuid': 'b8f8f97e-2823-451c-ab36-7f94ade8be46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-1 NetworkManager[44960]: <info>  [1759407312.3309] manager: (tap647b79a6-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.338 2 INFO os_vif [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.341 2 DEBUG nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:15:12 compute-1 nova_compute[230518]: 2025-10-02 12:15:12.341 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:15:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:12.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:14 compute-1 ceph-mon[80926]: pgmap v1063: 305 pgs: 305 active+clean; 387 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 256 op/s
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.333 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 updated with migration profile {'os_vif_delegation': True, 'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.553 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvz1p5b8e',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='b8f8f97e-2823-451c-ab36-7f94ade8be46',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={ff92c1da-c1e7-425c-b20d-f332daad4188='8041298c-4154-45fe-81e2-9dfa6deeae22'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:15:14 compute-1 sudo[239220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:15:14 compute-1 sudo[239220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:14 compute-1 sudo[239220]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:14 compute-1 sudo[239245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:15:14 compute-1 sudo[239245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:15:14 compute-1 sudo[239245]: pam_unix(sudo:session): session closed for user root
Oct 02 12:15:14 compute-1 kernel: tap647b79a6-6c: entered promiscuous mode
Oct 02 12:15:14 compute-1 NetworkManager[44960]: <info>  [1759407314.8168] manager: (tap647b79a6-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct 02 12:15:14 compute-1 ovn_controller[129257]: 2025-10-02T12:15:14Z|00087|binding|INFO|Claiming lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for this additional chassis.
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:14 compute-1 ovn_controller[129257]: 2025-10-02T12:15:14Z|00088|binding|INFO|647b79a6-6cf5-4d28-afd1-9e21f2a56e32: Claiming fa:16:3e:b9:be:58 10.100.0.12
Oct 02 12:15:14 compute-1 ovn_controller[129257]: 2025-10-02T12:15:14Z|00089|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 ovn-installed in OVS
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:14 compute-1 nova_compute[230518]: 2025-10-02 12:15:14.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:14 compute-1 systemd-udevd[239284]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:14 compute-1 systemd-machined[188247]: New machine qemu-10-instance-00000014.
Oct 02 12:15:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:14.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:14 compute-1 NetworkManager[44960]: <info>  [1759407314.8571] device (tap647b79a6-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:15:14 compute-1 NetworkManager[44960]: <info>  [1759407314.8579] device (tap647b79a6-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:15:14 compute-1 systemd[1]: Started Virtual Machine qemu-10-instance-00000014.
Oct 02 12:15:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:15:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2635107998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.622 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.623 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.656 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.749 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.750 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.758 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:15 compute-1 nova_compute[230518]: 2025-10-02 12:15:15.759 2 INFO nova.compute.claims [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.021 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:16 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.419 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407316.4184062, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.421 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Started (Lifecycle Event)
Oct 02 12:15:16 compute-1 ceph-mon[80926]: pgmap v1064: 305 pgs: 305 active+clean; 348 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 227 op/s
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.445 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3930749028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.534 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.541 2 DEBUG nova.compute.provider_tree [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.559 2 DEBUG nova.scheduler.client.report [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.591 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.593 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.653 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.654 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.677 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.728 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.895 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407316.8950229, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.895 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Resumed (Lifecycle Event)
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.958 2 DEBUG nova.policy [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79b88925d1704f5c9b3d2114c1a9ae4f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd92e60d304e64805972937813fc99606', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:15:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:16.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.982 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:16 compute-1 nova_compute[230518]: 2025-10-02 12:15:16.985 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.043 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.044 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.044 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating image(s)
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.067 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.088 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.110 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.113 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.129 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.168 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.169 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.190 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.194 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 01eee71c-078c-41f4-a1c1-4591cab7195e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3930749028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.561 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 01eee71c-078c-41f4-a1c1-4591cab7195e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.632 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] resizing rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.759 2 DEBUG nova.objects.instance [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'migration_context' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.779 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.780 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Ensure instance console log exists: /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.780 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.781 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:17 compute-1 nova_compute[230518]: 2025-10-02 12:15:17.781 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:18 compute-1 nova_compute[230518]: 2025-10-02 12:15:18.398 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Successfully created port: 0a7827d1-d2e0-4330-b738-ee929dc7af48 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:15:18 compute-1 ceph-mon[80926]: pgmap v1065: 305 pgs: 305 active+clean; 327 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 826 KiB/s wr, 167 op/s
Oct 02 12:15:18 compute-1 ovn_controller[129257]: 2025-10-02T12:15:18Z|00090|binding|INFO|Claiming lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for this chassis.
Oct 02 12:15:18 compute-1 ovn_controller[129257]: 2025-10-02T12:15:18Z|00091|binding|INFO|647b79a6-6cf5-4d28-afd1-9e21f2a56e32: Claiming fa:16:3e:b9:be:58 10.100.0.12
Oct 02 12:15:18 compute-1 ovn_controller[129257]: 2025-10-02T12:15:18Z|00092|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 up in Southbound
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.830 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '20', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.832 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 bound to our chassis
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.834 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c814f42b-71a5-4270-9bde-d07ab18c3d27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.877 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[71075d35-3b53-4778-a87d-6bcfaaff93cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.880 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[54d59eba-9833-4205-a582-d9c693e0d02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.909 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48469a6b-956c-4e27-889a-c6a91ad24af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.927 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[532e262e-414c-41d9-9b87-e6343a625b8e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 9, 'rx_bytes': 1630, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 9, 'rx_bytes': 1630, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239528, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.943 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43e9868b-b3e5-4036-bc31-34265ea4bbc6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239529, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239529, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.944 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:18 compute-1 nova_compute[230518]: 2025-10-02 12:15:18.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:18 compute-1 nova_compute[230518]: 2025-10-02 12:15:18.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.947 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:18.948 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:18.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.386 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Successfully updated port: 0a7827d1-d2e0-4330-b738-ee929dc7af48 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquired lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.477 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.569 2 DEBUG nova.compute.manager [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-changed-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.570 2 DEBUG nova.compute.manager [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Refreshing instance network info cache due to event network-changed-0a7827d1-d2e0-4330-b738-ee929dc7af48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.570 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.596 2 INFO nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Post operation of migration started
Oct 02 12:15:19 compute-1 nova_compute[230518]: 2025-10-02 12:15:19.757 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:19 compute-1 podman[239531]: 2025-10-02 12:15:19.812364528 +0000 UTC m=+0.055236008 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:15:19 compute-1 podman[239530]: 2025-10-02 12:15:19.870401934 +0000 UTC m=+0.113989937 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 12:15:20 compute-1 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:20 compute-1 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquired lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:20 compute-1 nova_compute[230518]: 2025-10-02 12:15:20.320 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:20 compute-1 ovn_controller[129257]: 2025-10-02T12:15:20Z|00093|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct 02 12:15:20 compute-1 ovn_controller[129257]: 2025-10-02T12:15:20Z|00094|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct 02 12:15:20 compute-1 ceph-mon[80926]: pgmap v1066: 305 pgs: 305 active+clean; 336 MiB data, 506 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 825 KiB/s wr, 146 op/s
Oct 02 12:15:20 compute-1 nova_compute[230518]: 2025-10-02 12:15:20.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:20 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:15:20 compute-1 systemd[239188]: Activating special unit Exit the Session...
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped target Main User Target.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped target Basic System.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped target Paths.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped target Sockets.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped target Timers.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:15:20 compute-1 systemd[239188]: Closed D-Bus User Message Bus Socket.
Oct 02 12:15:20 compute-1 systemd[239188]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:15:20 compute-1 systemd[239188]: Removed slice User Application Slice.
Oct 02 12:15:20 compute-1 systemd[239188]: Reached target Shutdown.
Oct 02 12:15:20 compute-1 systemd[239188]: Finished Exit the Session.
Oct 02 12:15:20 compute-1 systemd[239188]: Reached target Exit the Session.
Oct 02 12:15:20 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:15:20 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:15:20 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:15:20 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:15:20 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:15:20 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:15:20 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:15:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:20.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.114 2 DEBUG nova.network.neutron [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.170 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Releasing lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.170 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance network_info: |[{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.171 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.171 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Refreshing network info cache for port 0a7827d1-d2e0-4330-b738-ee929dc7af48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.175 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start _get_guest_xml network_info=[{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.187 2 WARNING nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.192 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.193 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.200 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.200 2 DEBUG nova.virt.libvirt.host [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.202 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.203 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.204 2 DEBUG nova.virt.hardware [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.207 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.476 2 DEBUG nova.network.neutron [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [{"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.532 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Releasing lock "refresh_cache-b8f8f97e-2823-451c-ab36-7f94ade8be46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.563 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.564 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.564 2 DEBUG oslo_concurrency.lockutils [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.568 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:15:21 compute-1 virtqemud[230067]: Domain id=10 name='instance-00000014' uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46 is tainted: custom-monitor
Oct 02 12:15:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2104195057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.663 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.686 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.727 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.888 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407306.887273, f85aa55e-c534-4270-b8bb-d25f8026084c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:21 compute-1 nova_compute[230518]: 2025-10-02 12:15:21.889 2 INFO nova.compute.manager [-] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] VM Stopped (Lifecycle Event)
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.023 2 DEBUG nova.compute.manager [None req-1d4bc170-ba4c-481b-8ccf-8faa773353a2 - - - - - -] [instance: f85aa55e-c534-4270-b8bb-d25f8026084c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1515999479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.202 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.204 2 DEBUG nova.virt.libvirt.vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:16Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.204 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.205 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.206 2 DEBUG nova.objects.instance [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.258 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <uuid>01eee71c-078c-41f4-a1c1-4591cab7195e</uuid>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <name>instance-00000016</name>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1959096416</nova:name>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:15:21</nova:creationTime>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:user uuid="79b88925d1704f5c9b3d2114c1a9ae4f">tempest-ImagesOneServerTestJSON-572210404-project-member</nova:user>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:project uuid="d92e60d304e64805972937813fc99606">tempest-ImagesOneServerTestJSON-572210404</nova:project>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <nova:port uuid="0a7827d1-d2e0-4330-b738-ee929dc7af48">
Oct 02 12:15:22 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <system>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="serial">01eee71c-078c-41f4-a1c1-4591cab7195e</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="uuid">01eee71c-078c-41f4-a1c1-4591cab7195e</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </system>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <os>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </os>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <features>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </features>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk">
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config">
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:22 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:cc:6f:ab"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <target dev="tap0a7827d1-d2"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/console.log" append="off"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <video>
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </video>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:15:22 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:15:22 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:15:22 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:15:22 compute-1 nova_compute[230518]: </domain>
Oct 02 12:15:22 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.260 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Preparing to wait for external event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.260 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.261 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.261 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.262 2 DEBUG nova.virt.libvirt.vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:15:16Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.262 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.263 2 DEBUG nova.network.os_vif_util [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.263 2 DEBUG os_vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.265 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a7827d1-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a7827d1-d2, col_values=(('external_ids', {'iface-id': '0a7827d1-d2e0-4330-b738-ee929dc7af48', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:6f:ab', 'vm-uuid': '01eee71c-078c-41f4-a1c1-4591cab7195e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:22 compute-1 NetworkManager[44960]: <info>  [1759407322.2732] manager: (tap0a7827d1-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.281 2 INFO os_vif [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2')
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.326 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updated VIF entry in instance network info cache for port 0a7827d1-d2e0-4330-b738-ee929dc7af48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.326 2 DEBUG nova.network.neutron [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [{"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.405 2 DEBUG oslo_concurrency.lockutils [req-c78b7900-7e86-46cc-95a5-2517fe5d3a9c req-ac066e7a-ccb8-43a3-80ec-1bf15dacfb9c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-01eee71c-078c-41f4-a1c1-4591cab7195e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.409 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.410 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.410 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No VIF found with MAC fa:16:3e:cc:6f:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.411 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Using config drive
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.437 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:22 compute-1 ceph-mon[80926]: pgmap v1067: 305 pgs: 305 active+clean; 359 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:15:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2104195057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1515999479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.575 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.744 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Creating config drive at /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.749 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_r609qv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.875 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7_r609qv" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.901 2 DEBUG nova.storage.rbd_utils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] rbd image 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:22 compute-1 nova_compute[230518]: 2025-10-02 12:15:22.904 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:22.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.221 2 DEBUG oslo_concurrency.processutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config 01eee71c-078c-41f4-a1c1-4591cab7195e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.222 2 INFO nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deleting local config drive /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e/disk.config because it was imported into RBD.
Oct 02 12:15:23 compute-1 kernel: tap0a7827d1-d2: entered promiscuous mode
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.2625] manager: (tap0a7827d1-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct 02 12:15:23 compute-1 ovn_controller[129257]: 2025-10-02T12:15:23Z|00095|binding|INFO|Claiming lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 for this chassis.
Oct 02 12:15:23 compute-1 ovn_controller[129257]: 2025-10-02T12:15:23Z|00096|binding|INFO|0a7827d1-d2e0-4330-b738-ee929dc7af48: Claiming fa:16:3e:cc:6f:ab 10.100.0.13
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:23 compute-1 systemd-udevd[239711]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:23 compute-1 systemd-machined[188247]: New machine qemu-11-instance-00000016.
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.3008] device (tap0a7827d1-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.3016] device (tap0a7827d1-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:15:23 compute-1 systemd[1]: Started Virtual Machine qemu-11-instance-00000016.
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.337 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6f:ab 10.100.0.13'], port_security=['fa:16:3e:cc:6f:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '01eee71c-078c-41f4-a1c1-4591cab7195e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd92e60d304e64805972937813fc99606', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13422694-ff96-4d03-9ea0-adedb130ec76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cf5d7d6-9d03-4d57-a5e5-97ce6dc98b2e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=0a7827d1-d2e0-4330-b738-ee929dc7af48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.338 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 0a7827d1-d2e0-4330-b738-ee929dc7af48 in datapath 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 bound to our chassis
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.340 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5
Oct 02 12:15:23 compute-1 ovn_controller[129257]: 2025-10-02T12:15:23Z|00097|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 ovn-installed in OVS
Oct 02 12:15:23 compute-1 ovn_controller[129257]: 2025-10-02T12:15:23Z|00098|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 up in Southbound
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43243dc9-a278-4652-9e71-92c144ee5f1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.350 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap377fcfd9-a1 in ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.352 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap377fcfd9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.352 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ad762928-8d58-4630-a76c-29138b60888a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.353 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7376425c-53c4-41d6-891b-ebc0bf1d2902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.364 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5a2d64b8-51e5-49ba-b89d-d28876879eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2be35a2-081e-4415-91dc-262ec9b56108]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.403 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c459ab64-fbc8-43e9-afe6-c9a6029b5621]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 systemd-udevd[239714]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.4086] manager: (tap377fcfd9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.408 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e88b9943-603f-4833-916a-9b2b480db174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.435 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d399830c-d328-411d-ad3d-da8a4ef816dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.438 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7706ec-bc5a-4f32-b036-a65849859779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.4578] device (tap377fcfd9-a0): carrier: link connected
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.464 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6d57004b-72da-4ea6-a97a-cb2e0f917bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.479 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1afa4bd1-3b4e-4e96-9739-59bc77ffbd72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap377fcfd9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:25:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518701, 'reachable_time': 29948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239745, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.495 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8c68aec0-86be-4e87-8ae6-d6db8426bffa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:2505'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518701, 'tstamp': 518701}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239746, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.513 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[635e9745-f51d-4fd3-86a5-a4b507880c69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap377fcfd9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:25:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518701, 'reachable_time': 29948, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239747, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.546 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa71cf74-964a-4113-895a-ff73f9124856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.580 2 INFO nova.virt.libvirt.driver [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.586 2 DEBUG nova.compute.manager [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.610 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a19f493-1591-4713-80a4-31dad238f938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.612 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap377fcfd9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.612 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.613 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap377fcfd9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:23 compute-1 NetworkManager[44960]: <info>  [1759407323.6155] manager: (tap377fcfd9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct 02 12:15:23 compute-1 kernel: tap377fcfd9-a0: entered promiscuous mode
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.616 2 DEBUG nova.objects.instance [None req-209b9b45-9c09-49cd-8881-620d2d3a0171 2d0b44e1ae884cd9b6f5b34c4b20961b 1e2a07f96ebf489f9ca155f20c045c56 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.619 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap377fcfd9-a0, col_values=(('external_ids', {'iface-id': '727141f8-bed3-42ac-abe8-be7b66cbedbb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:23 compute-1 ovn_controller[129257]: 2025-10-02T12:15:23Z|00099|binding|INFO|Releasing lport 727141f8-bed3-42ac-abe8-be7b66cbedbb from this chassis (sb_readonly=0)
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.636 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.637 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95156b12-1172-4150-8046-3a4cf6db7b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.638 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.pid.haproxy
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:15:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:23.640 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'env', 'PROCESS_TAG=haproxy-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/377fcfd9-a6d0-4567-bd23-a9d9c96adbd5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.871 2 DEBUG nova.compute.manager [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.871 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.872 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.872 2 DEBUG oslo_concurrency.lockutils [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:23 compute-1 nova_compute[230518]: 2025-10-02 12:15:23.873 2 DEBUG nova.compute.manager [req-d2543263-8d98-4958-9c11-462a4f3f10eb req-c264361d-f1ae-4f62-b836-43078deefbf1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Processing event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:15:24 compute-1 podman[239815]: 2025-10-02 12:15:24.044142821 +0000 UTC m=+0.054057881 container create c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:15:24 compute-1 systemd[1]: Started libpod-conmon-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope.
Oct 02 12:15:24 compute-1 podman[239815]: 2025-10-02 12:15:24.013756515 +0000 UTC m=+0.023671615 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:15:24 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:15:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71439b31039a617ef0686867304ef27ca8200532d38131be38c55e1cda6f86ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:15:24 compute-1 podman[239815]: 2025-10-02 12:15:24.14078435 +0000 UTC m=+0.150699420 container init c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:15:24 compute-1 podman[239815]: 2025-10-02 12:15:24.146202341 +0000 UTC m=+0.156117411 container start c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:15:24 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : New worker (239842) forked
Oct 02 12:15:24 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : Loading success.
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.463 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.464 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.463521, 01eee71c-078c-41f4-a1c1-4591cab7195e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.465 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Started (Lifecycle Event)
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.468 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.471 2 INFO nova.virt.libvirt.driver [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance spawned successfully.
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.472 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:24 compute-1 ceph-mon[80926]: pgmap v1068: 305 pgs: 305 active+clean; 407 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.590 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.593 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.621 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.622 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.622 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.623 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.623 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.624 2 DEBUG nova.virt.libvirt.driver [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.747 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.748 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.4649415, 01eee71c-078c-41f4-a1c1-4591cab7195e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.748 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Paused (Lifecycle Event)
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.779 2 INFO nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 7.74 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.780 2 DEBUG nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.800 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.802 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407324.4665978, 01eee71c-078c-41f4-a1c1-4591cab7195e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.803 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Resumed (Lifecycle Event)
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.862 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.864 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:24 compute-1 nova_compute[230518]: 2025-10-02 12:15:24.974 2 INFO nova.compute.manager [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 9.26 seconds to build instance.
Oct 02 12:15:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:25 compute-1 nova_compute[230518]: 2025-10-02 12:15:25.031 2 DEBUG oslo_concurrency.lockutils [None req-e076a7c0-d1f5-469c-ad38-c7e439d9dedf 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:25 compute-1 nova_compute[230518]: 2025-10-02 12:15:25.537 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:25 compute-1 nova_compute[230518]: 2025-10-02 12:15:25.537 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:25 compute-1 nova_compute[230518]: 2025-10-02 12:15:25.538 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:25 compute-1 nova_compute[230518]: 2025-10-02 12:15:25.839 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.915 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.225 2 DEBUG nova.network.neutron [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.451 2 DEBUG nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG oslo_concurrency.lockutils [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.452 2 DEBUG nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.453 2 WARNING nova.compute.manager [req-61c41691-a737-4ea4-ae74-ae918f6d8c08 req-760d3eab-4615-4fb0-b683-6039533a41f3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received unexpected event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with vm_state active and task_state None.
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.458 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:26 compute-1 ceph-mon[80926]: pgmap v1069: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 118 op/s
Oct 02 12:15:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/521728697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.706 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.707 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.707 2 INFO nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Creating image(s)
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.755 2 DEBUG nova.storage.rbd_utils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] creating snapshot(nova-resize) on rbd image(bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:15:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:26 compute-1 nova_compute[230518]: 2025-10-02 12:15:26.948 2 DEBUG nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:26.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.001 2 INFO nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] instance snapshotting
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.251 2 INFO nova.virt.libvirt.driver [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Beginning live snapshot process
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.277 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.278 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.279 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.280 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Terminating instance
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.281 2 DEBUG nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:27 compute-1 kernel: tap647b79a6-6c (unregistering): left promiscuous mode
Oct 02 12:15:27 compute-1 NetworkManager[44960]: <info>  [1759407327.3317] device (tap647b79a6-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:27 compute-1 ovn_controller[129257]: 2025-10-02T12:15:27Z|00100|binding|INFO|Releasing lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 from this chassis (sb_readonly=0)
Oct 02 12:15:27 compute-1 ovn_controller[129257]: 2025-10-02T12:15:27Z|00101|binding|INFO|Setting lport 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 down in Southbound
Oct 02 12:15:27 compute-1 ovn_controller[129257]: 2025-10-02T12:15:27Z|00102|binding|INFO|Removing iface tap647b79a6-6c ovn-installed in OVS
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.392 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:be:58 10.100.0.12'], port_security=['fa:16:3e:b9:be:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b8f8f97e-2823-451c-ab36-7f94ade8be46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '22', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=647b79a6-6cf5-4d28-afd1-9e21f2a56e32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.395 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 647b79a6-6cf5-4d28-afd1-9e21f2a56e32 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.397 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5b610572-0903-4bfb-be0b-9848e0af3ae3
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[131440ef-79a4-4151-80d5-418163cfa2a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct 02 12:15:27 compute-1 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Consumed 2.222s CPU time.
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 systemd-machined[188247]: Machine qemu-10-instance-00000014 terminated.
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.441 2 DEBUG nova.virt.libvirt.imagebackend [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.446 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c62378e6-bb22-4d96-bf61-e8ef75c4c1a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.449 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0264e58a-8cd6-45be-8c3c-a748a30246a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.477 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[36b6c7dd-607f-4b62-bcd5-0014d39a3f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.494 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6d58de-fb92-4344-aa27-b9fe705f3fec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5b610572-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:0e:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 11, 'rx_bytes': 2260, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 11, 'rx_bytes': 2260, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508657, 'reachable_time': 17815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239931, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.511 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91fdd428-1a8a-4977-baf3-24ba07dfcade]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508669, 'tstamp': 508669}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239934, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5b610572-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508672, 'tstamp': 508672}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239934, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.517 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.522 2 INFO nova.virt.libvirt.driver [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Instance destroyed successfully.
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.523 2 DEBUG nova.objects.instance [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'resources' on Instance uuid b8f8f97e-2823-451c-ab36-7f94ade8be46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b610572-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5b610572-00, col_values=(('external_ids', {'iface-id': '02fa40d7-59fd-4885-996d-218aed489cb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:27.531 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.539 2 DEBUG nova.virt.libvirt.vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:14:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-927671937',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-927671937',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:14:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-v8e9y6s2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:15:23Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=b8f8f97e-2823-451c-ab36-7f94ade8be46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.539 2 DEBUG nova.network.os_vif_util [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "address": "fa:16:3e:b9:be:58", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647b79a6-6c", "ovs_interfaceid": "647b79a6-6cf5-4d28-afd1-9e21f2a56e32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.540 2 DEBUG nova.network.os_vif_util [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.541 2 DEBUG os_vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647b79a6-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.547 2 INFO os_vif [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:be:58,bridge_name='br-int',has_traffic_filtering=True,id=647b79a6-6cf5-4d28-afd1-9e21f2a56e32,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647b79a6-6c')
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.656 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.657 2 DEBUG oslo_concurrency.lockutils [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.658 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.658 2 DEBUG nova.compute.manager [req-07def723-d909-4b57-948a-b8f69637a6b1 req-332dfc20-24e7-404c-8a0a-be57e4a75214 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-unplugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:15:27 compute-1 nova_compute[230518]: 2025-10-02 12:15:27.683 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(89ce9dda825b4bc98f90246d0c92a59d) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:15:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct 02 12:15:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1554869641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.505 2 INFO nova.virt.libvirt.driver [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting instance files /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.506 2 INFO nova.virt.libvirt.driver [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deletion of /var/lib/nova/instances/b8f8f97e-2823-451c-ab36-7f94ade8be46_del complete
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.555 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 1.27 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.556 2 DEBUG oslo.service.loopingcall [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.557 2 DEBUG nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.557 2 DEBUG nova.network.neutron [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.722 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.832 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.832 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Ensure instance console log exists: /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.834 2 DEBUG oslo_concurrency.lockutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.836 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.839 2 WARNING nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.844 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.845 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.847 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.848 2 DEBUG nova.virt.libvirt.host [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.849 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.849 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.850 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.851 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.852 2 DEBUG nova.virt.hardware [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.853 2 DEBUG nova.objects.instance [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:28 compute-1 nova_compute[230518]: 2025-10-02 12:15:28.867 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:28.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:29 compute-1 ceph-mon[80926]: pgmap v1070: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 12:15:29 compute-1 ceph-mon[80926]: osdmap e149: 3 total, 3 up, 3 in
Oct 02 12:15:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.217 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] cloning vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk@89ce9dda825b4bc98f90246d0c92a59d to images/9f2e97bd-159f-41e5-875d-f066be38a116 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:15:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2568182976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.308 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.350 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] flattening images/9f2e97bd-159f-41e5-875d-f066be38a116 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.402 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.678 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] removing snapshot(89ce9dda825b4bc98f90246d0c92a59d) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.687 2 DEBUG nova.network.neutron [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.709 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 1.15 seconds to deallocate network for instance.
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.801 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.802 2 DEBUG oslo_concurrency.lockutils [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.803 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] No waiting events found dispatching network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.803 2 WARNING nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received unexpected event network-vif-plugged-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 for instance with vm_state active and task_state deleting.
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.803 2 DEBUG nova.compute.manager [req-a28d6466-df5f-47a2-83d6-216fd7a35754 req-d4b62304-5d11-4c4d-97a7-eccdca96361c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Received event network-vif-deleted-647b79a6-6cf5-4d28-afd1-9e21f2a56e32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:29 compute-1 podman[240150]: 2025-10-02 12:15:29.806954677 +0000 UTC m=+0.063832019 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:15:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/917251262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.870 2 DEBUG oslo_concurrency.processutils [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.872 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <uuid>bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</uuid>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <name>instance-00000015</name>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <memory>196608</memory>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:name>tempest-MigrationsAdminTest-server-399340879</nova:name>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:15:28</nova:creationTime>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:flavor name="m1.micro">
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:memory>192</nova:memory>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <system>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="serial">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="uuid">bb6a3b63-8cda-41b6-ac43-6f9d310fad2a</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </system>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <os>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </os>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <features>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </features>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk">
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_disk.config">
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:29 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a/console.log" append="off"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <video>
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </video>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:15:29 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:15:29 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:15:29 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:15:29 compute-1 nova_compute[230518]: </domain>
Oct 02 12:15:29 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.932 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.932 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:29 compute-1 nova_compute[230518]: 2025-10-02 12:15:29.946 2 INFO nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Using config drive
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.016 2 INFO nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Took 0.31 seconds to detach 1 volumes for instance.
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.017 2 DEBUG nova.compute.manager [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Deleting volume: ff92c1da-c1e7-425c-b20d-f332daad4188 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:15:30 compute-1 systemd-machined[188247]: New machine qemu-12-instance-00000015.
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:15:30 compute-1 systemd[1]: Started Virtual Machine qemu-12-instance-00000015.
Oct 02 12:15:30 compute-1 ceph-mon[80926]: pgmap v1072: 305 pgs: 305 active+clean; 407 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 157 op/s
Oct 02 12:15:30 compute-1 ceph-mon[80926]: osdmap e150: 3 total, 3 up, 3 in
Oct 02 12:15:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2568182976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/917251262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.228 2 DEBUG nova.storage.rbd_utils [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(snap) on rbd image(9f2e97bd-159f-41e5-875d-f066be38a116) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.279 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.280 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.285 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.320 2 INFO nova.scheduler.client.report [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Deleted allocations for instance b8f8f97e-2823-451c-ab36-7f94ade8be46
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.375 2 DEBUG oslo_concurrency.lockutils [None req-2e5a1315-52a9-4b96-ae09-3bcdbaf08241 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "b8f8f97e-2823-451c-ab36-7f94ade8be46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:15:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:15:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.905 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407330.904805, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.905 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Resumed (Lifecycle Event)
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.907 2 DEBUG nova.compute.manager [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.910 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance running successfully.
Oct 02 12:15:30 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.912 2 DEBUG nova.virt.libvirt.guest [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.913 2 DEBUG nova.virt.libvirt.driver [None req-8b5a748e-fc47-4b12-a657-9c1d30ac81dc ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.944 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:30 compute-1 nova_compute[230518]: 2025-10-02 12:15:30.946 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:30.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.005 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.005 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407330.9067378, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.006 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Started (Lifecycle Event)
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.029 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.049 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:31 compute-1 ceph-mon[80926]: osdmap e151: 3 total, 3 up, 3 in
Oct 02 12:15:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:15:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3343551856' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:15:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e152 e152: 3 total, 3 up, 3 in
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.429 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.429 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.430 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.430 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.431 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.432 2 INFO nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Terminating instance
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.433 2 DEBUG nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:15:31 compute-1 kernel: tap8879d541-11 (unregistering): left promiscuous mode
Oct 02 12:15:31 compute-1 NetworkManager[44960]: <info>  [1759407331.4875] device (tap8879d541-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00103|binding|INFO|Releasing lport 8879d541-1199-497a-b096-b45e17e4df04 from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00104|binding|INFO|Setting lport 8879d541-1199-497a-b096-b45e17e4df04 down in Southbound
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00105|binding|INFO|Releasing lport 96e672de-12ad-4022-be24-94113ee6de10 from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00106|binding|INFO|Setting lport 96e672de-12ad-4022-be24-94113ee6de10 down in Southbound
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00107|binding|INFO|Removing iface tap8879d541-11 ovn-installed in OVS
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00108|binding|INFO|Releasing lport 02fa40d7-59fd-4885-996d-218aed489cb1 from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00109|binding|INFO|Releasing lport 2278bdaf-c37b-4127-83d4-ca11f07feaa5 from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-1 ovn_controller[129257]: 2025-10-02T12:15:31Z|00110|binding|INFO|Releasing lport 727141f8-bed3-42ac-abe8-be7b66cbedbb from this chassis (sb_readonly=0)
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.523 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:8f:1e 10.100.0.4'], port_security=['fa:16:3e:d1:8f:1e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1633959326', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b86a484-6fc6-4efa-983f-fb93053b0874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1633959326', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3799c735-d38d-43c0-9348-b36c933d72da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8879d541-1199-497a-b096-b45e17e4df04) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.524 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:a4:98 19.80.0.36'], port_security=['fa:16:3e:bb:a4:98 19.80.0.36'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['8879d541-1199-497a-b096-b45e17e4df04'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1987210166', 'neutron:cidrs': '19.80.0.36/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1987210166', 'neutron:project_id': '4db2957ac1b546178a9f2c0f24807e5b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3bac25ef-a7f0-47fe-951f-dbdf1692a36b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=ba88d201-1b94-4e72-bbe3-032bdf9cfc2d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=96e672de-12ad-4022-be24-94113ee6de10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.526 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8879d541-1199-497a-b096-b45e17e4df04 in datapath 5b610572-0903-4bfb-be0b-9848e0af3ae3 unbound from our chassis
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.527 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5b610572-0903-4bfb-be0b-9848e0af3ae3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee073c81-60b9-440c-9f09-9c6e3eea85c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.530 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 namespace which is not needed anymore
Oct 02 12:15:31 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct 02 12:15:31 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 7.006s CPU time.
Oct 02 12:15:31 compute-1 systemd-machined[188247]: Machine qemu-6-instance-0000000d terminated.
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [NOTICE]   (237801) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : Exiting Master process...
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : Exiting Master process...
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [ALERT]    (237801) : Current worker (237803) exited with code 143 (Terminated)
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3[237797]: [WARNING]  (237801) : All workers exited. Exiting... (0)
Oct 02 12:15:31 compute-1 systemd[1]: libpod-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope: Deactivated successfully.
Oct 02 12:15:31 compute-1 conmon[237797]: conmon 8953e0191abc6f8c7cfb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope/container/memory.events
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.664 2 INFO nova.virt.libvirt.driver [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Instance destroyed successfully.
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.664 2 DEBUG nova.objects.instance [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lazy-loading 'resources' on Instance uuid 2b86a484-6fc6-4efa-983f-fb93053b0874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:31 compute-1 podman[240288]: 2025-10-02 12:15:31.668146323 +0000 UTC m=+0.055150975 container died 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.686 2 DEBUG nova.virt.libvirt.vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-522976997',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-522976997',id=13,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:13:25Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4db2957ac1b546178a9f2c0f24807e5b',ramdisk_id='',reservation_id='r-es3dgd0n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-211124371',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-211124371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:13:49Z,user_data=None,user_id='d29391679bd0482aada18c987e4c11ca',uuid=2b86a484-6fc6-4efa-983f-fb93053b0874,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.687 2 DEBUG nova.network.os_vif_util [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converting VIF {"id": "8879d541-1199-497a-b096-b45e17e4df04", "address": "fa:16:3e:d1:8f:1e", "network": {"id": "5b610572-0903-4bfb-be0b-9848e0af3ae3", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1579968573-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4db2957ac1b546178a9f2c0f24807e5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8879d541-11", "ovs_interfaceid": "8879d541-1199-497a-b096-b45e17e4df04", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.688 2 DEBUG nova.network.os_vif_util [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.688 2 DEBUG os_vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8879d541-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.697 2 INFO os_vif [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:8f:1e,bridge_name='br-int',has_traffic_filtering=True,id=8879d541-1199-497a-b096-b45e17e4df04,network=Network(5b610572-0903-4bfb-be0b-9848e0af3ae3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8879d541-11')
Oct 02 12:15:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-7b3b2f9ef863e45e32afb62235a57ad96286386eb759fa04a2bc6eb89ccc840d-merged.mount: Deactivated successfully.
Oct 02 12:15:31 compute-1 podman[240288]: 2025-10-02 12:15:31.717104933 +0000 UTC m=+0.104109585 container cleanup 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 systemd[1]: libpod-conmon-8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf.scope: Deactivated successfully.
Oct 02 12:15:31 compute-1 podman[240344]: 2025-10-02 12:15:31.784503813 +0000 UTC m=+0.043574222 container remove 8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db2c1e25-1d81-4961-b267-4b5699621d30]: (4, ('Thu Oct  2 12:15:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf)\n8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf\nThu Oct  2 12:15:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 (8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf)\n8953e0191abc6f8c7cfb8dc093fc7ef2110784bf2848e283a41cddcb7b433dbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea8b261-f1b4-4f00-88b2-20f91c484421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b610572-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:31 compute-1 kernel: tap5b610572-00: left promiscuous mode
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.804 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe4b492-f81a-4adc-9e04-ca92ac6fb041]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.829 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe27ba3-d685-47fc-8884-e481c1ba19ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c14f3e5c-f480-49c5-8579-259ed8f7de62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[112a14ee-38b3-4b31-af34-ca4910ace156]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508651, 'reachable_time': 41789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240362, 'error': None, 'target': 'ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.850 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5b610572-0903-4bfb-be0b-9848e0af3ae3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.851 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[360c4824-c969-44b2-be3a-cfb721fcd40b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.851 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 96e672de-12ad-4022-be24-94113ee6de10 in datapath bdc26f36-19a2-41f9-8f78-61503fbb20a7 unbound from our chassis
Oct 02 12:15:31 compute-1 systemd[1]: run-netns-ovnmeta\x2d5b610572\x2d0903\x2d4bfb\x2dbe0b\x2d9848e0af3ae3.mount: Deactivated successfully.
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.853 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bdc26f36-19a2-41f9-8f78-61503fbb20a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.854 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[727f8a43-ff2b-402c-82fd-0772770b05a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:31.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 namespace which is not needed anymore
Oct 02 12:15:31 compute-1 podman[240360]: 2025-10-02 12:15:31.902799893 +0000 UTC m=+0.062134465 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.946 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.946 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.951 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.951 2 DEBUG oslo_concurrency.lockutils [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.952 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] No waiting events found dispatching network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:31 compute-1 nova_compute[230518]: 2025-10-02 12:15:31.952 2 DEBUG nova.compute.manager [req-44905104-e53b-490f-a923-53987004d2ed req-a0681546-e667-480d-8909-88af1e5e1bf2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-unplugged-8879d541-1199-497a-b096-b45e17e4df04 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : haproxy version is 2.8.14-c23fe91
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [NOTICE]   (237876) : path to executable is /usr/sbin/haproxy
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [WARNING]  (237876) : Exiting Master process...
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [ALERT]    (237876) : Current worker (237878) exited with code 143 (Terminated)
Oct 02 12:15:31 compute-1 neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7[237872]: [WARNING]  (237876) : All workers exited. Exiting... (0)
Oct 02 12:15:31 compute-1 systemd[1]: libpod-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope: Deactivated successfully.
Oct 02 12:15:31 compute-1 conmon[237872]: conmon 8c2bd10725ec50506764 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope/container/memory.events
Oct 02 12:15:31 compute-1 podman[240395]: 2025-10-02 12:15:31.984448791 +0000 UTC m=+0.045819762 container died 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:15:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:15:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-542123bb31b236d9dd2f1f394035551e9fafc6bc648a18f686f01cdda5789143-merged.mount: Deactivated successfully.
Oct 02 12:15:32 compute-1 podman[240395]: 2025-10-02 12:15:32.026518044 +0000 UTC m=+0.087889005 container cleanup 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:15:32 compute-1 systemd[1]: libpod-conmon-8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f.scope: Deactivated successfully.
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.077 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:15:32 compute-1 podman[240425]: 2025-10-02 12:15:32.100069958 +0000 UTC m=+0.044852322 container remove 8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.106 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1eecb30e-ce16-4aca-8e1a-d2d5f5f325c7]: (4, ('Thu Oct  2 12:15:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 (8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f)\n8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f\nThu Oct  2 12:15:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 (8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f)\n8c2bd10725ec50506764ae5bca53a575be92055b09d560b413dae170fd17d71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9687ef26-0a4d-4aaa-bfaf-0e567c6626b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.108 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdc26f36-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:32 compute-1 kernel: tapbdc26f36-10: left promiscuous mode
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.128 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19adf81b-9fb1-453d-9a7d-e7d7c9a3b9f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.150 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c676e5d8-e517-4165-9329-24c6cccb42b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c376e6-9956-4d40-8d0b-cf78cc0d3988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.167 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05bfd548-7042-4b86-853d-26bb80ae7be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 508736, 'reachable_time': 37394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240440, 'error': None, 'target': 'ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.170 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bdc26f36-19a2-41f9-8f78-61503fbb20a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:32.171 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3d14238b-f66c-49cb-b217-bebac10a8ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:15:32 compute-1 ceph-mon[80926]: pgmap v1075: 305 pgs: 305 active+clean; 412 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 686 KiB/s wr, 209 op/s
Oct 02 12:15:32 compute-1 ceph-mon[80926]: osdmap e152: 3 total, 3 up, 3 in
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.269 2 INFO nova.virt.libvirt.driver [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deleting instance files /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874_del
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.271 2 INFO nova.virt.libvirt.driver [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deletion of /var/lib/nova/instances/2b86a484-6fc6-4efa-983f-fb93053b0874_del complete
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.320 2 INFO nova.compute.manager [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Took 0.89 seconds to destroy the instance on the hypervisor.
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.321 2 DEBUG oslo.service.loopingcall [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.321 2 DEBUG nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:15:32 compute-1 nova_compute[230518]: 2025-10-02 12:15:32.322 2 DEBUG nova.network.neutron [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:15:32 compute-1 systemd[1]: run-netns-ovnmeta\x2dbdc26f36\x2d19a2\x2d41f9\x2d8f78\x2d61503fbb20a7.mount: Deactivated successfully.
Oct 02 12:15:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:32.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:15:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:15:33 compute-1 nova_compute[230518]: 2025-10-02 12:15:33.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e153 e153: 3 total, 3 up, 3 in
Oct 02 12:15:33 compute-1 nova_compute[230518]: 2025-10-02 12:15:33.278 2 INFO nova.virt.libvirt.driver [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Snapshot image upload complete
Oct 02 12:15:33 compute-1 nova_compute[230518]: 2025-10-02 12:15:33.278 2 INFO nova.compute.manager [None req-004cf06c-a83a-4125-874c-5c2b88642de2 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 6.28 seconds to snapshot the instance on the hypervisor.
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.064 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.072 2 DEBUG nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.072 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.073 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.073 2 DEBUG oslo_concurrency.lockutils [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.074 2 DEBUG nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] No waiting events found dispatching network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.074 2 WARNING nova.compute.manager [req-121db445-6c1b-45f0-8c8e-6aead8d280f7 req-9cb163ed-e3bc-49bf-86c7-cc42be7949b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Received unexpected event network-vif-plugged-8879d541-1199-497a-b096-b45e17e4df04 for instance with vm_state active and task_state deleting.
Oct 02 12:15:34 compute-1 ceph-mon[80926]: pgmap v1077: 305 pgs: 305 active+clean; 393 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.5 MiB/s wr, 287 op/s
Oct 02 12:15:34 compute-1 ceph-mon[80926]: osdmap e153: 3 total, 3 up, 3 in
Oct 02 12:15:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3606645617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/442933021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.505 2 DEBUG nova.network.neutron [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.526 2 INFO nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Took 2.20 seconds to deallocate network for instance.
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.571 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.571 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:34 compute-1 nova_compute[230518]: 2025-10-02 12:15:34.779 2 DEBUG oslo_concurrency.processutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3853434744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.233 2 DEBUG oslo_concurrency.processutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.238 2 DEBUG nova.compute.provider_tree [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.252 2 DEBUG nova.scheduler.client.report [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.277 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.314 2 INFO nova.scheduler.client.report [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Deleted allocations for instance 2b86a484-6fc6-4efa-983f-fb93053b0874
Oct 02 12:15:35 compute-1 nova_compute[230518]: 2025-10-02 12:15:35.419 2 DEBUG oslo_concurrency.lockutils [None req-0c397f3f-09a7-4610-918c-5e356f0c4496 d29391679bd0482aada18c987e4c11ca 4db2957ac1b546178a9f2c0f24807e5b - - default default] Lock "2b86a484-6fc6-4efa-983f-fb93053b0874" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2688368381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3853434744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e154 e154: 3 total, 3 up, 3 in
Oct 02 12:15:36 compute-1 nova_compute[230518]: 2025-10-02 12:15:36.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:36 compute-1 ceph-mon[80926]: pgmap v1079: 305 pgs: 305 active+clean; 349 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.8 MiB/s wr, 411 op/s
Oct 02 12:15:36 compute-1 ceph-mon[80926]: osdmap e154: 3 total, 3 up, 3 in
Oct 02 12:15:36 compute-1 nova_compute[230518]: 2025-10-02 12:15:36.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:36 compute-1 nova_compute[230518]: 2025-10-02 12:15:36.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:36 compute-1 nova_compute[230518]: 2025-10-02 12:15:36.742 2 DEBUG nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:36 compute-1 nova_compute[230518]: 2025-10-02 12:15:36.798 2 INFO nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] instance snapshotting
Oct 02 12:15:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:36.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.195 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:37 compute-1 ovn_controller[129257]: 2025-10-02T12:15:37Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cc:6f:ab 10.100.0.13
Oct 02 12:15:37 compute-1 ovn_controller[129257]: 2025-10-02T12:15:37Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cc:6f:ab 10.100.0.13
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.322 2 INFO nova.virt.libvirt.driver [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Beginning live snapshot process
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.468 2 DEBUG nova.virt.libvirt.imagebackend [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:15:37 compute-1 nova_compute[230518]: 2025-10-02 12:15:37.710 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(330bd4c8e7db498babe577123d235fe2) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:15:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/513948928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.095 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.096 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e155 e155: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.124 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.124 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.125 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3304084597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.615 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.766 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.767 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.770 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.770 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.773 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.773 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:15:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.928 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4277MB free_disk=20.866817474365234GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.930 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:38 compute-1 nova_compute[230518]: 2025-10-02 12:15:38.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:38 compute-1 ceph-mon[80926]: pgmap v1081: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.5 MiB/s wr, 417 op/s
Oct 02 12:15:38 compute-1 ceph-mon[80926]: osdmap e155: 3 total, 3 up, 3 in
Oct 02 12:15:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3304084597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:38.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.004 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 01eee71c-078c-41f4-a1c1-4591cab7195e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.005 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.005 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.054 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e156 e156: 3 total, 3 up, 3 in
Oct 02 12:15:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2642791850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.501 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.507 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.532 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.554 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.555 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.555 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.556 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:15:39 compute-1 nova_compute[230518]: 2025-10-02 12:15:39.570 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:15:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:15:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3265 syncs, 3.60 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5392 writes, 20K keys, 5392 commit groups, 1.0 writes per commit group, ingest: 23.58 MB, 0.04 MB/s
                                           Interval WAL: 5392 writes, 2136 syncs, 2.52 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.227 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.228 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.256 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.313 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.314 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.319 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.319 2 INFO nova.compute.claims [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:15:40 compute-1 ceph-mon[80926]: pgmap v1083: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 260 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 295 op/s
Oct 02 12:15:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/565957413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2642791850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-1 ceph-mon[80926]: osdmap e156: 3 total, 3 up, 3 in
Oct 02 12:15:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2178282675' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.460 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:15:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4150019923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.899 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.904 2 DEBUG nova.compute.provider_tree [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.937 2 DEBUG nova.scheduler.client.report [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.982 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:40 compute-1 nova_compute[230518]: 2025-10-02 12:15:40.983 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:15:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:41.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.035 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.035 2 DEBUG nova.network.neutron [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.054 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.070 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.154 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.156 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.157 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating image(s)
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.187 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.215 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.243 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.246 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.307 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.308 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.309 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.310 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.335 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.339 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.692 2 DEBUG nova.network.neutron [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.693 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.824 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] cloning vms/01eee71c-078c-41f4-a1c1-4591cab7195e_disk@330bd4c8e7db498babe577123d235fe2 to images/28a7bd8e-0ddc-4cda-9f64-7c1162716074 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:15:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4150019923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3717745806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:41 compute-1 nova_compute[230518]: 2025-10-02 12:15:41.965 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.023 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] resizing rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.108 2 DEBUG nova.objects.instance [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.121 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.121 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Ensure instance console log exists: /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.122 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.122 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.123 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.125 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.130 2 WARNING nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.134 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.134 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.137 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.137 2 DEBUG nova.virt.libvirt.host [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:15:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bba9bb99-43dc-47a6-9261-f8d87f6d4f9b',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-525834944',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.139 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.140 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.141 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.142 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.142 2 DEBUG nova.virt.hardware [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.145 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.521 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407327.5200894, b8f8f97e-2823-451c-ab36-7f94ade8be46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.522 2 INFO nova.compute.manager [-] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] VM Stopped (Lifecycle Event)
Oct 02 12:15:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1971211061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.552 2 DEBUG nova.compute.manager [None req-f43bd273-936c-4b3c-9f0e-39a0d34460df - - - - - -] [instance: b8f8f97e-2823-451c-ab36-7f94ade8be46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.573 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.596 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:42 compute-1 nova_compute[230518]: 2025-10-02 12:15:42.600 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999983s ======
Oct 02 12:15:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999983s
Oct 02 12:15:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:43.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:15:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/623290968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.034 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.036 2 DEBUG nova.objects.instance [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.042 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:15:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.043 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:15:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:15:43.043 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.058 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <uuid>80f9c3a4-aadc-4519-a451-8ce36d37b598</uuid>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <name>instance-00000018</name>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:name>tempest-MigrationsAdminTest-server-201463142</nova:name>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:15:42</nova:creationTime>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:flavor name="tempest-test_resize_flavor_-525834944">
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <system>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="serial">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="uuid">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </system>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <os>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </os>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <features>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </features>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk">
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config">
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:15:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log" append="off"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <video>
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </video>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:15:43 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:15:43 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:15:43 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:15:43 compute-1 nova_compute[230518]: </domain>
Oct 02 12:15:43 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.132 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.133 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.134 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Using config drive
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.169 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.587 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Creating config drive at /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.592 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy3d4_jn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:43 compute-1 ceph-mon[80926]: pgmap v1085: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 282 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 215 op/s
Oct 02 12:15:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2234252399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1971211061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.719 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdy3d4_jn" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.753 2 DEBUG nova.storage.rbd_utils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rbd image 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.758 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.947 2 DEBUG oslo_concurrency.processutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config 80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:43 compute-1 nova_compute[230518]: 2025-10-02 12:15:43.948 2 INFO nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deleting local config drive /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/disk.config because it was imported into RBD.
Oct 02 12:15:44 compute-1 systemd-machined[188247]: New machine qemu-13-instance-00000018.
Oct 02 12:15:44 compute-1 systemd[1]: Started Virtual Machine qemu-13-instance-00000018.
Oct 02 12:15:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e157 e157: 3 total, 3 up, 3 in
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid 01eee71c-078c-41f4-a1c1-4591cab7195e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.727 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.729 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.730 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] During sync_power_state the instance has a pending task (image_uploading). Skip.
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.794 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.795 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.895 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407344.8951168, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.896 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Resumed (Lifecycle Event)
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.898 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.898 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.901 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance spawned successfully.
Oct 02 12:15:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.901 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:15:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:44.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.921 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.921 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.922 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.922 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.923 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.923 2 DEBUG nova.virt.libvirt.driver [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.926 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.928 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.987 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.987 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407344.8957467, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:44 compute-1 nova_compute[230518]: 2025-10-02 12:15:44.988 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Started (Lifecycle Event)
Oct 02 12:15:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.013 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.016 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.019 2 INFO nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 3.86 seconds to spawn the instance on the hypervisor.
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.019 2 DEBUG nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:45 compute-1 ceph-mon[80926]: pgmap v1086: 305 pgs: 305 active+clean; 323 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.3 MiB/s wr, 282 op/s
Oct 02 12:15:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/623290968' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:45 compute-1 ceph-mon[80926]: osdmap e157: 3 total, 3 up, 3 in
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.052 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.101 2 INFO nova.compute.manager [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 4.80 seconds to build instance.
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.117 2 DEBUG oslo_concurrency.lockutils [None req-8a076c55-4e99-4e32-bdd1-58403c78b507 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.117 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.118 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:15:45 compute-1 nova_compute[230518]: 2025-10-02 12:15:45.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:15:46 compute-1 ceph-mon[80926]: pgmap v1088: 305 pgs: 305 active+clean; 361 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.2 MiB/s wr, 273 op/s
Oct 02 12:15:46 compute-1 nova_compute[230518]: 2025-10-02 12:15:46.627 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] flattening images/28a7bd8e-0ddc-4cda-9f64-7c1162716074 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:15:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:47 compute-1 nova_compute[230518]: 2025-10-02 12:15:47.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:47 compute-1 nova_compute[230518]: 2025-10-02 12:15:47.175 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407331.6597059, 2b86a484-6fc6-4efa-983f-fb93053b0874 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:15:47 compute-1 nova_compute[230518]: 2025-10-02 12:15:47.175 2 INFO nova.compute.manager [-] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] VM Stopped (Lifecycle Event)
Oct 02 12:15:47 compute-1 nova_compute[230518]: 2025-10-02 12:15:47.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:15:47 compute-1 nova_compute[230518]: 2025-10-02 12:15:47.200 2 DEBUG nova.compute.manager [None req-5f076fb4-99dd-4052-a2e7-8a143953df25 - - - - - -] [instance: 2b86a484-6fc6-4efa-983f-fb93053b0874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:15:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:48 compute-1 ceph-mon[80926]: pgmap v1089: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 372 op/s
Oct 02 12:15:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e158 e158: 3 total, 3 up, 3 in
Oct 02 12:15:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2599039566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-1 nova_compute[230518]: 2025-10-02 12:15:50.311 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:15:50 compute-1 nova_compute[230518]: 2025-10-02 12:15:50.312 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:15:50 compute-1 nova_compute[230518]: 2025-10-02 12:15:50.312 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:15:50 compute-1 nova_compute[230518]: 2025-10-02 12:15:50.334 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] removing snapshot(330bd4c8e7db498babe577123d235fe2) on rbd image(01eee71c-078c-41f4-a1c1-4591cab7195e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:15:50 compute-1 ceph-mon[80926]: pgmap v1090: 305 pgs: 305 active+clean; 367 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 322 op/s
Oct 02 12:15:50 compute-1 ceph-mon[80926]: osdmap e158: 3 total, 3 up, 3 in
Oct 02 12:15:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1700940213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4108361957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:15:50 compute-1 podman[240997]: 2025-10-02 12:15:50.8224703 +0000 UTC m=+0.065289524 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:15:50 compute-1 podman[240996]: 2025-10-02 12:15:50.851939557 +0000 UTC m=+0.099316375 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 02 12:15:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.162 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.580 2 DEBUG nova.network.neutron [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.596 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:15:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/556109171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/456174084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.718 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.719 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating file /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.719 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:51 compute-1 nova_compute[230518]: 2025-10-02 12:15:51.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e159 e159: 3 total, 3 up, 3 in
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.348 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp" returned: 1 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.348 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/69657b36550b4a41a2ba62ead760d127.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.349 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Creating directory /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.349 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.613 2 DEBUG oslo_concurrency.processutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:15:52 compute-1 nova_compute[230518]: 2025-10-02 12:15:52.617 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:15:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:52 compute-1 ceph-mon[80926]: pgmap v1092: 305 pgs: 305 active+clean; 432 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.7 MiB/s wr, 371 op/s
Oct 02 12:15:52 compute-1 ceph-mon[80926]: osdmap e159: 3 total, 3 up, 3 in
Oct 02 12:15:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2705301429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:53 compute-1 nova_compute[230518]: 2025-10-02 12:15:53.156 2 DEBUG nova.storage.rbd_utils [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] creating snapshot(snap) on rbd image(28a7bd8e-0ddc-4cda-9f64-7c1162716074) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:15:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/407417649' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:15:54 compute-1 ceph-mon[80926]: pgmap v1094: 305 pgs: 305 active+clean; 515 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 380 op/s
Oct 02 12:15:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e160 e160: 3 total, 3 up, 3 in
Oct 02 12:15:56 compute-1 ceph-mon[80926]: pgmap v1095: 305 pgs: 305 active+clean; 531 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 254 op/s
Oct 02 12:15:56 compute-1 ceph-mon[80926]: osdmap e160: 3 total, 3 up, 3 in
Oct 02 12:15:56 compute-1 nova_compute[230518]: 2025-10-02 12:15:56.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:15:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:15:57 compute-1 nova_compute[230518]: 2025-10-02 12:15:57.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:15:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:15:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e161 e161: 3 total, 3 up, 3 in
Oct 02 12:15:58 compute-1 ceph-mon[80926]: pgmap v1097: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 14 MiB/s wr, 339 op/s
Oct 02 12:15:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:15:58.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:15:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:15:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:15:59.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:15:59 compute-1 ceph-mon[80926]: osdmap e161: 3 total, 3 up, 3 in
Oct 02 12:16:00 compute-1 nova_compute[230518]: 2025-10-02 12:16:00.438 2 INFO nova.virt.libvirt.driver [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Snapshot image upload complete
Oct 02 12:16:00 compute-1 nova_compute[230518]: 2025-10-02 12:16:00.439 2 INFO nova.compute.manager [None req-05f1786f-9130-436f-933c-de9baf65d5ab 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 23.64 seconds to snapshot the instance on the hypervisor.
Oct 02 12:16:00 compute-1 podman[241059]: 2025-10-02 12:16:00.825754283 +0000 UTC m=+0.076862049 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 12:16:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:01.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:01 compute-1 ceph-mon[80926]: pgmap v1099: 305 pgs: 305 active+clean; 552 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 3.1 MiB/s wr, 100 op/s
Oct 02 12:16:01 compute-1 nova_compute[230518]: 2025-10-02 12:16:01.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:02 compute-1 nova_compute[230518]: 2025-10-02 12:16:02.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:02 compute-1 nova_compute[230518]: 2025-10-02 12:16:02.656 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:16:02 compute-1 podman[241079]: 2025-10-02 12:16:02.808177332 +0000 UTC m=+0.053230635 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:03.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:03 compute-1 ceph-mon[80926]: pgmap v1100: 305 pgs: 305 active+clean; 581 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 260 op/s
Oct 02 12:16:04 compute-1 ceph-mon[80926]: pgmap v1101: 305 pgs: 305 active+clean; 592 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.7 MiB/s wr, 388 op/s
Oct 02 12:16:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:05.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.346 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.346 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.369 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.455 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.456 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.463 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.464 2 INFO nova.compute.claims [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:16:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e162 e162: 3 total, 3 up, 3 in
Oct 02 12:16:05 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 02 12:16:05 compute-1 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Consumed 15.175s CPU time.
Oct 02 12:16:05 compute-1 systemd-machined[188247]: Machine qemu-13-instance-00000018 terminated.
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.748 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.779 2 INFO nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance shutdown successfully after 13 seconds.
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.786 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance destroyed successfully.
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.790 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.790 2 DEBUG nova.virt.libvirt.driver [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.883 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.884 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:05 compute-1 nova_compute[230518]: 2025-10-02 12:16:05.885 2 DEBUG oslo_concurrency.lockutils [None req-6a22cdb2-c835-45af-85de-f88bc51703f1 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:16:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/491089654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:16:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/70985564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.191 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.196 2 DEBUG nova.compute.provider_tree [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.249 2 DEBUG nova.scheduler.client.report [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.292 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.303 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.304 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.317 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] No node specified, defaulting to compute-1.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.384 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "936eccfb-dac9-41d4-9d6f-5c3a7769f1e9" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.384 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.465 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.465 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.491 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.511 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.679 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.680 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.680 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating image(s)
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.706 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.739 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.770 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.773 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.802 2 DEBUG nova.network.neutron [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.803 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.870 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.870 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.871 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.871 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.913 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:06 compute-1 nova_compute[230518]: 2025-10-02 12:16:06.917 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ce696fa7-391a-4679-a805-f85d85077164_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:06 compute-1 ceph-mon[80926]: pgmap v1102: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.7 MiB/s wr, 388 op/s
Oct 02 12:16:06 compute-1 ceph-mon[80926]: osdmap e162: 3 total, 3 up, 3 in
Oct 02 12:16:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3103502843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/70985564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:07 compute-1 nova_compute[230518]: 2025-10-02 12:16:07.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e163 e163: 3 total, 3 up, 3 in
Oct 02 12:16:08 compute-1 ceph-mon[80926]: pgmap v1104: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.6 MiB/s wr, 402 op/s
Oct 02 12:16:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:09 compute-1 nova_compute[230518]: 2025-10-02 12:16:09.480 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ce696fa7-391a-4679-a805-f85d85077164_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:09 compute-1 nova_compute[230518]: 2025-10-02 12:16:09.561 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] resizing rbd image ce696fa7-391a-4679-a805-f85d85077164_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:16:09 compute-1 ceph-mon[80926]: osdmap e163: 3 total, 3 up, 3 in
Oct 02 12:16:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2208594399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.044 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'migration_context' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.069 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.070 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Ensure instance console log exists: /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.070 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.071 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.071 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.072 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.077 2 WARNING nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.082 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.083 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.086 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.087 2 DEBUG nova.virt.libvirt.host [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.088 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.088 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.089 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.090 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.091 2 DEBUG nova.virt.hardware [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.094 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:16:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5075 writes, 26K keys, 5075 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
                                           Cumulative WAL: 5075 writes, 5075 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1573 writes, 7655 keys, 1573 commit groups, 1.0 writes per commit group, ingest: 16.04 MB, 0.03 MB/s
                                           Interval WAL: 1573 writes, 1573 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.9      0.40              0.09        14    0.028       0      0       0.0       0.0
                                             L6      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    142.0    117.9      0.92              0.28        13    0.071     61K   6843       0.0       0.0
                                            Sum      1/0    8.64 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     99.4    105.9      1.32              0.37        27    0.049     61K   6843       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   5.2    106.8    109.4      0.47              0.12        10    0.047     25K   2535       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    142.0    117.9      0.92              0.28        13    0.071     61K   6843       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.3      0.39              0.09        13    0.030       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.030, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.08 MB/s write, 0.13 GB read, 0.07 MB/s read, 1.3 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 12.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000203 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(711,11.76 MB,3.86995%) FilterBlock(27,176.05 KB,0.0565529%) IndexBlock(27,329.98 KB,0.106003%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:16:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1338855465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.553 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.581 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:10 compute-1 nova_compute[230518]: 2025-10-02 12:16:10.587 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:10 compute-1 ceph-mon[80926]: pgmap v1106: 305 pgs: 305 active+clean; 606 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 931 KiB/s wr, 236 op/s
Oct 02 12:16:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/249885067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2134746537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2701199019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1338855465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:10.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250954769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.081 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.083 2 DEBUG nova.objects.instance [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.108 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <uuid>ce696fa7-391a-4679-a805-f85d85077164</uuid>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <name>instance-0000001c</name>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersOnMultiNodesTest-server-1094595277-2</nova:name>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:16:10</nova:creationTime>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:user uuid="27279919e67c49e1a04b6eec249ecc87">tempest-ServersOnMultiNodesTest-348944321-project-member</nova:user>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <nova:project uuid="a5ac6058475f4875b46ae8f3c4ff33e8">tempest-ServersOnMultiNodesTest-348944321</nova:project>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <system>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="serial">ce696fa7-391a-4679-a805-f85d85077164</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="uuid">ce696fa7-391a-4679-a805-f85d85077164</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </system>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <os>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </os>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <features>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </features>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ce696fa7-391a-4679-a805-f85d85077164_disk">
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </source>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ce696fa7-391a-4679-a805-f85d85077164_disk.config">
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </source>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:16:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/console.log" append="off"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <video>
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </video>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:16:11 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:16:11 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:16:11 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:16:11 compute-1 nova_compute[230518]: </domain>
Oct 02 12:16:11 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.172 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.172 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.173 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Using config drive
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.208 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.390 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Creating config drive at /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.394 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__tic_yv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.529 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__tic_yv" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.564 2 DEBUG nova.storage.rbd_utils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] rbd image ce696fa7-391a-4679-a805-f85d85077164_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.567 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config ce696fa7-391a-4679-a805-f85d85077164_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.646 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.647 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.648 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.648 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.649 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.650 2 INFO nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Terminating instance
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.651 2 DEBUG nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.718 2 DEBUG oslo_concurrency.processutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config ce696fa7-391a-4679-a805-f85d85077164_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.719 2 INFO nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deleting local config drive /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164/disk.config because it was imported into RBD.
Oct 02 12:16:11 compute-1 kernel: tap0a7827d1-d2 (unregistering): left promiscuous mode
Oct 02 12:16:11 compute-1 NetworkManager[44960]: <info>  [1759407371.7402] device (tap0a7827d1-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:16:11 compute-1 ovn_controller[129257]: 2025-10-02T12:16:11Z|00111|binding|INFO|Releasing lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 from this chassis (sb_readonly=0)
Oct 02 12:16:11 compute-1 ovn_controller[129257]: 2025-10-02T12:16:11Z|00112|binding|INFO|Setting lport 0a7827d1-d2e0-4330-b738-ee929dc7af48 down in Southbound
Oct 02 12:16:11 compute-1 ovn_controller[129257]: 2025-10-02T12:16:11Z|00113|binding|INFO|Removing iface tap0a7827d1-d2 ovn-installed in OVS
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.772 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:6f:ab 10.100.0.13'], port_security=['fa:16:3e:cc:6f:ab 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '01eee71c-078c-41f4-a1c1-4591cab7195e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd92e60d304e64805972937813fc99606', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13422694-ff96-4d03-9ea0-adedb130ec76', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cf5d7d6-9d03-4d57-a5e5-97ce6dc98b2e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=0a7827d1-d2e0-4330-b738-ee929dc7af48) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:11 compute-1 ceph-mon[80926]: pgmap v1107: 305 pgs: 305 active+clean; 661 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.1 MiB/s wr, 152 op/s
Oct 02 12:16:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3250954769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 0a7827d1-d2e0-4330-b738-ee929dc7af48 in datapath 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 unbound from our chassis
Oct 02 12:16:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.775 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:16:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99b272dd-ab27-44ae-9dc5-02cf87c1581b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:11.776 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 namespace which is not needed anymore
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-1 systemd-machined[188247]: New machine qemu-14-instance-0000001c.
Oct 02 12:16:11 compute-1 systemd[1]: Started Virtual Machine qemu-14-instance-0000001c.
Oct 02 12:16:11 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct 02 12:16:11 compute-1 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Consumed 14.356s CPU time.
Oct 02 12:16:11 compute-1 systemd-machined[188247]: Machine qemu-11-instance-00000016 terminated.
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.882 2 INFO nova.virt.libvirt.driver [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Instance destroyed successfully.
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.884 2 DEBUG nova.objects.instance [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lazy-loading 'resources' on Instance uuid 01eee71c-078c-41f4-a1c1-4591cab7195e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:11 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : haproxy version is 2.8.14-c23fe91
Oct 02 12:16:11 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [NOTICE]   (239840) : path to executable is /usr/sbin/haproxy
Oct 02 12:16:11 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [WARNING]  (239840) : Exiting Master process...
Oct 02 12:16:11 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [ALERT]    (239840) : Current worker (239842) exited with code 143 (Terminated)
Oct 02 12:16:11 compute-1 neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5[239836]: [WARNING]  (239840) : All workers exited. Exiting... (0)
Oct 02 12:16:11 compute-1 systemd[1]: libpod-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope: Deactivated successfully.
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.907 2 DEBUG nova.virt.libvirt.vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:15:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1959096416',display_name='tempest-ImagesOneServerTestJSON-server-1959096416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1959096416',id=22,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:15:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d92e60d304e64805972937813fc99606',ramdisk_id='',reservation_id='r-nww065mo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-572210404',owner_user_name='tempest-ImagesOneServerTestJSON-572210404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:00Z,user_data=None,user_id='79b88925d1704f5c9b3d2114c1a9ae4f',uuid=01eee71c-078c-41f4-a1c1-4591cab7195e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.907 2 DEBUG nova.network.os_vif_util [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converting VIF {"id": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "address": "fa:16:3e:cc:6f:ab", "network": {"id": "377fcfd9-a6d0-4567-bd23-a9d9c96adbd5", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1611413034-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d92e60d304e64805972937813fc99606", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a7827d1-d2", "ovs_interfaceid": "0a7827d1-d2e0-4330-b738-ee929dc7af48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.908 2 DEBUG nova.network.os_vif_util [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.909 2 DEBUG os_vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a7827d1-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:11 compute-1 podman[241450]: 2025-10-02 12:16:11.913213853 +0000 UTC m=+0.048608840 container died c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:16:11 compute-1 nova_compute[230518]: 2025-10-02 12:16:11.916 2 INFO os_vif [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:6f:ab,bridge_name='br-int',has_traffic_filtering=True,id=0a7827d1-d2e0-4330-b738-ee929dc7af48,network=Network(377fcfd9-a6d0-4567-bd23-a9d9c96adbd5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a7827d1-d2')
Oct 02 12:16:11 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe-userdata-shm.mount: Deactivated successfully.
Oct 02 12:16:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-71439b31039a617ef0686867304ef27ca8200532d38131be38c55e1cda6f86ef-merged.mount: Deactivated successfully.
Oct 02 12:16:11 compute-1 podman[241450]: 2025-10-02 12:16:11.958456616 +0000 UTC m=+0.093851603 container cleanup c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:16:11 compute-1 systemd[1]: libpod-conmon-c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe.scope: Deactivated successfully.
Oct 02 12:16:12 compute-1 podman[241505]: 2025-10-02 12:16:12.017593666 +0000 UTC m=+0.039442611 container remove c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.023 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fe1998-8eda-48e7-acd3-7168c9b562fb]: (4, ('Thu Oct  2 12:16:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 (c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe)\nc62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe\nThu Oct  2 12:16:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 (c62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe)\nc62ea193ddcbca14651a9e8ff3bf68be4995f6527d79e5f957112c650b70d5fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.024 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9574594-66ba-4b22-8d1e-382f3246e381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.025 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap377fcfd9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:12 compute-1 kernel: tap377fcfd9-a0: left promiscuous mode
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.053 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[104488f1-530e-435e-9592-dc20ef5f3e33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.079 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[039395ca-4c05-4cc5-9168-bb90629acfe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b743b5-97a2-4be1-a5b2-d719e4c8cbcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.097 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[58c4ba06-d412-463f-b3ee-a8a8bf3deb0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518695, 'reachable_time': 40754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241560, 'error': None, 'target': 'ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 systemd[1]: run-netns-ovnmeta\x2d377fcfd9\x2da6d0\x2d4567\x2dbd23\x2da9d9c96adbd5.mount: Deactivated successfully.
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.102 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-377fcfd9-a6d0-4567-bd23-a9d9c96adbd5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:16:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:12.102 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[45a774b5-cb35-4c72-bbe7-5eddaa252ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.571 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407372.5715215, ce696fa7-391a-4679-a805-f85d85077164 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.572 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Resumed (Lifecycle Event)
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.575 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.575 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.578 2 INFO nova.virt.libvirt.driver [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance spawned successfully.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.578 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.624 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.628 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.628 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.629 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.630 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.630 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.631 2 DEBUG nova.virt.libvirt.driver [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.635 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.684 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407372.5725222, ce696fa7-391a-4679-a805-f85d85077164 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.684 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Started (Lifecycle Event)
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.722 2 INFO nova.virt.libvirt.driver [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deleting instance files /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e_del
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.723 2 INFO nova.virt.libvirt.driver [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deletion of /var/lib/nova/instances/01eee71c-078c-41f4-a1c1-4591cab7195e_del complete
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.729 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 6.05 seconds to spawn the instance on the hypervisor.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.729 2 DEBUG nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.731 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.736 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.777 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.786 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG oslo_concurrency.lockutils [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.787 2 DEBUG nova.compute.manager [req-6a13efbd-4562-4196-bf84-86ddbba50fc8 req-a5968c78-1471-4fdc-87f5-ae17ffa47f18 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-unplugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.809 2 INFO nova.compute.manager [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 1.16 seconds to destroy the instance on the hypervisor.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.809 2 DEBUG oslo.service.loopingcall [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.809 2 DEBUG nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.810 2 DEBUG nova.network.neutron [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.812 2 INFO nova.compute.manager [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 7.39 seconds to build instance.
Oct 02 12:16:12 compute-1 nova_compute[230518]: 2025-10-02 12:16:12.850 2 DEBUG oslo_concurrency.lockutils [None req-670e03a4-05ef-41b7-bbb6-8aea256e9564 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:13.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e164 e164: 3 total, 3 up, 3 in
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.826 2 DEBUG nova.network.neutron [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.864 2 DEBUG nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-deleted-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.864 2 INFO nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Neutron deleted interface 0a7827d1-d2e0-4330-b738-ee929dc7af48; detaching it from the instance and deleting it from the info cache
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.865 2 DEBUG nova.network.neutron [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:13 compute-1 ceph-mon[80926]: pgmap v1108: 305 pgs: 305 active+clean; 656 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 7.3 MiB/s wr, 190 op/s
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.953 2 INFO nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Took 1.14 seconds to deallocate network for instance.
Oct 02 12:16:13 compute-1 nova_compute[230518]: 2025-10-02 12:16:13.962 2 DEBUG nova.compute.manager [req-91159b48-a14e-42ac-ba68-ef7ea96f4628 req-4cd22576-5394-4d7f-aca3-8410274ac3a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Detach interface failed, port_id=0a7827d1-d2e0-4330-b738-ee929dc7af48, reason: Instance 01eee71c-078c-41f4-a1c1-4591cab7195e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.106 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.107 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.244 2 DEBUG oslo_concurrency.processutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/540730844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.681 2 DEBUG oslo_concurrency.processutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.686 2 DEBUG nova.compute.provider_tree [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.719 2 DEBUG nova.scheduler.client.report [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.744 2 INFO nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Swapping old allocation on dict_keys(['730da6ce-9754-46f0-88e3-0019d056443f']) held by migration e3ae1389-09cd-481c-9d83-8b061ca8b765 for instance
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.754 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.810 2 DEBUG nova.scheduler.client.report [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Overwriting current allocation {'allocations': {'8733289a-aa77-4139-9e88-bac686174c8d': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 15}}, 'project_id': '3d306048f2854052ba5317253b834aa7', 'user_id': 'ac1b39d94ed94e2490ad953afb3c225f', 'consumer_generation': 1} on consumer 80f9c3a4-aadc-4519-a451-8ce36d37b598 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.813 2 INFO nova.scheduler.client.report [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Deleted allocations for instance 01eee71c-078c-41f4-a1c1-4591cab7195e
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.898 2 DEBUG nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.899 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.899 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.900 2 DEBUG oslo_concurrency.lockutils [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.900 2 DEBUG nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] No waiting events found dispatching network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.900 2 WARNING nova.compute.manager [req-86177c48-c7d6-40ac-82ed-7590d22a7056 req-62bc6f8e-eb02-436f-a7ab-46e868529c86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Received unexpected event network-vif-plugged-0a7827d1-d2e0-4330-b738-ee929dc7af48 for instance with vm_state deleted and task_state None.
Oct 02 12:16:14 compute-1 nova_compute[230518]: 2025-10-02 12:16:14.938 2 DEBUG oslo_concurrency.lockutils [None req-6e2674ba-282d-4b1d-bdf3-1e726865251f 79b88925d1704f5c9b3d2114c1a9ae4f d92e60d304e64805972937813fc99606 - - default default] Lock "01eee71c-078c-41f4-a1c1-4591cab7195e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:14 compute-1 sudo[241585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:14 compute-1 sudo[241585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:14 compute-1 sudo[241585]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 sudo[241610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:15 compute-1 sudo[241610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 sudo[241610]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:15.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:15 compute-1 ceph-mon[80926]: osdmap e164: 3 total, 3 up, 3 in
Oct 02 12:16:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2972657979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/540730844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.058 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:15 compute-1 sudo[241635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:15 compute-1 sudo[241635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 sudo[241635]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 sudo[241660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:16:15 compute-1 sudo[241660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.174 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:15 compute-1 sudo[241660]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.406 2 DEBUG nova.network.neutron [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.441 2 DEBUG oslo_concurrency.lockutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.442 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.471 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.472 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.473 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "ce696fa7-391a-4679-a805-f85d85077164-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.473 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.474 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.475 2 INFO nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Terminating instance
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.477 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.477 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquired lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.478 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.525 2 DEBUG nova.storage.rbd_utils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] rolling back rbd image(80f9c3a4-aadc-4519-a451-8ce36d37b598_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:16:15 compute-1 nova_compute[230518]: 2025-10-02 12:16:15.705 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:15 compute-1 sudo[241741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:15 compute-1 sudo[241741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 sudo[241741]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 sudo[241766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:15 compute-1 sudo[241766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 sudo[241766]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:15 compute-1 sudo[241791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:15 compute-1 sudo[241791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:15 compute-1 sudo[241791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-1 sudo[241816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:16:16 compute-1 sudo[241816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.032 2 DEBUG nova.network.neutron [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.054 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Releasing lock "refresh_cache-ce696fa7-391a-4679-a805-f85d85077164" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.054 2 DEBUG nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.116 2 DEBUG nova.storage.rbd_utils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] removing snapshot(nova-resize) on rbd image(80f9c3a4-aadc-4519-a451-8ce36d37b598_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:16:16 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct 02 12:16:16 compute-1 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001c.scope: Consumed 4.274s CPU time.
Oct 02 12:16:16 compute-1 systemd-machined[188247]: Machine qemu-14-instance-0000001c terminated.
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.476 2 INFO nova.virt.libvirt.driver [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance destroyed successfully.
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.477 2 DEBUG nova.objects.instance [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lazy-loading 'resources' on Instance uuid ce696fa7-391a-4679-a805-f85d85077164 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:16 compute-1 sudo[241816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-1 sudo[241910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:16 compute-1 sudo[241910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-1 sudo[241910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-1 sudo[241935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:16:16 compute-1 sudo[241935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-1 sudo[241935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:16 compute-1 sudo[241960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:16 compute-1 sudo[241960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-1 sudo[241960]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:16 compute-1 ceph-mon[80926]: pgmap v1110: 305 pgs: 305 active+clean; 648 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 10 MiB/s wr, 460 op/s
Oct 02 12:16:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:16 compute-1 sudo[241985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:16:16 compute-1 sudo[241985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:16 compute-1 nova_compute[230518]: 2025-10-02 12:16:16.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.147787956 +0000 UTC m=+0.040352361 container create 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct 02 12:16:17 compute-1 systemd[1]: Started libpod-conmon-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope.
Oct 02 12:16:17 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.2175514 +0000 UTC m=+0.110115805 container init 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.225035165 +0000 UTC m=+0.117599570 container start 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.131248595 +0000 UTC m=+0.023813020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.227703359 +0000 UTC m=+0.120267794 container attach 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:17 compute-1 beautiful_cori[242066]: 167 167
Oct 02 12:16:17 compute-1 systemd[1]: libpod-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope: Deactivated successfully.
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.230956031 +0000 UTC m=+0.123520436 container died 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:16:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-5b6e36be807881c84d3f7aba8ab500306c5c7ec5674520b22a7b8214f9fcc011-merged.mount: Deactivated successfully.
Oct 02 12:16:17 compute-1 podman[242050]: 2025-10-02 12:16:17.274689237 +0000 UTC m=+0.167253642 container remove 73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_cori, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:16:17 compute-1 systemd[1]: libpod-conmon-73acb9f64856652f209b1b6f42105f8eaf9692f085987286c1d045f77b93ef62.scope: Deactivated successfully.
Oct 02 12:16:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 e165: 3 total, 3 up, 3 in
Oct 02 12:16:17 compute-1 podman[242090]: 2025-10-02 12:16:17.422132325 +0000 UTC m=+0.038372699 container create 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:16:17 compute-1 systemd[1]: Started libpod-conmon-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope.
Oct 02 12:16:17 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:16:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:17 compute-1 podman[242090]: 2025-10-02 12:16:17.405867052 +0000 UTC m=+0.022107446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:16:17 compute-1 podman[242090]: 2025-10-02 12:16:17.50658566 +0000 UTC m=+0.122826054 container init 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:17 compute-1 podman[242090]: 2025-10-02 12:16:17.512418234 +0000 UTC m=+0.128658608 container start 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 12:16:17 compute-1 podman[242090]: 2025-10-02 12:16:17.516263534 +0000 UTC m=+0.132503918 container attach 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.690 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.695 2 WARNING nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.699 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.700 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.703 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.703 2 DEBUG nova.virt.libvirt.host [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.704 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.704 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:15:38Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bba9bb99-43dc-47a6-9261-f8d87f6d4f9b',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-525834944',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.705 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.706 2 DEBUG nova.virt.hardware [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.707 2 DEBUG nova.objects.instance [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:17 compute-1 nova_compute[230518]: 2025-10-02 12:16:17.733 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/164602721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:18 compute-1 nova_compute[230518]: 2025-10-02 12:16:18.148 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:18 compute-1 nova_compute[230518]: 2025-10-02 12:16:18.183 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:18 compute-1 ceph-mon[80926]: osdmap e165: 3 total, 3 up, 3 in
Oct 02 12:16:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:16:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1953203193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:18 compute-1 condescending_jackson[242106]: [
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:     {
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "available": false,
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "ceph_device": false,
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "lsm_data": {},
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "lvs": [],
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "path": "/dev/sr0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "rejected_reasons": [
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "Insufficient space (<5GB)",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "Has a FileSystem"
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         ],
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         "sys_api": {
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "actuators": null,
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "device_nodes": "sr0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "devname": "sr0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "human_readable_size": "482.00 KB",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "id_bus": "ata",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "model": "QEMU DVD-ROM",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "nr_requests": "2",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "parent": "/dev/sr0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "partitions": {},
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "path": "/dev/sr0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "removable": "1",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "rev": "2.5+",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "ro": "0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "rotational": "0",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "sas_address": "",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "sas_device_handle": "",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "scheduler_mode": "mq-deadline",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "sectors": 0,
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "sectorsize": "2048",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "size": 493568.0,
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "support_discard": "2048",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "type": "disk",
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:             "vendor": "QEMU"
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:         }
Oct 02 12:16:18 compute-1 condescending_jackson[242106]:     }
Oct 02 12:16:18 compute-1 condescending_jackson[242106]: ]
Oct 02 12:16:18 compute-1 systemd[1]: libpod-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Deactivated successfully.
Oct 02 12:16:18 compute-1 systemd[1]: libpod-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Consumed 1.164s CPU time.
Oct 02 12:16:18 compute-1 podman[242090]: 2025-10-02 12:16:18.70557035 +0000 UTC m=+1.321810744 container died 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:16:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-7756c3c29ceb3494cea1585079bc7578dba351ecabe8f913ae58132d0d3e3b39-merged.mount: Deactivated successfully.
Oct 02 12:16:18 compute-1 nova_compute[230518]: 2025-10-02 12:16:18.733 2 DEBUG oslo_concurrency.processutils [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:18 compute-1 nova_compute[230518]: 2025-10-02 12:16:18.736 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <uuid>80f9c3a4-aadc-4519-a451-8ce36d37b598</uuid>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <name>instance-00000018</name>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:name>tempest-MigrationsAdminTest-server-201463142</nova:name>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:16:17</nova:creationTime>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:flavor name="tempest-test_resize_flavor_-525834944">
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:user uuid="ac1b39d94ed94e2490ad953afb3c225f">tempest-MigrationsAdminTest-1653457839-project-member</nova:user>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <nova:project uuid="3d306048f2854052ba5317253b834aa7">tempest-MigrationsAdminTest-1653457839</nova:project>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <system>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="serial">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="uuid">80f9c3a4-aadc-4519-a451-8ce36d37b598</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </system>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <os>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </os>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <features>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </features>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk">
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/80f9c3a4-aadc-4519-a451-8ce36d37b598_disk.config">
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:16:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598/console.log" append="off"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <video>
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </video>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:16:18 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:16:18 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:16:18 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:16:18 compute-1 nova_compute[230518]: </domain>
Oct 02 12:16:18 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:16:18 compute-1 podman[242090]: 2025-10-02 12:16:18.758345209 +0000 UTC m=+1.374585583 container remove 606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jackson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:16:18 compute-1 systemd[1]: libpod-conmon-606a332e0f8ebf22213ed5f34cf413f5eb6d3eaa9794098c8fa2f3d30589d98a.scope: Deactivated successfully.
Oct 02 12:16:18 compute-1 sudo[241985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:18 compute-1 systemd-machined[188247]: New machine qemu-15-instance-00000018.
Oct 02 12:16:18 compute-1 systemd[1]: Started Virtual Machine qemu-15-instance-00000018.
Oct 02 12:16:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.648 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 80f9c3a4-aadc-4519-a451-8ce36d37b598 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.648 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407379.647571, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.648 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Resumed (Lifecycle Event)
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.650 2 DEBUG nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.653 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance running successfully.
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.654 2 DEBUG nova.virt.libvirt.driver [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.675 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.678 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.693 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.694 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407379.6485198, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.694 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Started (Lifecycle Event)
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.725 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.728 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.751 2 INFO nova.compute.manager [None req-d4edd190-1fa0-4504-a8e4-48b5622a8c32 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance to original state: 'active'
Oct 02 12:16:19 compute-1 nova_compute[230518]: 2025-10-02 12:16:19.755 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:16:19 compute-1 ceph-mon[80926]: pgmap v1111: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 672 op/s
Oct 02 12:16:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/164602721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1953203193' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:20 compute-1 nova_compute[230518]: 2025-10-02 12:16:20.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:21 compute-1 ceph-mon[80926]: pgmap v1113: 305 pgs: 305 active+clean; 614 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 6.8 MiB/s wr, 634 op/s
Oct 02 12:16:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:21 compute-1 nova_compute[230518]: 2025-10-02 12:16:21.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:21 compute-1 podman[243543]: 2025-10-02 12:16:21.816094169 +0000 UTC m=+0.058902144 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:16:21 compute-1 podman[243542]: 2025-10-02 12:16:21.852010538 +0000 UTC m=+0.094674279 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:16:21 compute-1 nova_compute[230518]: 2025-10-02 12:16:21.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:22 compute-1 ceph-mon[80926]: pgmap v1114: 305 pgs: 305 active+clean; 605 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 9.6 MiB/s rd, 4.4 MiB/s wr, 543 op/s
Oct 02 12:16:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:22 compute-1 nova_compute[230518]: 2025-10-02 12:16:22.969 2 INFO nova.virt.libvirt.driver [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deleting instance files /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164_del
Oct 02 12:16:22 compute-1 nova_compute[230518]: 2025-10-02 12:16:22.969 2 INFO nova.virt.libvirt.driver [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deletion of /var/lib/nova/instances/ce696fa7-391a-4679-a805-f85d85077164_del complete
Oct 02 12:16:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.225 2 INFO nova.compute.manager [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 7.17 seconds to destroy the instance on the hypervisor.
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG oslo.service.loopingcall [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.225 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.684 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.810 2 DEBUG nova.network.neutron [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 e166: 3 total, 3 up, 3 in
Oct 02 12:16:23 compute-1 nova_compute[230518]: 2025-10-02 12:16:23.949 2 INFO nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] Took 0.72 seconds to deallocate network for instance.
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.049 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.050 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.147 2 DEBUG oslo_concurrency.processutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3110913110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.632 2 DEBUG oslo_concurrency.processutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.638 2 DEBUG nova.compute.provider_tree [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.667 2 DEBUG nova.scheduler.client.report [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.688 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.725 2 INFO nova.scheduler.client.report [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Deleted allocations for instance ce696fa7-391a-4679-a805-f85d85077164
Oct 02 12:16:24 compute-1 ceph-mon[80926]: pgmap v1115: 305 pgs: 305 active+clean; 592 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 4.0 MiB/s wr, 510 op/s
Oct 02 12:16:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3466060320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:24 compute-1 ceph-mon[80926]: osdmap e166: 3 total, 3 up, 3 in
Oct 02 12:16:24 compute-1 nova_compute[230518]: 2025-10-02 12:16:24.799 2 DEBUG oslo_concurrency.lockutils [None req-111e408d-33fc-4a13-b518-f5682272b4e8 27279919e67c49e1a04b6eec249ecc87 a5ac6058475f4875b46ae8f3c4ff33e8 - - default default] Lock "ce696fa7-391a-4679-a805-f85d85077164" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:16:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:16:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:25.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.915 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3110913110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: pgmap v1117: 305 pgs: 305 active+clean; 542 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 141 op/s
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/183177585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:16:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:16:26 compute-1 nova_compute[230518]: 2025-10-02 12:16:26.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:26 compute-1 nova_compute[230518]: 2025-10-02 12:16:26.880 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407371.8786888, 01eee71c-078c-41f4-a1c1-4591cab7195e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:26 compute-1 nova_compute[230518]: 2025-10-02 12:16:26.881 2 INFO nova.compute.manager [-] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] VM Stopped (Lifecycle Event)
Oct 02 12:16:26 compute-1 nova_compute[230518]: 2025-10-02 12:16:26.913 2 DEBUG nova.compute.manager [None req-d88c6c71-d658-4175-82ea-4a82e087e44d - - - - - -] [instance: 01eee71c-078c-41f4-a1c1-4591cab7195e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:26 compute-1 nova_compute[230518]: 2025-10-02 12:16:26.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3372069525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:29.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:29 compute-1 ceph-mon[80926]: pgmap v1118: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 34 KiB/s wr, 163 op/s
Oct 02 12:16:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:29.998 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:30 compute-1 nova_compute[230518]: 2025-10-02 12:16:29.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:29.999 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:16:30 compute-1 ceph-mon[80926]: pgmap v1119: 305 pgs: 305 active+clean; 521 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 31 KiB/s wr, 148 op/s
Oct 02 12:16:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1667495230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:31.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:31 compute-1 nova_compute[230518]: 2025-10-02 12:16:31.474 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407376.4730577, ce696fa7-391a-4679-a805-f85d85077164 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:31 compute-1 nova_compute[230518]: 2025-10-02 12:16:31.475 2 INFO nova.compute.manager [-] [instance: ce696fa7-391a-4679-a805-f85d85077164] VM Stopped (Lifecycle Event)
Oct 02 12:16:31 compute-1 nova_compute[230518]: 2025-10-02 12:16:31.528 2 DEBUG nova.compute.manager [None req-65beeeb1-44de-4937-9134-6bd759f84758 - - - - - -] [instance: ce696fa7-391a-4679-a805-f85d85077164] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:31 compute-1 nova_compute[230518]: 2025-10-02 12:16:31.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:31 compute-1 podman[243608]: 2025-10-02 12:16:31.824892215 +0000 UTC m=+0.061403762 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 12:16:31 compute-1 nova_compute[230518]: 2025-10-02 12:16:31.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1481727496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1561813357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:33.001 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:33 compute-1 nova_compute[230518]: 2025-10-02 12:16:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:33 compute-1 nova_compute[230518]: 2025-10-02 12:16:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:16:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:33 compute-1 ceph-mon[80926]: pgmap v1120: 305 pgs: 305 active+clean; 543 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:16:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/697768296' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:16:33 compute-1 podman[243628]: 2025-10-02 12:16:33.800071497 +0000 UTC m=+0.054837646 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 12:16:34 compute-1 nova_compute[230518]: 2025-10-02 12:16:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:34 compute-1 ceph-mon[80926]: pgmap v1121: 305 pgs: 305 active+clean; 534 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 207 op/s
Oct 02 12:16:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:35.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:36 compute-1 ceph-mon[80926]: pgmap v1122: 305 pgs: 305 active+clean; 499 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 229 op/s
Oct 02 12:16:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3171481446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:36 compute-1 nova_compute[230518]: 2025-10-02 12:16:36.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:36 compute-1 nova_compute[230518]: 2025-10-02 12:16:36.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:37 compute-1 nova_compute[230518]: 2025-10-02 12:16:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:37 compute-1 nova_compute[230518]: 2025-10-02 12:16:37.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:37.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/703150610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-1 ceph-mon[80926]: pgmap v1123: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:16:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2193579931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2512597358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.317 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.317 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.318 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.318 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:16:38 compute-1 nova_compute[230518]: 2025-10-02 12:16:38.594 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:16:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.082 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.233 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.233 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.235 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.365 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.366 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.367 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/762298545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.797 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.985 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.985 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.988 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.988 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.991 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:39 compute-1 nova_compute[230518]: 2025-10-02 12:16:39.991 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.148 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4225MB free_disk=20.76431655883789GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.149 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.307 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 80f9c3a4-aadc-4519-a451-8ce36d37b598 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.319 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating tmpfile /var/lib/nova/instances/tmpgk1qnbid to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.319 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.347 2 WARNING nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a114d722-ceac-442e-8b38-c2892fda526b has been moved to another host compute-0.ctlplane.example.com(compute-0.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.347 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.347 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:16:40 compute-1 ceph-mon[80926]: pgmap v1124: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 205 op/s
Oct 02 12:16:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/762298545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.500 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:16:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:16:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031066231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.936 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:16:40 compute-1 nova_compute[230518]: 2025-10-02 12:16:40.943 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:16:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:40.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.004 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:16:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.123 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/142292739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1954817087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1031066231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.936 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.941 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:16:41 compute-1 nova_compute[230518]: 2025-10-02 12:16:41.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:42 compute-1 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:42 compute-1 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:42 compute-1 nova_compute[230518]: 2025-10-02 12:16:42.043 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:42 compute-1 sudo[243693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:16:42 compute-1 sudo[243693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:42 compute-1 sudo[243693]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:42 compute-1 sudo[243718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:16:42 compute-1 sudo[243718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:16:42 compute-1 sudo[243718]: pam_unix(sudo:session): session closed for user root
Oct 02 12:16:42 compute-1 ceph-mon[80926]: pgmap v1125: 305 pgs: 305 active+clean; 455 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 278 op/s
Oct 02 12:16:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4274575128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:16:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.396 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.467 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.470 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.471 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Creating instance directory: /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.472 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Ensure instance console log exists: /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.472 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.474 2 DEBUG nova.virt.libvirt.vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:36Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.475 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.476 2 DEBUG nova.network.os_vif_util [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.476 2 DEBUG os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap965edc3f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap965edc3f-df, col_values=(('external_ids', {'iface-id': '965edc3f-df96-430d-8b4b-4f3dbb19e9de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:7b:22', 'vm-uuid': 'a114d722-ceac-442e-8b38-c2892fda526b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:43 compute-1 NetworkManager[44960]: <info>  [1759407403.4875] manager: (tap965edc3f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.501 2 INFO os_vif [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.502 2 DEBUG nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:16:43 compute-1 nova_compute[230518]: 2025-10-02 12:16:43.503 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:16:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3362255199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:44 compute-1 ceph-mon[80926]: pgmap v1126: 305 pgs: 305 active+clean; 435 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.3 MiB/s wr, 262 op/s
Oct 02 12:16:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.423 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.425 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpgk1qnbid',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='a114d722-ceac-442e-8b38-c2892fda526b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:16:45 compute-1 kernel: tap965edc3f-df: entered promiscuous mode
Oct 02 12:16:45 compute-1 NetworkManager[44960]: <info>  [1759407405.7580] manager: (tap965edc3f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:45 compute-1 ovn_controller[129257]: 2025-10-02T12:16:45Z|00114|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this additional chassis.
Oct 02 12:16:45 compute-1 ovn_controller[129257]: 2025-10-02T12:16:45Z|00115|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:16:45 compute-1 ovn_controller[129257]: 2025-10-02T12:16:45Z|00116|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this additional chassis.
Oct 02 12:16:45 compute-1 ovn_controller[129257]: 2025-10-02T12:16:45Z|00117|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:45 compute-1 systemd-udevd[243757]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:16:45 compute-1 systemd-machined[188247]: New machine qemu-16-instance-0000001d.
Oct 02 12:16:45 compute-1 NetworkManager[44960]: <info>  [1759407405.8074] device (tap965edc3f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:16:45 compute-1 NetworkManager[44960]: <info>  [1759407405.8085] device (tap965edc3f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:16:45 compute-1 systemd[1]: Started Virtual Machine qemu-16-instance-0000001d.
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:45 compute-1 ovn_controller[129257]: 2025-10-02T12:16:45Z|00118|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de ovn-installed in OVS
Oct 02 12:16:45 compute-1 nova_compute[230518]: 2025-10-02 12:16:45.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:46 compute-1 ceph-mon[80926]: pgmap v1127: 305 pgs: 305 active+clean; 403 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 49 KiB/s wr, 249 op/s
Oct 02 12:16:46 compute-1 nova_compute[230518]: 2025-10-02 12:16:46.780 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407406.7805657, a114d722-ceac-442e-8b38-c2892fda526b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:46 compute-1 nova_compute[230518]: 2025-10-02 12:16:46.781 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Started (Lifecycle Event)
Oct 02 12:16:46 compute-1 nova_compute[230518]: 2025-10-02 12:16:46.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:46 compute-1 nova_compute[230518]: 2025-10-02 12:16:46.844 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:47.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:47 compute-1 nova_compute[230518]: 2025-10-02 12:16:47.426 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407407.4256682, a114d722-ceac-442e-8b38-c2892fda526b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:16:47 compute-1 nova_compute[230518]: 2025-10-02 12:16:47.427 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Resumed (Lifecycle Event)
Oct 02 12:16:47 compute-1 nova_compute[230518]: 2025-10-02 12:16:47.472 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:47 compute-1 nova_compute[230518]: 2025-10-02 12:16:47.475 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:16:47 compute-1 nova_compute[230518]: 2025-10-02 12:16:47.520 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:16:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:48 compute-1 ceph-mon[80926]: pgmap v1128: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 19 KiB/s wr, 223 op/s
Oct 02 12:16:48 compute-1 nova_compute[230518]: 2025-10-02 12:16:48.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:16:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:49.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:16:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00119|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this chassis.
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00120|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00121|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this chassis.
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00122|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00123|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de up in Southbound
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00124|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 up in Southbound
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.499 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.501 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.503 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.504 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[edb67efb-b434-42c4-9ddc-31bda1f951b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.520 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5989958f-c1 in ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.522 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5989958f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dff152b-d22c-430f-9dbc-b4b87d0e6e12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.523 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[334fe021-f8b8-4696-9fc2-1550c4d01e57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.535 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2d534a-abd9-4cfe-98d6-69d39a60a2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.549 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e197cf1f-bab2-49a4-8240-77ed9841c04a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.578 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb50c6-2ba5-4308-97f9-18f4bd0c4080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 NetworkManager[44960]: <info>  [1759407409.5849] manager: (tap5989958f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[195909ab-924a-4bd3-867e-980de2631990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 systemd-udevd[243816]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.617 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3e0798-f389-4e46-b49b-a1178418d34d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.620 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6237ffcf-c01b-406c-8eeb-7e3002fea4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 NetworkManager[44960]: <info>  [1759407409.6456] device (tap5989958f-c0): carrier: link connected
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.651 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[70f0cd3e-9b9a-4972-8c13-34a634a108ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a289250-e8cf-4cf2-9332-12447dfc0335]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243835, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.682 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0468def5-bb0f-45bb-8a6e-03deaf142601]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:d212'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527320, 'tstamp': 527320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243836, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f521c02-d0a3-4465-9785-1f72c794df60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243837, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e57e5815-fc3f-49d6-832c-6b371ae529b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.783 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5b372f-c909-4c7f-8fce-02d3211277fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.786 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 NetworkManager[44960]: <info>  [1759407409.7889] manager: (tap5989958f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 02 12:16:49 compute-1 kernel: tap5989958f-c0: entered promiscuous mode
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.791 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 ovn_controller[129257]: 2025-10-02T12:16:49Z|00125|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.796 2 INFO nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Post operation of migration started
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 nova_compute[230518]: 2025-10-02 12:16:49.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.811 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.812 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a62b4fd3-e638-49a0-b38c-552c1a698ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.813 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/5989958f-ccbb-4db4-8dcb-18563aa2418e.pid.haproxy
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:16:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:49.814 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'env', 'PROCESS_TAG=haproxy-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5989958f-ccbb-4db4-8dcb-18563aa2418e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:16:50 compute-1 podman[243869]: 2025-10-02 12:16:50.205415992 +0000 UTC m=+0.065479471 container create 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:16:50 compute-1 podman[243869]: 2025-10-02 12:16:50.168337695 +0000 UTC m=+0.028401224 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:16:50 compute-1 systemd[1]: Started libpod-conmon-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope.
Oct 02 12:16:50 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:16:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8333f9bed9a3ff266e399a2d551e934a36972a2ab5ffe8727ce87e882998249b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:50 compute-1 podman[243869]: 2025-10-02 12:16:50.303506346 +0000 UTC m=+0.163569875 container init 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:16:50 compute-1 podman[243869]: 2025-10-02 12:16:50.310380693 +0000 UTC m=+0.170444162 container start 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:16:50 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : New worker (243890) forked
Oct 02 12:16:50 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : Loading success.
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.382 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.384 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.395 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc031e3-0ffd-46d3-aca0-b91a1cf52a10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.396 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdc4336bf-61 in ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.398 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdc4336bf-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.398 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fc4403-1902-4743-ac88-c3c507fb8e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.399 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7571130f-3302-4783-9154-dd9d59956ebc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.410 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[80bc64f1-6e60-4de0-8880-af4af469a91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.420 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.437 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46b998b3-9d78-4121-b872-3605ec9d2288]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.471 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[39b2c249-2616-43be-893b-aafc3c8730a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.478 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68ebd278-c0b9-4cae-bc66-976eeaa01cff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 NetworkManager[44960]: <info>  [1759407410.4796] manager: (tapdc4336bf-60): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.517 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d686ec85-2007-4ba9-8e91-f1103059d386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[646c58b7-c7b7-4cf8-9656-876422ec65a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 NetworkManager[44960]: <info>  [1759407410.5436] device (tapdc4336bf-60): carrier: link connected
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.549 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[373e0b12-35b7-4bf9-b402-0407ac99d9a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88a947e5-aed6-42b5-b6d8-a7071dc7b19e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527410, 'reachable_time': 31983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243909, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8c6628-4575-4e53-86a6-cf9b24369daa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:708b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527410, 'tstamp': 527410}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243910, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.594 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aec9f9ce-5354-42ea-8f02-dd676a631d79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdc4336bf-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:70:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527410, 'reachable_time': 31983, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243911, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[78fecf45-54ad-450b-9bf9-dbe554eb9fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.671 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab90f616-3462-47dd-ae65-6d444ce6badb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc4336bf-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:50 compute-1 kernel: tapdc4336bf-60: entered promiscuous mode
Oct 02 12:16:50 compute-1 NetworkManager[44960]: <info>  [1759407410.6772] manager: (tapdc4336bf-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.681 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdc4336bf-60, col_values=(('external_ids', {'iface-id': 'c67f345b-5542-4cd7-a60b-7617c8d1414e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:50 compute-1 ovn_controller[129257]: 2025-10-02T12:16:50Z|00126|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.684 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.685 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83d6d6ff-1d62-4d18-b3af-b0984aff2d12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.686 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.pid.haproxy
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID dc4336bf-639d-45a4-88f2-32f0af1b9dbe
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:16:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:16:50.687 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'env', 'PROCESS_TAG=haproxy-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dc4336bf-639d-45a4-88f2-32f0af1b9dbe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:16:50 compute-1 ceph-mon[80926]: pgmap v1129: 305 pgs: 305 active+clean; 378 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 15 KiB/s wr, 132 op/s
Oct 02 12:16:50 compute-1 nova_compute[230518]: 2025-10-02 12:16:50.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:51 compute-1 podman[243943]: 2025-10-02 12:16:51.0536642 +0000 UTC m=+0.048741254 container create 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:16:51 compute-1 systemd[1]: Started libpod-conmon-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope.
Oct 02 12:16:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:16:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ccfcc3d3923f9105c6b44d482785ea52e49c37aea974d395f37f06c9024bca9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:16:51 compute-1 podman[243943]: 2025-10-02 12:16:51.026747713 +0000 UTC m=+0.021824767 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:16:51 compute-1 podman[243943]: 2025-10-02 12:16:51.125242231 +0000 UTC m=+0.120319295 container init 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:16:51 compute-1 podman[243943]: 2025-10-02 12:16:51.130160986 +0000 UTC m=+0.125238040 container start 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:16:51 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : New worker (243964) forked
Oct 02 12:16:51 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : Loading success.
Oct 02 12:16:51 compute-1 nova_compute[230518]: 2025-10-02 12:16:51.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:52 compute-1 ceph-mon[80926]: pgmap v1130: 305 pgs: 305 active+clean; 403 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.329 2 DEBUG nova.network.neutron [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [{"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.351 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-a114d722-ceac-442e-8b38-c2892fda526b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.378 2 DEBUG oslo_concurrency.lockutils [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:16:52 compute-1 nova_compute[230518]: 2025-10-02 12:16:52.382 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:16:52 compute-1 virtqemud[230067]: Domain id=16 name='instance-0000001d' uuid=a114d722-ceac-442e-8b38-c2892fda526b is tainted: custom-monitor
Oct 02 12:16:52 compute-1 ovn_controller[129257]: 2025-10-02T12:16:52Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:16:52 compute-1 ovn_controller[129257]: 2025-10-02T12:16:52Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:16:52 compute-1 podman[243974]: 2025-10-02 12:16:52.802195274 +0000 UTC m=+0.052209223 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:16:52 compute-1 podman[243973]: 2025-10-02 12:16:52.83419587 +0000 UTC m=+0.084558160 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:16:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:53.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:53 compute-1 nova_compute[230518]: 2025-10-02 12:16:53.389 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:16:53 compute-1 nova_compute[230518]: 2025-10-02 12:16:53.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:54 compute-1 ceph-mon[80926]: pgmap v1131: 305 pgs: 305 active+clean; 420 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 975 KiB/s rd, 4.0 MiB/s wr, 115 op/s
Oct 02 12:16:54 compute-1 nova_compute[230518]: 2025-10-02 12:16:54.395 2 INFO nova.virt.libvirt.driver [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:16:54 compute-1 nova_compute[230518]: 2025-10-02 12:16:54.399 2 DEBUG nova.compute.manager [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:16:54 compute-1 nova_compute[230518]: 2025-10-02 12:16:54.420 2 DEBUG nova.objects.instance [None req-dbc051ba-7e7a-4ddf-97f2-2c22897e0a64 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:16:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:55.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:56 compute-1 ceph-mon[80926]: pgmap v1132: 305 pgs: 305 active+clean; 429 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.1 MiB/s wr, 114 op/s
Oct 02 12:16:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2905352423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:56 compute-1 nova_compute[230518]: 2025-10-02 12:16:56.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:16:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:57.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:16:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3680354693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:16:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:16:58 compute-1 nova_compute[230518]: 2025-10-02 12:16:58.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:16:58 compute-1 ceph-mon[80926]: pgmap v1133: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 571 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Oct 02 12:16:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:16:59.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:16:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:16:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:16:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:16:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 e167: 3 total, 3 up, 3 in
Oct 02 12:17:00 compute-1 ceph-mon[80926]: pgmap v1134: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Oct 02 12:17:00 compute-1 ceph-mon[80926]: osdmap e167: 3 total, 3 up, 3 in
Oct 02 12:17:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4085572415' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:01.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:01 compute-1 nova_compute[230518]: 2025-10-02 12:17:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:02 compute-1 ceph-mon[80926]: pgmap v1136: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 607 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 12:17:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3244676548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:02 compute-1 podman[244016]: 2025-10-02 12:17:02.811105004 +0000 UTC m=+0.062156816 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:17:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:03.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:03.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:03 compute-1 nova_compute[230518]: 2025-10-02 12:17:03.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:04 compute-1 ceph-mon[80926]: pgmap v1137: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 332 KiB/s wr, 94 op/s
Oct 02 12:17:04 compute-1 podman[244036]: 2025-10-02 12:17:04.795847976 +0000 UTC m=+0.054435244 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 12:17:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:05.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:17:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3342925200' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:17:06 compute-1 ceph-mon[80926]: pgmap v1138: 305 pgs: 305 active+clean; 443 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 217 KiB/s wr, 101 op/s
Oct 02 12:17:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3288555429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:06 compute-1 nova_compute[230518]: 2025-10-02 12:17:06.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:07.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:07 compute-1 ceph-mon[80926]: pgmap v1139: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:08 compute-1 nova_compute[230518]: 2025-10-02 12:17:08.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:09.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:10 compute-1 ceph-mon[80926]: pgmap v1140: 305 pgs: 305 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 113 op/s
Oct 02 12:17:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 e168: 3 total, 3 up, 3 in
Oct 02 12:17:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:11.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:11 compute-1 ceph-mon[80926]: osdmap e168: 3 total, 3 up, 3 in
Oct 02 12:17:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3914838924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:11 compute-1 nova_compute[230518]: 2025-10-02 12:17:11.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:12 compute-1 ceph-mon[80926]: pgmap v1142: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 443 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 18 KiB/s wr, 130 op/s
Oct 02 12:17:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4003313724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:13.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:13.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:13 compute-1 nova_compute[230518]: 2025-10-02 12:17:13.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:13 compute-1 ceph-mon[80926]: pgmap v1143: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 451 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 618 KiB/s wr, 129 op/s
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 02 12:17:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 02 12:17:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:15.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 02 12:17:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:15.777 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:15.778 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:17:15 compute-1 nova_compute[230518]: 2025-10-02 12:17:15.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:16 compute-1 ceph-mon[80926]: pgmap v1144: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 459 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.1 MiB/s wr, 117 op/s
Oct 02 12:17:16 compute-1 nova_compute[230518]: 2025-10-02 12:17:16.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:17:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:17.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:17:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:17.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2874766076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:18 compute-1 ceph-mon[80926]: pgmap v1145: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4071389943' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2879857347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:18 compute-1 nova_compute[230518]: 2025-10-02 12:17:18.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 e169: 3 total, 3 up, 3 in
Oct 02 12:17:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.212 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.213 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.213 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.214 2 INFO nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Terminating instance
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.214 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.214 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.215 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.496 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.765 2 DEBUG nova.network.neutron [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.787 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-80f9c3a4-aadc-4519-a451-8ce36d37b598" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:19 compute-1 nova_compute[230518]: 2025-10-02 12:17:19.788 2 DEBUG nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:19 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct 02 12:17:19 compute-1 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000018.scope: Consumed 14.504s CPU time.
Oct 02 12:17:19 compute-1 systemd-machined[188247]: Machine qemu-15-instance-00000018 terminated.
Oct 02 12:17:20 compute-1 nova_compute[230518]: 2025-10-02 12:17:20.007 2 INFO nova.virt.libvirt.driver [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance destroyed successfully.
Oct 02 12:17:20 compute-1 nova_compute[230518]: 2025-10-02 12:17:20.008 2 DEBUG nova.objects.instance [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid 80f9c3a4-aadc-4519-a451-8ce36d37b598 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:20 compute-1 ceph-mon[80926]: pgmap v1146: 305 pgs: 305 active+clean; 470 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Oct 02 12:17:20 compute-1 ceph-mon[80926]: osdmap e169: 3 total, 3 up, 3 in
Oct 02 12:17:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:21.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.128 2 INFO nova.virt.libvirt.driver [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deleting instance files /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598_del
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.129 2 INFO nova.virt.libvirt.driver [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deletion of /var/lib/nova/instances/80f9c3a4-aadc-4519-a451-8ce36d37b598_del complete
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.177 2 INFO nova.compute.manager [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 1.39 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG oslo.service.loopingcall [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.177 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.298 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.318 2 DEBUG nova.network.neutron [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.334 2 INFO nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Took 0.16 seconds to deallocate network for instance.
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.394 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.395 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.494 2 DEBUG oslo_concurrency.processutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3296994374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.941 2 DEBUG oslo_concurrency.processutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.946 2 DEBUG nova.compute.provider_tree [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:21 compute-1 nova_compute[230518]: 2025-10-02 12:17:21.971 2 DEBUG nova.scheduler.client.report [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:22 compute-1 nova_compute[230518]: 2025-10-02 12:17:22.010 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:22 compute-1 nova_compute[230518]: 2025-10-02 12:17:22.065 2 INFO nova.scheduler.client.report [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance 80f9c3a4-aadc-4519-a451-8ce36d37b598
Oct 02 12:17:22 compute-1 nova_compute[230518]: 2025-10-02 12:17:22.145 2 DEBUG oslo_concurrency.lockutils [None req-49e57221-6886-4f60-8040-60de46690290 ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "80f9c3a4-aadc-4519-a451-8ce36d37b598" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:22 compute-1 ceph-mon[80926]: pgmap v1148: 305 pgs: 305 active+clean; 410 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Oct 02 12:17:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1496903658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3296994374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:23 compute-1 nova_compute[230518]: 2025-10-02 12:17:23.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:23.780 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:23 compute-1 podman[244103]: 2025-10-02 12:17:23.81816071 +0000 UTC m=+0.059957557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:17:23 compute-1 podman[244102]: 2025-10-02 12:17:23.845059445 +0000 UTC m=+0.088950008 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:17:24 compute-1 ceph-mon[80926]: pgmap v1149: 305 pgs: 305 active+clean; 396 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 304 op/s
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.040 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.041 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.042 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.043 2 INFO nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Terminating instance
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.044 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.044 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.045 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:25.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.234 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.659 2 DEBUG nova.network.neutron [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.724 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:25 compute-1 nova_compute[230518]: 2025-10-02 12:17:25.725 2 DEBUG nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:25 compute-1 ceph-mon[80926]: pgmap v1150: 305 pgs: 305 active+clean; 346 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 283 op/s
Oct 02 12:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.916 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:26 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct 02 12:17:26 compute-1 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000015.scope: Consumed 16.521s CPU time.
Oct 02 12:17:26 compute-1 systemd-machined[188247]: Machine qemu-12-instance-00000015 terminated.
Oct 02 12:17:26 compute-1 nova_compute[230518]: 2025-10-02 12:17:26.143 2 INFO nova.virt.libvirt.driver [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance destroyed successfully.
Oct 02 12:17:26 compute-1 nova_compute[230518]: 2025-10-02 12:17:26.143 2 DEBUG nova.objects.instance [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid bb6a3b63-8cda-41b6-ac43-6f9d310fad2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:26 compute-1 nova_compute[230518]: 2025-10-02 12:17:26.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:27.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1489528740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:27.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:28 compute-1 nova_compute[230518]: 2025-10-02 12:17:28.316 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating tmpfile /var/lib/nova/instances/tmpyzl_yqg1 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Oct 02 12:17:28 compute-1 nova_compute[230518]: 2025-10-02 12:17:28.317 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Oct 02 12:17:28 compute-1 nova_compute[230518]: 2025-10-02 12:17:28.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:28 compute-1 ceph-mon[80926]: pgmap v1151: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:29.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:29 compute-1 nova_compute[230518]: 2025-10-02 12:17:29.745 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Oct 02 12:17:29 compute-1 nova_compute[230518]: 2025-10-02 12:17:29.889 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:29 compute-1 nova_compute[230518]: 2025-10-02 12:17:29.889 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:29 compute-1 nova_compute[230518]: 2025-10-02 12:17:29.890 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.642 2 INFO nova.virt.libvirt.driver [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deleting instance files /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_del
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.642 2 INFO nova.virt.libvirt.driver [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deletion of /var/lib/nova/instances/bb6a3b63-8cda-41b6-ac43-6f9d310fad2a_del complete
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.802 2 INFO nova.compute.manager [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 5.08 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.803 2 DEBUG oslo.service.loopingcall [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.804 2 DEBUG nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.804 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:30 compute-1 ceph-mon[80926]: pgmap v1152: 305 pgs: 305 active+clean; 329 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 19 KiB/s wr, 287 op/s
Oct 02 12:17:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2145820866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:30 compute-1 nova_compute[230518]: 2025-10-02 12:17:30.942 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.007 2 DEBUG nova.network.neutron [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.072 2 INFO nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Took 0.27 seconds to deallocate network for instance.
Oct 02 12:17:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.100 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:31.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.150 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.151 2 DEBUG os_brick.utils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.152 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.158 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.159 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.162 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.163 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a82ea0ad-5f84-4e8b-9444-3a5814d1f74d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.164 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.170 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.170 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a044d085-39ae-4e01-9682-a0a8b1eef269]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.171 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.180 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.181 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[94ba7346-3515-46fc-a25e-79af8f0aba15]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.182 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1452b0e4-223c-472c-acc1-19bbd3243669]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.183 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.206 2 DEBUG oslo_concurrency.processutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.212 2 DEBUG os_brick.utils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.332 2 DEBUG oslo_concurrency.processutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3277705734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.752 2 DEBUG oslo_concurrency.processutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.759 2 DEBUG nova.compute.provider_tree [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.794 2 DEBUG nova.scheduler.client.report [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.827 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.879 2 INFO nova.scheduler.client.report [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance bb6a3b63-8cda-41b6-ac43-6f9d310fad2a
Oct 02 12:17:31 compute-1 nova_compute[230518]: 2025-10-02 12:17:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:32 compute-1 nova_compute[230518]: 2025-10-02 12:17:32.008 2 DEBUG oslo_concurrency.lockutils [None req-ad77fe7d-e653-4720-b24c-09c3804ee23a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "bb6a3b63-8cda-41b6-ac43-6f9d310fad2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:32 compute-1 ceph-mon[80926]: pgmap v1153: 305 pgs: 305 active+clean; 285 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 501 KiB/s wr, 321 op/s
Oct 02 12:17:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3038992732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3277705734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:33.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1769959602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.229 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.230 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Creating instance directory: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.231 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Ensure instance console log exists: /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.231 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.234 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.236 2 DEBUG nova.virt.libvirt.vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:24Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.236 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.237 2 DEBUG nova.network.os_vif_util [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.238 2 DEBUG os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7539c03e-c9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7539c03e-c9, col_values=(('external_ids', {'iface-id': '7539c03e-c932-4473-8d75-729cbed6008a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:5e:ba', 'vm-uuid': 'ecee1ec0-1a8d-4d67-b996-205a942120ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-1 NetworkManager[44960]: <info>  [1759407453.2504] manager: (tap7539c03e-c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.258 2 INFO os_vif [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.261 2 DEBUG nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.261 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.419 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.420 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.421 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.422 2 INFO nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Terminating instance
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.422 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.423 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquired lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.423 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.602 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:33 compute-1 podman[244198]: 2025-10-02 12:17:33.804208495 +0000 UTC m=+0.055006861 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.865 2 DEBUG nova.network.neutron [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.903 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Releasing lock "refresh_cache-fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:33 compute-1 nova_compute[230518]: 2025-10-02 12:17:33.903 2 DEBUG nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:17:34 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct 02 12:17:34 compute-1 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000012.scope: Consumed 19.691s CPU time.
Oct 02 12:17:34 compute-1 systemd-machined[188247]: Machine qemu-9-instance-00000012 terminated.
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.052592) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454052632, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2491, "num_deletes": 265, "total_data_size": 5694580, "memory_usage": 5789488, "flush_reason": "Manual Compaction"}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454077190, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3692782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25376, "largest_seqno": 27862, "table_properties": {"data_size": 3682594, "index_size": 6426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21956, "raw_average_key_size": 20, "raw_value_size": 3661730, "raw_average_value_size": 3438, "num_data_blocks": 280, "num_entries": 1065, "num_filter_entries": 1065, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407277, "oldest_key_time": 1759407277, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 24641 microseconds, and 9456 cpu microseconds.
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.077233) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3692782 bytes OK
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.077251) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079338) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079383) EVENT_LOG_v1 {"time_micros": 1759407454079375, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.079405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 5683262, prev total WAL file size 5683262, number of live WAL files 2.
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.080654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353031' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3606KB)], [51(8842KB)]
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454080704, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 12747693, "oldest_snapshot_seqno": -1}
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.123 2 INFO nova.virt.libvirt.driver [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance destroyed successfully.
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.124 2 DEBUG nova.objects.instance [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lazy-loading 'resources' on Instance uuid fcfe251b-73c3-4310-b646-3c6c0a8c7e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5400 keys, 12630939 bytes, temperature: kUnknown
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454142641, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 12630939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12590091, "index_size": 26274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 135692, "raw_average_key_size": 25, "raw_value_size": 12488125, "raw_average_value_size": 2312, "num_data_blocks": 1084, "num_entries": 5400, "num_filter_entries": 5400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.142915) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12630939 bytes
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.144190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.4 rd, 203.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 5945, records dropped: 545 output_compression: NoCompression
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.144207) EVENT_LOG_v1 {"time_micros": 1759407454144200, "job": 30, "event": "compaction_finished", "compaction_time_micros": 62068, "compaction_time_cpu_micros": 31608, "output_level": 6, "num_output_files": 1, "total_output_size": 12630939, "num_input_records": 5945, "num_output_records": 5400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454144903, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407454146443, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.080590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:34.146478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.318 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Port 7539c03e-c932-4473-8d75-729cbed6008a updated with migration profile {'migrating_to': 'compute-1.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.682 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpyzl_yqg1',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='6a6d4750-7860-4bbf-bba3-50eb20d823fd'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Oct 02 12:17:34 compute-1 ceph-mon[80926]: pgmap v1154: 305 pgs: 305 active+clean; 294 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 243 op/s
Oct 02 12:17:34 compute-1 kernel: tap7539c03e-c9: entered promiscuous mode
Oct 02 12:17:34 compute-1 ovn_controller[129257]: 2025-10-02T12:17:34Z|00127|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this additional chassis.
Oct 02 12:17:34 compute-1 ovn_controller[129257]: 2025-10-02T12:17:34Z|00128|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct 02 12:17:34 compute-1 NetworkManager[44960]: <info>  [1759407454.8993] manager: (tap7539c03e-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:34 compute-1 systemd-udevd[244220]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:17:34 compute-1 NetworkManager[44960]: <info>  [1759407454.9141] device (tap7539c03e-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:17:34 compute-1 NetworkManager[44960]: <info>  [1759407454.9149] device (tap7539c03e-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:17:34 compute-1 ovn_controller[129257]: 2025-10-02T12:17:34Z|00129|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a ovn-installed in OVS
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:34 compute-1 nova_compute[230518]: 2025-10-02 12:17:34.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:34 compute-1 systemd-machined[188247]: New machine qemu-17-instance-0000001f.
Oct 02 12:17:34 compute-1 systemd[1]: Started Virtual Machine qemu-17-instance-0000001f.
Oct 02 12:17:34 compute-1 podman[244242]: 2025-10-02 12:17:34.947962699 +0000 UTC m=+0.075593228 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:17:35 compute-1 nova_compute[230518]: 2025-10-02 12:17:35.007 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407440.0053596, 80f9c3a4-aadc-4519-a451-8ce36d37b598 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:35 compute-1 nova_compute[230518]: 2025-10-02 12:17:35.007 2 INFO nova.compute.manager [-] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] VM Stopped (Lifecycle Event)
Oct 02 12:17:35 compute-1 nova_compute[230518]: 2025-10-02 12:17:35.026 2 DEBUG nova.compute.manager [None req-bb86c0aa-08d7-4464-a05b-b3a66c7090a3 - - - - - -] [instance: 80f9c3a4-aadc-4519-a451-8ce36d37b598] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:35.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.132 2 INFO nova.virt.libvirt.driver [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deleting instance files /var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_del
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.134 2 INFO nova.virt.libvirt.driver [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deletion of /var/lib/nova/instances/fcfe251b-73c3-4310-b646-3c6c0a8c7e6e_del complete
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.182 2 INFO nova.compute.manager [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Took 2.28 seconds to destroy the instance on the hypervisor.
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG oslo.service.loopingcall [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.183 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.348 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407456.348626, ecee1ec0-1a8d-4d67-b996-205a942120ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.349 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Started (Lifecycle Event)
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.367 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.424 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.438 2 DEBUG nova.network.neutron [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.449 2 INFO nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Took 0.27 seconds to deallocate network for instance.
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.491 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.491 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.577 2 DEBUG oslo_concurrency.processutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.773 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407456.7732575, ecee1ec0-1a8d-4d67-b996-205a942120ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.774 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Resumed (Lifecycle Event)
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.829 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.834 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:17:36 compute-1 ceph-mon[80926]: pgmap v1155: 305 pgs: 305 active+clean; 294 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.863 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:17:36 compute-1 nova_compute[230518]: 2025-10-02 12:17:36.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2269807714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.087 2 DEBUG oslo_concurrency.processutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.092 2 DEBUG nova.compute.provider_tree [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.115 2 DEBUG nova.scheduler.client.report [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.149 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.204 2 INFO nova.scheduler.client.report [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Deleted allocations for instance fcfe251b-73c3-4310-b646-3c6c0a8c7e6e
Oct 02 12:17:37 compute-1 nova_compute[230518]: 2025-10-02 12:17:37.300 2 DEBUG oslo_concurrency.lockutils [None req-4ed8b773-82c0-40e4-80bd-6b85cac9995a ac1b39d94ed94e2490ad953afb3c225f 3d306048f2854052ba5317253b834aa7 - - default default] Lock "fcfe251b-73c3-4310-b646-3c6c0a8c7e6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:37 compute-1 ovn_controller[129257]: 2025-10-02T12:17:37Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:5e:ba 10.100.0.9
Oct 02 12:17:37 compute-1 ovn_controller[129257]: 2025-10-02T12:17:37Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:5e:ba 10.100.0.9
Oct 02 12:17:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.120 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.120 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:38 compute-1 ceph-mon[80926]: pgmap v1156: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 MiB/s wr, 277 op/s
Oct 02 12:17:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2269807714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3581511965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.344017) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458344048, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 299, "num_deletes": 251, "total_data_size": 124753, "memory_usage": 130960, "flush_reason": "Manual Compaction"}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458347839, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 81647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27867, "largest_seqno": 28161, "table_properties": {"data_size": 79720, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5039, "raw_average_key_size": 18, "raw_value_size": 75924, "raw_average_value_size": 277, "num_data_blocks": 7, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407454, "oldest_key_time": 1759407454, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 3865 microseconds, and 737 cpu microseconds.
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.347883) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 81647 bytes OK
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.347895) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349070) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349081) EVENT_LOG_v1 {"time_micros": 1759407458349078, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 122553, prev total WAL file size 122553, number of live WAL files 2.
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349363) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(79KB)], [54(12MB)]
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458349396, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 12712586, "oldest_snapshot_seqno": -1}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 5165 keys, 10806373 bytes, temperature: kUnknown
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458401177, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 10806373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10768667, "index_size": 23708, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 131512, "raw_average_key_size": 25, "raw_value_size": 10672338, "raw_average_value_size": 2066, "num_data_blocks": 968, "num_entries": 5165, "num_filter_entries": 5165, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407458, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.401412) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10806373 bytes
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.402548) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.2 rd, 208.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(288.1) write-amplify(132.4) OK, records in: 5674, records dropped: 509 output_compression: NoCompression
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.402564) EVENT_LOG_v1 {"time_micros": 1759407458402556, "job": 32, "event": "compaction_finished", "compaction_time_micros": 51854, "compaction_time_cpu_micros": 22066, "output_level": 6, "num_output_files": 1, "total_output_size": 10806373, "num_input_records": 5674, "num_output_records": 5165, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458402663, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407458404543, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.349296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:17:38.404672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:17:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1591436883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.680 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.681 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.687 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:17:38 compute-1 ovn_controller[129257]: 2025-10-02T12:17:38Z|00130|binding|INFO|Claiming lport 7539c03e-c932-4473-8d75-729cbed6008a for this chassis.
Oct 02 12:17:38 compute-1 ovn_controller[129257]: 2025-10-02T12:17:38Z|00131|binding|INFO|7539c03e-c932-4473-8d75-729cbed6008a: Claiming fa:16:3e:0e:5e:ba 10.100.0.9
Oct 02 12:17:38 compute-1 ovn_controller[129257]: 2025-10-02T12:17:38Z|00132|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a up in Southbound
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.699 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.701 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e bound to our chassis
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.703 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.719 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[16b90fff-d261-475b-a461-464777dbc4d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.750 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2903ffff-a59b-496d-8756-b9c23dea9bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.753 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4db940bc-cc91-4347-aa1b-6df38ff7923f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.783 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[01c6e657-5199-471d-95ac-77b1ff084260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.801 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1d5777-70ef-4b05-b0cc-f39722850c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244375, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b2ab2c-dc3e-4a25-bdc9-f0d2d2c8191c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527331, 'tstamp': 527331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244376, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527333, 'tstamp': 527333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244376, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:38.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.875 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4374MB free_disk=20.893749237060547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.876 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.959 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration for instance ecee1ec0-1a8d-4d67-b996-205a942120ae refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.999 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating resource usage from migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b
Oct 02 12:17:38 compute-1 nova_compute[230518]: 2025-10-02 12:17:38.999 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting to track incoming migration c51f2e80-99c7-4848-8e64-214b4d6d2c8b with flavor 99c52872-4e37-4be3-86cc-757b8f375aa8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.042 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a114d722-ceac-442e-8b38-c2892fda526b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.067 2 WARNING nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ecee1ec0-1a8d-4d67-b996-205a942120ae has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.071 2 INFO nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Post operation of migration started
Oct 02 12:17:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.132 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:17:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1347014307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1591436883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.464 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.465 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.466 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:17:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:17:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3073071398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.612 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.618 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.639 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:17:39 compute-1 nova_compute[230518]: 2025-10-02 12:17:39.664 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:40 compute-1 ceph-mon[80926]: pgmap v1157: 305 pgs: 305 active+clean; 296 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 202 op/s
Oct 02 12:17:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3073071398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.636 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.637 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.670 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.671 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:40 compute-1 nova_compute[230518]: 2025-10-02 12:17:40.672 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.031 2 DEBUG nova.network.neutron [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.056 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.070 2 DEBUG oslo_concurrency.lockutils [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.073 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Oct 02 12:17:41 compute-1 virtqemud[230067]: Domain id=17 name='instance-0000001f' uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae is tainted: custom-monitor
Oct 02 12:17:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:41.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407446.1418507, bb6a3b63-8cda-41b6-ac43-6f9d310fad2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.143 2 INFO nova.compute.manager [-] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] VM Stopped (Lifecycle Event)
Oct 02 12:17:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:41.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.159 2 DEBUG nova.compute.manager [None req-44e67298-ad60-499d-8720-3702e30495ce - - - - - -] [instance: bb6a3b63-8cda-41b6-ac43-6f9d310fad2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:41 compute-1 nova_compute[230518]: 2025-10-02 12:17:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:42 compute-1 nova_compute[230518]: 2025-10-02 12:17:42.079 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Oct 02 12:17:42 compute-1 sudo[244399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:42 compute-1 sudo[244399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-1 sudo[244399]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-1 ceph-mon[80926]: pgmap v1158: 305 pgs: 305 active+clean; 241 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 268 op/s
Oct 02 12:17:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3291252393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:42 compute-1 sudo[244424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:17:42 compute-1 sudo[244424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-1 sudo[244424]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-1 sudo[244449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:42 compute-1 sudo[244449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-1 sudo[244449]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:42 compute-1 sudo[244474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:17:42 compute-1 sudo[244474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:43 compute-1 nova_compute[230518]: 2025-10-02 12:17:43.085 2 INFO nova.virt.libvirt.driver [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Oct 02 12:17:43 compute-1 nova_compute[230518]: 2025-10-02 12:17:43.089 2 DEBUG nova.compute.manager [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:43 compute-1 nova_compute[230518]: 2025-10-02 12:17:43.114 2 DEBUG nova.objects.instance [None req-3517246e-de40-4519-85cb-6ba718b06b56 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:17:43 compute-1 sudo[244474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:43.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:43 compute-1 nova_compute[230518]: 2025-10-02 12:17:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/292213779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:44 compute-1 ceph-mon[80926]: pgmap v1159: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 12:17:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2688491066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:45.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1617248047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:46 compute-1 nova_compute[230518]: 2025-10-02 12:17:46.631 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Check if temp file /var/lib/nova/instances/tmp10f21u5k exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:17:46 compute-1 nova_compute[230518]: 2025-10-02 12:17:46.631 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:17:46 compute-1 ceph-mon[80926]: pgmap v1160: 305 pgs: 305 active+clean; 246 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 12:17:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:46 compute-1 nova_compute[230518]: 2025-10-02 12:17:46.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:47.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:17:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:48 compute-1 nova_compute[230518]: 2025-10-02 12:17:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:48 compute-1 ceph-mon[80926]: pgmap v1161: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 177 op/s
Oct 02 12:17:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:49 compute-1 nova_compute[230518]: 2025-10-02 12:17:49.122 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407454.1208904, fcfe251b-73c3-4310-b646-3c6c0a8c7e6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:49 compute-1 nova_compute[230518]: 2025-10-02 12:17:49.122 2 INFO nova.compute.manager [-] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] VM Stopped (Lifecycle Event)
Oct 02 12:17:49 compute-1 nova_compute[230518]: 2025-10-02 12:17:49.144 2 DEBUG nova.compute.manager [None req-8a2b226e-ccaf-4f7b-bdd3-ff8a1bfa3cf1 - - - - - -] [instance: fcfe251b-73c3-4310-b646-3c6c0a8c7e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:49.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:50 compute-1 ceph-mon[80926]: pgmap v1162: 305 pgs: 305 active+clean; 250 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 739 KiB/s rd, 1.2 MiB/s wr, 86 op/s
Oct 02 12:17:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:17:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355568132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3030137041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1355568132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:51 compute-1 nova_compute[230518]: 2025-10-02 12:17:51.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.290 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.291 2 DEBUG oslo_concurrency.lockutils [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.292 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:52 compute-1 nova_compute[230518]: 2025-10-02 12:17:52.292 2 DEBUG nova.compute.manager [req-d6551eb9-6aee-4e24-856e-f2aa14543657 req-9cbada36-f322-4558-b656-5b5eaddbe20d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:52 compute-1 ceph-mon[80926]: pgmap v1163: 305 pgs: 305 active+clean; 269 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 971 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 02 12:17:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:17:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:53.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.207 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Took 5.62 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.208 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.225 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp10f21u5k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ecee1ec0-1a8d-4d67-b996-205a942120ae',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(7d05ad01-db02-4010-91c3-110015cf1810),old_vol_attachment_ids={20a19061-0239-43b4-b9d7-980e7acde072='ff6e5128-91a9-4273-b9c7-d3a4c775f1fa'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.228 2 DEBUG nova.objects.instance [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lazy-loading 'migration_context' on Instance uuid ecee1ec0-1a8d-4d67-b996-205a942120ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.229 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.230 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.231 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.248 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Find same serial number: pos=1, serial=20a19061-0239-43b4-b9d7-980e7acde072 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.250 2 DEBUG nova.virt.libvirt.vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-188092
8942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:43Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.250 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.251 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.251 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:17:53 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:0e:5e:ba"/>
Oct 02 12:17:53 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:17:53 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:17:53 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:17:53 compute-1 nova_compute[230518]:   <target dev="tap7539c03e-c9"/>
Oct 02 12:17:53 compute-1 nova_compute[230518]: </interface>
Oct 02 12:17:53 compute-1 nova_compute[230518]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.252 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.733 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.734 2 INFO nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:17:53 compute-1 nova_compute[230518]: 2025-10-02 12:17:53.864 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:17:54 compute-1 ceph-mon[80926]: pgmap v1164: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.367 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.368 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.409 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.410 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.410 2 WARNING nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-changed-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.compute.manager [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing instance network info cache due to event network-changed-7539c03e-c932-4473-8d75-729cbed6008a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.411 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Refreshing network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:17:54 compute-1 podman[244531]: 2025-10-02 12:17:54.801667318 +0000 UTC m=+0.048823716 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 12:17:54 compute-1 podman[244530]: 2025-10-02 12:17:54.822935007 +0000 UTC m=+0.071094827 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.870 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:54 compute-1 nova_compute[230518]: 2025-10-02 12:17:54.871 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:17:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:55.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:55 compute-1 sudo[244573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:17:55 compute-1 sudo[244573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:55 compute-1 sudo[244573]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:55 compute-1 sudo[244598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:17:55 compute-1 sudo[244598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:17:55 compute-1 sudo[244598]: pam_unix(sudo:session): session closed for user root
Oct 02 12:17:55 compute-1 nova_compute[230518]: 2025-10-02 12:17:55.374 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:55 compute-1 nova_compute[230518]: 2025-10-02 12:17:55.374 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:17:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1585577951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/796225185' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:17:55 compute-1 nova_compute[230518]: 2025-10-02 12:17:55.876 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:55 compute-1 nova_compute[230518]: 2025-10-02 12:17:55.877 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.090 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updated VIF entry in instance network info cache for port 7539c03e-c932-4473-8d75-729cbed6008a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.091 2 DEBUG nova.network.neutron [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Updating instance_info_cache with network_info: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.112 2 DEBUG oslo_concurrency.lockutils [req-645839b4-1a5e-43ac-af50-112cccefe0cf req-1d751890-246d-46a8-bf8e-21ebdc121459 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ecee1ec0-1a8d-4d67-b996-205a942120ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.380 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Current 50 elapsed 3 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.381 2 DEBUG nova.virt.libvirt.migration [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.437 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407476.4370553, ecee1ec0-1a8d-4d67-b996-205a942120ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.438 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Paused (Lifecycle Event)
Oct 02 12:17:56 compute-1 ceph-mon[80926]: pgmap v1165: 305 pgs: 305 active+clean; 310 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.5 MiB/s wr, 92 op/s
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.672 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:17:56 compute-1 kernel: tap7539c03e-c9 (unregistering): left promiscuous mode
Oct 02 12:17:56 compute-1 NetworkManager[44960]: <info>  [1759407476.8235] device (tap7539c03e-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:17:56 compute-1 ovn_controller[129257]: 2025-10-02T12:17:56Z|00133|binding|INFO|Releasing lport 7539c03e-c932-4473-8d75-729cbed6008a from this chassis (sb_readonly=0)
Oct 02 12:17:56 compute-1 ovn_controller[129257]: 2025-10-02T12:17:56Z|00134|binding|INFO|Setting lport 7539c03e-c932-4473-8d75-729cbed6008a down in Southbound
Oct 02 12:17:56 compute-1 ovn_controller[129257]: 2025-10-02T12:17:56Z|00135|binding|INFO|Removing iface tap7539c03e-c9 ovn-installed in OVS
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.840 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:5e:ba 10.100.0.9'], port_security=['fa:16:3e:0e:5e:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b9588630-ee40-495c-89d2-4219f6b0f0b5'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ecee1ec0-1a8d-4d67-b996-205a942120ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '18', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=7539c03e-c932-4473-8d75-729cbed6008a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.841 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 7539c03e-c932-4473-8d75-729cbed6008a in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.843 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5989958f-ccbb-4db4-8dcb-18563aa2418e
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.859 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47d5b9ae-e0c9-428e-ac4c-cf3a3416963b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.887 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[61240be7-d5d3-4881-b34c-9b4ef0428581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.890 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b145f7df-3425-4182-b571-3d82fc3c2c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct 02 12:17:56 compute-1 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001f.scope: Consumed 4.298s CPU time.
Oct 02 12:17:56 compute-1 systemd-machined[188247]: Machine qemu-17-instance-0000001f terminated.
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.920 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98bdb7bf-a529-4b95-a587-0f526d839ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.939 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d445c1bc-1369-4058-9748-095a5d571881]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5989958f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:d2:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1378, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1378, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527320, 'reachable_time': 20483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244637, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.956 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[caf6b109-def9-415b-bed5-48f97212ea2d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527331, 'tstamp': 527331}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244638, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5989958f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527333, 'tstamp': 527333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244638, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.957 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5989958f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5989958f-c0, col_values=(('external_ids', {'iface-id': 'c7d8e124-cc34-42e6-82ac-6fdf057166bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:17:56.964 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:17:56 compute-1 nova_compute[230518]: 2025-10-02 12:17:56.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:57 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct 02 12:17:57 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_dac on volume-20a19061-0239-43b4-b9d7-980e7acde072: No such file or directory
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.065 2 DEBUG nova.virt.libvirt.guest [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.066 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation has completed
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.066 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] _post_live_migration() is started..
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.068 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:17:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:57.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG oslo_concurrency.lockutils [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.322 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:57 compute-1 nova_compute[230518]: 2025-10-02 12:17:57.323 2 DEBUG nova.compute.manager [req-778de673-99f7-44fa-b15c-c6d5629179a2 req-e37b1c07-d6d1-44f5-8f3a-3f0ce5363c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:58 compute-1 ceph-mon[80926]: pgmap v1166: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 374 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.655 2 DEBUG nova.network.neutron [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Activated binding for port 7539c03e-c932-4473-8d75-729cbed6008a and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.656 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.656 2 DEBUG nova.virt.libvirt.vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:17:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-343208228',display_name='tempest-LiveMigrationTest-server-343208228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-343208228',id=31,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:17:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-urrasvl8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:17:46Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=ecee1ec0-1a8d-4d67-b996-205a942120ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converting VIF {"id": "7539c03e-c932-4473-8d75-729cbed6008a", "address": "fa:16:3e:0e:5e:ba", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7539c03e-c9", "ovs_interfaceid": "7539c03e-c932-4473-8d75-729cbed6008a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG nova.network.os_vif_util [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.657 2 DEBUG os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.659 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7539c03e-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.663 2 INFO os_vif [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:5e:ba,bridge_name='br-int',has_traffic_filtering=True,id=7539c03e-c932-4473-8d75-729cbed6008a,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7539c03e-c9')
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.664 2 DEBUG nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.664 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deleting instance files /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del
Oct 02 12:17:58 compute-1 nova_compute[230518]: 2025-10-02 12:17:58.665 2 INFO nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Deletion of /var/lib/nova/instances/ecee1ec0-1a8d-4d67-b996-205a942120ae_del complete
Oct 02 12:17:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:17:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:17:59.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:17:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:17:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:17:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:17:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.409 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.410 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.410 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.411 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.411 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.412 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.412 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.413 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-unplugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.414 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.414 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.415 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.415 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG oslo_concurrency.lockutils [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.416 2 DEBUG nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] No waiting events found dispatching network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:17:59 compute-1 nova_compute[230518]: 2025-10-02 12:17:59.416 2 WARNING nova.compute.manager [req-7465ead8-9c7f-4a7d-a568-a1479b0a9ff2 req-649fd2cc-7d3b-4b8c-9c9f-e0782c7e1d1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Received unexpected event network-vif-plugged-7539c03e-c932-4473-8d75-729cbed6008a for instance with vm_state active and task_state migrating.
Oct 02 12:17:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3679030386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:00 compute-1 ceph-mon[80926]: pgmap v1167: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 337 KiB/s rd, 3.3 MiB/s wr, 92 op/s
Oct 02 12:18:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:01.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:01 compute-1 ceph-mon[80926]: pgmap v1168: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 159 op/s
Oct 02 12:18:01 compute-1 nova_compute[230518]: 2025-10-02 12:18:01.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:03 compute-1 nova_compute[230518]: 2025-10-02 12:18:03.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:04 compute-1 ceph-mon[80926]: pgmap v1169: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 125 op/s
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.242 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.243 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.244 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "ecee1ec0-1a8d-4d67-b996-205a942120ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.277 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.278 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.278 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.279 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.279 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2412881761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.711 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.806 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.806 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:04 compute-1 podman[244673]: 2025-10-02 12:18:04.815465863 +0000 UTC m=+0.055425614 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.969 2 WARNING nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.970 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4571MB free_disk=20.87615966796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.971 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:04 compute-1 nova_compute[230518]: 2025-10-02 12:18:04.971 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.027 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration for instance ecee1ec0-1a8d-4d67-b996-205a942120ae refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.057 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:18:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2412881761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.115 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Instance a114d722-ceac-442e-8b38-c2892fda526b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Migration 7d05ad01-db02-4010-91c3-110015cf1810 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.116 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:18:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:18:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:18:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.207 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1316808026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.631 2 DEBUG oslo_concurrency.processutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.636 2 DEBUG nova.compute.provider_tree [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.669 2 DEBUG nova.scheduler.client.report [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.693 2 DEBUG nova.compute.resource_tracker [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.693 2 DEBUG oslo_concurrency.lockutils [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.700 2 INFO nova.compute.manager [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Oct 02 12:18:05 compute-1 podman[244717]: 2025-10-02 12:18:05.793306757 +0000 UTC m=+0.052520962 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.896 2 INFO nova.scheduler.client.report [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] Deleted allocation for migration 7d05ad01-db02-4010-91c3-110015cf1810
Oct 02 12:18:05 compute-1 nova_compute[230518]: 2025-10-02 12:18:05.896 2 DEBUG nova.virt.libvirt.driver [None req-40d08b1b-a5b1-4c01-b419-3510c69ef528 571990bd3b4445a6add45bfb1c70c84c 56632b212f4045a2b4bda23a0a743dcd - - default default] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:18:06 compute-1 ceph-mon[80926]: pgmap v1170: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Oct 02 12:18:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1691804989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1316808026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:06 compute-1 nova_compute[230518]: 2025-10-02 12:18:06.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:08 compute-1 ceph-mon[80926]: pgmap v1171: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 463 KiB/s wr, 93 op/s
Oct 02 12:18:08 compute-1 nova_compute[230518]: 2025-10-02 12:18:08.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:09.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2976361597' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:10 compute-1 ceph-mon[80926]: pgmap v1172: 305 pgs: 305 active+clean; 326 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.1 KiB/s wr, 74 op/s
Oct 02 12:18:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:11.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.286 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.287 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.287 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.288 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.288 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.289 2 INFO nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Terminating instance
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.290 2 DEBUG nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:18:11 compute-1 kernel: tap965edc3f-df (unregistering): left promiscuous mode
Oct 02 12:18:11 compute-1 NetworkManager[44960]: <info>  [1759407491.3448] device (tap965edc3f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00136|binding|INFO|Releasing lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00137|binding|INFO|Setting lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de down in Southbound
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00138|binding|INFO|Releasing lport 92466114-86f5-4a18-ad64-93c2127fe0d3 from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00139|binding|INFO|Setting lport 92466114-86f5-4a18-ad64-93c2127fe0d3 down in Southbound
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00140|binding|INFO|Removing iface tap965edc3f-df ovn-installed in OVS
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.359 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.360 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.361 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis
Oct 02 12:18:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2594510805' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00141|binding|INFO|Releasing lport c67f345b-5542-4cd7-a60b-7617c8d1414e from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00142|binding|INFO|Releasing lport c7d8e124-cc34-42e6-82ac-6fdf057166bf from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.363 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec11b62-840a-4103-be2a-6248ec5a0456]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.364 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e namespace which is not needed anymore
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct 02 12:18:11 compute-1 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001d.scope: Consumed 6.718s CPU time.
Oct 02 12:18:11 compute-1 systemd-machined[188247]: Machine qemu-16-instance-0000001d terminated.
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : haproxy version is 2.8.14-c23fe91
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [NOTICE]   (243888) : path to executable is /usr/sbin/haproxy
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : Exiting Master process...
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : Exiting Master process...
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [ALERT]    (243888) : Current worker (243890) exited with code 143 (Terminated)
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e[243884]: [WARNING]  (243888) : All workers exited. Exiting... (0)
Oct 02 12:18:11 compute-1 systemd[1]: libpod-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope: Deactivated successfully.
Oct 02 12:18:11 compute-1 podman[244761]: 2025-10-02 12:18:11.493508535 +0000 UTC m=+0.046803392 container died 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:11 compute-1 kernel: tap965edc3f-df: entered promiscuous mode
Oct 02 12:18:11 compute-1 systemd-udevd[244740]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00143|binding|INFO|Claiming lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de for this chassis.
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00144|binding|INFO|965edc3f-df96-430d-8b4b-4f3dbb19e9de: Claiming fa:16:3e:bf:7b:22 10.100.0.10
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00145|binding|INFO|Claiming lport 92466114-86f5-4a18-ad64-93c2127fe0d3 for this chassis.
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00146|binding|INFO|92466114-86f5-4a18-ad64-93c2127fe0d3: Claiming fa:16:3e:bd:c1:f4 19.80.0.104
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 NetworkManager[44960]: <info>  [1759407491.5110] manager: (tap965edc3f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Oct 02 12:18:11 compute-1 kernel: tap965edc3f-df (unregistering): left promiscuous mode
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.528 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6-userdata-shm.mount: Deactivated successfully.
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.531 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-8333f9bed9a3ff266e399a2d551e934a36972a2ab5ffe8727ce87e882998249b-merged.mount: Deactivated successfully.
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.539 2 INFO nova.virt.libvirt.driver [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Instance destroyed successfully.
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.539 2 DEBUG nova.objects.instance [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lazy-loading 'resources' on Instance uuid a114d722-ceac-442e-8b38-c2892fda526b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:11 compute-1 podman[244761]: 2025-10-02 12:18:11.548428843 +0000 UTC m=+0.101723680 container cleanup 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:11 compute-1 systemd[1]: libpod-conmon-145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6.scope: Deactivated successfully.
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.562 2 DEBUG nova.virt.libvirt.vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2023518062',display_name='tempest-LiveMigrationTest-server-2023518062',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2023518062',id=29,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:16:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f6188e258a04ea1a49e6b415bce3fc9',ramdisk_id='',reservation_id='r-f60c0zik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1880928942',owner_user_name='tempest-LiveMigrationTest-1880928942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:16:54Z,user_data=None,user_id='e0cdfd1473bd4963b4ded642a43c35f3',uuid=a114d722-ceac-442e-8b38-c2892fda526b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.562 2 DEBUG nova.network.os_vif_util [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converting VIF {"id": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "address": "fa:16:3e:bf:7b:22", "network": {"id": "5989958f-ccbb-4db4-8dcb-18563aa2418e", "bridge": "br-int", "label": "tempest-LiveMigrationTest-883744957-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f6188e258a04ea1a49e6b415bce3fc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap965edc3f-df", "ovs_interfaceid": "965edc3f-df96-430d-8b4b-4f3dbb19e9de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.563 2 DEBUG nova.network.os_vif_util [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.563 2 DEBUG os_vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap965edc3f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00147|binding|INFO|Releasing lport 965edc3f-df96-430d-8b4b-4f3dbb19e9de from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 ovn_controller[129257]: 2025-10-02T12:18:11Z|00148|binding|INFO|Releasing lport 92466114-86f5-4a18-ad64-93c2127fe0d3 from this chassis (sb_readonly=0)
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.575 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:7b:22 10.100.0.10'], port_security=['fa:16:3e:bf:7b:22 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1502630260', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a114d722-ceac-442e-8b38-c2892fda526b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1502630260', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '11', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34cc3fdc-62f5-47cf-be4b-547a25938be9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=965edc3f-df96-430d-8b4b-4f3dbb19e9de) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.578 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:c1:f4 19.80.0.104'], port_security=['fa:16:3e:bd:c1:f4 19.80.0.104'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['965edc3f-df96-430d-8b4b-4f3dbb19e9de'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-342109323', 'neutron:cidrs': '19.80.0.104/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-342109323', 'neutron:project_id': '7f6188e258a04ea1a49e6b415bce3fc9', 'neutron:revision_number': '5', 'neutron:security_group_ids': '583e80e3-bda7-43ee-b04c-3ce88c2c7611', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=0bcf5be3-3921-4228-85d5-12bbaf2eb666, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=92466114-86f5-4a18-ad64-93c2127fe0d3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.619 2 INFO os_vif [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:7b:22,bridge_name='br-int',has_traffic_filtering=True,id=965edc3f-df96-430d-8b4b-4f3dbb19e9de,network=Network(5989958f-ccbb-4db4-8dcb-18563aa2418e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap965edc3f-df')
Oct 02 12:18:11 compute-1 podman[244793]: 2025-10-02 12:18:11.625202587 +0000 UTC m=+0.047143613 container remove 145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.641 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[66480642-1041-447e-b42f-aa35e2ef6aed]: (4, ('Thu Oct  2 12:18:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6)\n145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6\nThu Oct  2 12:18:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e (145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6)\n145912ae22aebd03ea1ed6d511382e5a0c9ce1a8f0669016c41b002415de78e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.643 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2bf365-ffb2-4a15-92a6-22488c885f02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.644 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5989958f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 kernel: tap5989958f-c0: left promiscuous mode
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.664 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfbed1b-f4de-410a-a7d2-0218d01215de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.675 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.675 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG oslo_concurrency.lockutils [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.676 2 DEBUG nova.compute.manager [req-29ff6684-07da-473d-a423-5dc2a7b9ea0e req-cd96cce7-f34e-4689-a9b8-787bb9624377 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-unplugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.701 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff08240c-ba03-438d-9cb2-1b01e6de85cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.702 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[190b4af4-d912-404e-aefd-deff087d429b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.720 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[490eb88e-a9a4-43bf-8aa8-6907c2414bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527313, 'reachable_time': 21698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244826, 'error': None, 'target': 'ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 systemd[1]: run-netns-ovnmeta\x2d5989958f\x2dccbb\x2d4db4\x2d8dcb\x2d18563aa2418e.mount: Deactivated successfully.
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.723 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5989958f-ccbb-4db4-8dcb-18563aa2418e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.723 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa51121-a67b-4068-b9f8-d2d12470af46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.725 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.727 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c08fc7-575d-44a6-bb2c-518cf2a0ef3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.728 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe namespace which is not needed anymore
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : haproxy version is 2.8.14-c23fe91
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [NOTICE]   (243962) : path to executable is /usr/sbin/haproxy
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : Exiting Master process...
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : Exiting Master process...
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [ALERT]    (243962) : Current worker (243964) exited with code 143 (Terminated)
Oct 02 12:18:11 compute-1 neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe[243958]: [WARNING]  (243962) : All workers exited. Exiting... (0)
Oct 02 12:18:11 compute-1 systemd[1]: libpod-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope: Deactivated successfully.
Oct 02 12:18:11 compute-1 podman[244844]: 2025-10-02 12:18:11.859406813 +0000 UTC m=+0.046211384 container died 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:11 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7-userdata-shm.mount: Deactivated successfully.
Oct 02 12:18:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-7ccfcc3d3923f9105c6b44d482785ea52e49c37aea974d395f37f06c9024bca9-merged.mount: Deactivated successfully.
Oct 02 12:18:11 compute-1 podman[244844]: 2025-10-02 12:18:11.888672983 +0000 UTC m=+0.075477564 container cleanup 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:11 compute-1 systemd[1]: libpod-conmon-21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7.scope: Deactivated successfully.
Oct 02 12:18:11 compute-1 podman[244872]: 2025-10-02 12:18:11.946539904 +0000 UTC m=+0.039476053 container remove 21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.953 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0a89c6-25d1-4364-adb1-6d3a5d15f58e]: (4, ('Thu Oct  2 12:18:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7)\n21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7\nThu Oct  2 12:18:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe (21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7)\n21e94ede3a85724ada1e04c7d3a3300a26a42c3880ab065702f1c77892a2fee7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.954 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed0057c-29b5-48b4-be79-d9566bd2cd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.955 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc4336bf-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 kernel: tapdc4336bf-60: left promiscuous mode
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 nova_compute[230518]: 2025-10-02 12:18:11.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:11.980 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a18b137-ec16-43fe-a7b4-597d33d3024a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.014 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[288dff4b-f37d-4a7d-8b04-e3ea74d9b318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[422ad25c-4d3f-4be9-98e0-542762beceb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.030 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20b6311c-77b9-4269-b345-12f7d1b91216]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527402, 'reachable_time': 35441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244888, 'error': None, 'target': 'ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.031 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dc4336bf-639d-45a4-88f2-32f0af1b9dbe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.031 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[16dcff14-3d58-4ff5-9a09-34774be44bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.032 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.033 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.034 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c84efdce-c708-45a6-ad8b-b5ae5a8cc22a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.034 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.035 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.035 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be106de5-cb63-47ce-acc9-bba7b4117ca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.036 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 965edc3f-df96-430d-8b4b-4f3dbb19e9de in datapath 5989958f-ccbb-4db4-8dcb-18563aa2418e unbound from our chassis
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.037 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5989958f-ccbb-4db4-8dcb-18563aa2418e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.037 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e69869be-fcad-4ecc-ac7f-c95bb0daa100]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.038 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 92466114-86f5-4a18-ad64-93c2127fe0d3 in datapath dc4336bf-639d-45a4-88f2-32f0af1b9dbe unbound from our chassis
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.039 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc4336bf-639d-45a4-88f2-32f0af1b9dbe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:12.039 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ba9f0b-4c9c-45b9-8aab-2042787fc859]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.068 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407477.064609, ecee1ec0-1a8d-4d67-b996-205a942120ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.068 2 INFO nova.compute.manager [-] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] VM Stopped (Lifecycle Event)
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.087 2 DEBUG nova.compute.manager [None req-0096b992-debf-4856-9070-794a82b6a69c - - - - - -] [instance: ecee1ec0-1a8d-4d67-b996-205a942120ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.270 2 INFO nova.virt.libvirt.driver [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deleting instance files /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.271 2 INFO nova.virt.libvirt.driver [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deletion of /var/lib/nova/instances/a114d722-ceac-442e-8b38-c2892fda526b_del complete
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.319 2 INFO nova.compute.manager [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 1.03 seconds to destroy the instance on the hypervisor.
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.319 2 DEBUG oslo.service.loopingcall [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.319 2 DEBUG nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:18:12 compute-1 nova_compute[230518]: 2025-10-02 12:18:12.320 2 DEBUG nova.network.neutron [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:18:12 compute-1 ceph-mon[80926]: pgmap v1173: 305 pgs: 305 active+clean; 324 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Oct 02 12:18:12 compute-1 systemd[1]: run-netns-ovnmeta\x2ddc4336bf\x2d639d\x2d45a4\x2d88f2\x2d32f0af1b9dbe.mount: Deactivated successfully.
Oct 02 12:18:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2180439530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.773 2 DEBUG nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a114d722-ceac-442e-8b38-c2892fda526b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG oslo_concurrency.lockutils [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.774 2 DEBUG nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] No waiting events found dispatching network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:13 compute-1 nova_compute[230518]: 2025-10-02 12:18:13.775 2 WARNING nova.compute.manager [req-07ec5d32-1481-43c4-ad27-36d0476adc81 req-22f199e1-9543-46ba-911c-96e942aefb83 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Received unexpected event network-vif-plugged-965edc3f-df96-430d-8b4b-4f3dbb19e9de for instance with vm_state active and task_state deleting.
Oct 02 12:18:14 compute-1 ceph-mon[80926]: pgmap v1174: 305 pgs: 305 active+clean; 313 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 399 KiB/s rd, 2.9 MiB/s wr, 98 op/s
Oct 02 12:18:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/891112775' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:15.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:15.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:16 compute-1 nova_compute[230518]: 2025-10-02 12:18:16.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:16.740 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:16 compute-1 nova_compute[230518]: 2025-10-02 12:18:16.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:16.742 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:18:16 compute-1 ceph-mon[80926]: pgmap v1175: 305 pgs: 305 active+clean; 311 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Oct 02 12:18:16 compute-1 nova_compute[230518]: 2025-10-02 12:18:16.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:17.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:17.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:17 compute-1 nova_compute[230518]: 2025-10-02 12:18:17.697 2 DEBUG nova.network.neutron [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:17 compute-1 nova_compute[230518]: 2025-10-02 12:18:17.719 2 INFO nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Took 5.40 seconds to deallocate network for instance.
Oct 02 12:18:17 compute-1 nova_compute[230518]: 2025-10-02 12:18:17.796 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:17 compute-1 nova_compute[230518]: 2025-10-02 12:18:17.797 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:17 compute-1 nova_compute[230518]: 2025-10-02 12:18:17.859 2 DEBUG oslo_concurrency.processutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:17 compute-1 ceph-mon[80926]: pgmap v1176: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/914543213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.286 2 DEBUG oslo_concurrency.processutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.291 2 DEBUG nova.compute.provider_tree [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.352 2 DEBUG nova.scheduler.client.report [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.454 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.500 2 INFO nova.scheduler.client.report [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Deleted allocations for instance a114d722-ceac-442e-8b38-c2892fda526b
Oct 02 12:18:18 compute-1 nova_compute[230518]: 2025-10-02 12:18:18.986 2 DEBUG oslo_concurrency.lockutils [None req-cf8566cc-9860-4737-bf62-b35b1435b7d7 e0cdfd1473bd4963b4ded642a43c35f3 7f6188e258a04ea1a49e6b415bce3fc9 - - default default] Lock "a114d722-ceac-442e-8b38-c2892fda526b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/914543213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:18:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:19.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:18:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:20 compute-1 ceph-mon[80926]: pgmap v1177: 305 pgs: 305 active+clean; 246 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:18:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:21 compute-1 nova_compute[230518]: 2025-10-02 12:18:21.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:21 compute-1 nova_compute[230518]: 2025-10-02 12:18:21.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:22 compute-1 ceph-mon[80926]: pgmap v1178: 305 pgs: 305 active+clean; 246 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.540 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.541 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.572 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.655 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.656 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.664 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.664 2 INFO nova.compute.claims [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:18:22 compute-1 nova_compute[230518]: 2025-10-02 12:18:22.806 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:23.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:23.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/967908571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.277 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.283 2 DEBUG nova.compute.provider_tree [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.349 2 DEBUG nova.scheduler.client.report [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.537 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.537 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:18:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/967908571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.720 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.720 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.825 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:18:23 compute-1 nova_compute[230518]: 2025-10-02 12:18:23.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.158 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.357 2 DEBUG nova.policy [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ed8b6a2129742dfb3b8a0d9f044ac24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.604 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.606 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.606 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating image(s)
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.638 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.669 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.697 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.701 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:24 compute-1 ceph-mon[80926]: pgmap v1179: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.766 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.767 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.797 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:24 compute-1 nova_compute[230518]: 2025-10-02 12:18:24.802 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:18:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:18:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.080 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.143 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] resizing rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:18:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.236 2 DEBUG nova.objects.instance [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'migration_context' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.282 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Ensure instance console log exists: /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.283 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.284 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:25 compute-1 nova_compute[230518]: 2025-10-02 12:18:25.500 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Successfully created port: 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:18:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3586233434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:25 compute-1 podman[245101]: 2025-10-02 12:18:25.804050607 +0000 UTC m=+0.053847135 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:18:25 compute-1 podman[245100]: 2025-10-02 12:18:25.852734208 +0000 UTC m=+0.105932322 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:26 compute-1 nova_compute[230518]: 2025-10-02 12:18:26.537 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407491.5365417, a114d722-ceac-442e-8b38-c2892fda526b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:26 compute-1 nova_compute[230518]: 2025-10-02 12:18:26.538 2 INFO nova.compute.manager [-] [instance: a114d722-ceac-442e-8b38-c2892fda526b] VM Stopped (Lifecycle Event)
Oct 02 12:18:26 compute-1 nova_compute[230518]: 2025-10-02 12:18:26.572 2 DEBUG nova.compute.manager [None req-d5736351-64e4-4954-ad7f-f3781d793ace - - - - - -] [instance: a114d722-ceac-442e-8b38-c2892fda526b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:26 compute-1 nova_compute[230518]: 2025-10-02 12:18:26.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:26.744 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:26 compute-1 ceph-mon[80926]: pgmap v1180: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1013 KiB/s wr, 132 op/s
Oct 02 12:18:26 compute-1 nova_compute[230518]: 2025-10-02 12:18:26.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:27.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:27 compute-1 nova_compute[230518]: 2025-10-02 12:18:27.812 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Successfully updated port: 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:18:27 compute-1 nova_compute[230518]: 2025-10-02 12:18:27.841 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:27 compute-1 nova_compute[230518]: 2025-10-02 12:18:27.842 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:27 compute-1 nova_compute[230518]: 2025-10-02 12:18:27.842 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:18:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:28 compute-1 ceph-mon[80926]: pgmap v1181: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 133 op/s
Oct 02 12:18:28 compute-1 nova_compute[230518]: 2025-10-02 12:18:28.261 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:18:28 compute-1 nova_compute[230518]: 2025-10-02 12:18:28.326 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:28 compute-1 nova_compute[230518]: 2025-10-02 12:18:28.327 2 DEBUG nova.compute.manager [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:28 compute-1 nova_compute[230518]: 2025-10-02 12:18:28.327 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:29.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/49774349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.370 2 DEBUG nova.network.neutron [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.428 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance network_info: |[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.429 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.432 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start _get_guest_xml network_info=[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.436 2 WARNING nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.442 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.443 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.446 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.446 2 DEBUG nova.virt.libvirt.host [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.447 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.448 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.449 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.450 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.450 2 DEBUG nova.virt.hardware [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.452 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/885113874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.899 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.930 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:29 compute-1 nova_compute[230518]: 2025-10-02 12:18:29.935 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:30 compute-1 ceph-mon[80926]: pgmap v1182: 305 pgs: 305 active+clean; 269 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 118 op/s
Oct 02 12:18:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/885113874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3202358119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.383 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.385 2 DEBUG nova.virt.libvirt.vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-124
1678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:24Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.385 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.386 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.387 2 DEBUG nova.objects.instance [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.405 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <uuid>4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</uuid>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <name>instance-00000023</name>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:name>tempest-SecurityGroupsTestJSON-server-488412587</nova:name>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:18:29</nova:creationTime>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:user uuid="2ed8b6a2129742dfb3b8a0d9f044ac24">tempest-SecurityGroupsTestJSON-1241678427-project-member</nova:user>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:project uuid="f0bd0c6232b84d03a010ba8cf85bda46">tempest-SecurityGroupsTestJSON-1241678427</nova:project>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <nova:port uuid="02efafc4-ff2d-47ca-98bd-8e608e9980b8">
Oct 02 12:18:30 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <system>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="serial">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="uuid">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </system>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <os>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </os>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <features>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </features>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk">
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config">
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:18:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:30:9c:4f"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <target dev="tap02efafc4-ff"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log" append="off"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <video>
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </video>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:18:30 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:18:30 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:18:30 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:18:30 compute-1 nova_compute[230518]: </domain>
Oct 02 12:18:30 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Preparing to wait for external event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.407 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.408 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.408 2 DEBUG nova.virt.libvirt.vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTe
stJSON-1241678427-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:18:24Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.409 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.409 2 DEBUG nova.network.os_vif_util [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.410 2 DEBUG os_vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02efafc4-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02efafc4-ff, col_values=(('external_ids', {'iface-id': '02efafc4-ff2d-47ca-98bd-8e608e9980b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:9c:4f', 'vm-uuid': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:30 compute-1 NetworkManager[44960]: <info>  [1759407510.4164] manager: (tap02efafc4-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.421 2 INFO os_vif [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.474 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.474 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.475 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] No VIF found with MAC fa:16:3e:30:9c:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.475 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Using config drive
Oct 02 12:18:30 compute-1 nova_compute[230518]: 2025-10-02 12:18:30.502 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:31.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.297 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Creating config drive at /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.302 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdf0tw8h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3202358119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.440 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppdf0tw8h" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.477 2 DEBUG nova.storage.rbd_utils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] rbd image 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.482 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.515 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.516 2 DEBUG nova.network.neutron [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.539 2 DEBUG oslo_concurrency.lockutils [req-fc6b1c2a-1405-40f6-8142-ec441ea6a046 req-02e317be-e687-47c3-a2c3-f8716a7f2752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.808 2 DEBUG oslo_concurrency.processutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.809 2 INFO nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deleting local config drive /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/disk.config because it was imported into RBD.
Oct 02 12:18:31 compute-1 kernel: tap02efafc4-ff: entered promiscuous mode
Oct 02 12:18:31 compute-1 NetworkManager[44960]: <info>  [1759407511.8605] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct 02 12:18:31 compute-1 ovn_controller[129257]: 2025-10-02T12:18:31Z|00149|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct 02 12:18:31 compute-1 ovn_controller[129257]: 2025-10-02T12:18:31Z|00150|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.880 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.882 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f bound to our chassis
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.883 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:31 compute-1 systemd-udevd[245277]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:31 compute-1 systemd-machined[188247]: New machine qemu-18-instance-00000023.
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.895 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1231521a-0ec3-492f-8bc2-9f5aaee42aee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.896 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcec9cbfc-51 in ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.898 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcec9cbfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.898 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebc21ce-8867-41e6-85ce-4e8c6722de2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.899 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cea762-9591-492e-846a-4b9bfec94c9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 NetworkManager[44960]: <info>  [1759407511.9029] device (tap02efafc4-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:18:31 compute-1 NetworkManager[44960]: <info>  [1759407511.9039] device (tap02efafc4-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.909 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[7c55edfb-ff27-47ee-826d-36249fef5b4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 systemd[1]: Started Virtual Machine qemu-18-instance-00000023.
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.935 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e07d84-1559-4a98-b27d-4a8510581ea8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 ovn_controller[129257]: 2025-10-02T12:18:31Z|00151|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct 02 12:18:31 compute-1 ovn_controller[129257]: 2025-10-02T12:18:31Z|00152|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.965 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6701ee-a606-4919-bd1f-7690f3343a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:31.971 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27d33747-3bf9-43a6-952d-7009e59fe51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:31 compute-1 NetworkManager[44960]: <info>  [1759407511.9724] manager: (tapcec9cbfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct 02 12:18:31 compute-1 nova_compute[230518]: 2025-10-02 12:18:31.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.002 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6652c145-05ec-40a8-ac74-a08214840299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.005 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e83c55d1-2f9a-4e19-9007-95b03c6d6521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 NetworkManager[44960]: <info>  [1759407512.0268] device (tapcec9cbfc-50): carrier: link connected
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.034 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2469d699-de01-499d-aab7-22b10de6ed26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.051 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fa8f4d-32f3-414b-aa82-f8ea79ae2e59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537558, 'reachable_time': 20795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245310, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.067 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4be9e394-22bc-4f54-9f49-c23f156729e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:917'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537558, 'tstamp': 537558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245312, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f733ae7-2c00-416f-b560-a81dcf226ce6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 44], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537558, 'reachable_time': 20795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245313, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a14dd9-318c-4d58-b91b-c2ff38fe5aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.189 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a93d6237-71dd-4e37-b639-31a87e27590b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.191 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcec9cbfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:32 compute-1 kernel: tapcec9cbfc-50: entered promiscuous mode
Oct 02 12:18:32 compute-1 NetworkManager[44960]: <info>  [1759407512.1939] manager: (tapcec9cbfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.196 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcec9cbfc-50, col_values=(('external_ids', {'iface-id': '7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:32 compute-1 ovn_controller[129257]: 2025-10-02T12:18:32Z|00153|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.212 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.213 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[866b9cd9-b23e-4977-8d76-cbcc12089757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.213 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:18:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:32.214 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'env', 'PROCESS_TAG=haproxy-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cec9cbfc-5dec-4f85-90c5-6104a054547f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.394 2 DEBUG nova.compute.manager [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.395 2 DEBUG oslo_concurrency.lockutils [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.396 2 DEBUG nova.compute.manager [req-4f10e4c1-952c-481b-984c-f637549695fc req-d64db562-3721-49a6-be71-f78867fc9b87 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Processing event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:18:32 compute-1 podman[245387]: 2025-10-02 12:18:32.602924992 +0000 UTC m=+0.057416017 container create 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:32 compute-1 systemd[1]: Started libpod-conmon-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope.
Oct 02 12:18:32 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:18:32 compute-1 podman[245387]: 2025-10-02 12:18:32.574646102 +0000 UTC m=+0.029137127 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:18:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd8cce23eed1d8f8a72aa27661a61a05c7b042f2a5fd87369a7dd98a871dfe3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:32 compute-1 podman[245387]: 2025-10-02 12:18:32.684367773 +0000 UTC m=+0.138858828 container init 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:32 compute-1 podman[245387]: 2025-10-02 12:18:32.689766082 +0000 UTC m=+0.144257107 container start 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:32 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : New worker (245408) forked
Oct 02 12:18:32 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : Loading success.
Oct 02 12:18:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:18:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:18:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.898 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.900 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.8984327, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Started (Lifecycle Event)
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.903 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.906 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance spawned successfully.
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.906 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:18:32 compute-1 ceph-mon[80926]: pgmap v1183: 305 pgs: 305 active+clean; 246 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.934 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.939 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.942 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.942 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.943 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.944 2 DEBUG nova.virt.libvirt.driver [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.983 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.983 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.8998206, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:32 compute-1 nova_compute[230518]: 2025-10-02 12:18:32.984 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Paused (Lifecycle Event)
Oct 02 12:18:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.034 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.037 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407512.9027326, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.037 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Resumed (Lifecycle Event)
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.073 2 INFO nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 8.47 seconds to spawn the instance on the hypervisor.
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.074 2 DEBUG nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.075 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.080 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.121 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.161 2 INFO nova.compute.manager [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 10.52 seconds to build instance.
Oct 02 12:18:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:33 compute-1 nova_compute[230518]: 2025-10-02 12:18:33.182 2 DEBUG oslo_concurrency.lockutils [None req-57f0dd47-1609-43ab-88be-552f74f2f4ce 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:33.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:34 compute-1 ceph-mon[80926]: pgmap v1184: 305 pgs: 305 active+clean; 246 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Oct 02 12:18:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:18:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1482801893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:18:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/564973710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.528 2 DEBUG nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.529 2 DEBUG oslo_concurrency.lockutils [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.530 2 DEBUG nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:34 compute-1 nova_compute[230518]: 2025-10-02 12:18:34.530 2 WARNING nova.compute.manager [req-ae7f0d0e-003f-4fb0-8bce-018aa19487df req-e74b336b-b6df-414b-b73a-5cbaefa3f7d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state None.
Oct 02 12:18:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:35.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:35.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:35 compute-1 nova_compute[230518]: 2025-10-02 12:18:35.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:35 compute-1 podman[245417]: 2025-10-02 12:18:35.82246764 +0000 UTC m=+0.069890050 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:18:35 compute-1 podman[245435]: 2025-10-02 12:18:35.896408455 +0000 UTC m=+0.048320621 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:18:36 compute-1 nova_compute[230518]: 2025-10-02 12:18:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:36 compute-1 nova_compute[230518]: 2025-10-02 12:18:36.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:36 compute-1 nova_compute[230518]: 2025-10-02 12:18:36.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:18:36 compute-1 ceph-mon[80926]: pgmap v1185: 305 pgs: 305 active+clean; 225 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:18:36 compute-1 nova_compute[230518]: 2025-10-02 12:18:36.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:37.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:37.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:38 compute-1 ceph-mon[80926]: pgmap v1186: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 179 op/s
Oct 02 12:18:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4179098919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.557 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.663 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.663 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.810 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.811 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4638MB free_disk=20.907501220703125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.812 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.812 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.903 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.904 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.904 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:18:38 compute-1 nova_compute[230518]: 2025-10-02 12:18:38.952 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.096 2 DEBUG nova.compute.manager [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.096 2 DEBUG nova.compute.manager [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.097 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:39.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:39.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/388213123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4179098919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/319245846' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:18:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/951820212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.427 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.432 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.449 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.477 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:18:39 compute-1 nova_compute[230518]: 2025-10-02 12:18:39.478 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.304 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.304 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.304 2 INFO nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Rebooting instance
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.364 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.473 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.474 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:18:40 compute-1 nova_compute[230518]: 2025-10-02 12:18:40.544 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:40 compute-1 ceph-mon[80926]: pgmap v1187: 305 pgs: 305 active+clean; 188 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 136 op/s
Oct 02 12:18:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/951820212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2073904783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:41.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.777 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.778 2 DEBUG nova.network.neutron [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.865 2 DEBUG oslo_concurrency.lockutils [req-bf7737d5-37f9-4ec7-a296-947dd30f1d2a req-d68cd729-a262-4b45-ab35-cf38ffd1fbbb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.865 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.866 2 DEBUG nova.network.neutron [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:18:41 compute-1 nova_compute[230518]: 2025-10-02 12:18:41.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1177830465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:42 compute-1 ceph-mon[80926]: pgmap v1188: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Oct 02 12:18:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/65087621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3657979213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:43.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:44 compute-1 ceph-mon[80926]: pgmap v1189: 305 pgs: 305 active+clean; 213 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Oct 02 12:18:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2963092672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.576 2 DEBUG nova.network.neutron [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.794 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:18:44 compute-1 nova_compute[230518]: 2025-10-02 12:18:44.796 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:45 compute-1 nova_compute[230518]: 2025-10-02 12:18:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:45 compute-1 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct 02 12:18:45 compute-1 NetworkManager[44960]: <info>  [1759407525.8754] device (tap02efafc4-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:18:45 compute-1 ovn_controller[129257]: 2025-10-02T12:18:45Z|00154|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct 02 12:18:45 compute-1 ovn_controller[129257]: 2025-10-02T12:18:45Z|00155|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct 02 12:18:45 compute-1 nova_compute[230518]: 2025-10-02 12:18:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:45 compute-1 ovn_controller[129257]: 2025-10-02T12:18:45Z|00156|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct 02 12:18:45 compute-1 nova_compute[230518]: 2025-10-02 12:18:45.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:45 compute-1 nova_compute[230518]: 2025-10-02 12:18:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.949 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.951 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis
Oct 02 12:18:45 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 02 12:18:45 compute-1 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000023.scope: Consumed 12.148s CPU time.
Oct 02 12:18:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.954 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:45 compute-1 systemd-machined[188247]: Machine qemu-18-instance-00000023 terminated.
Oct 02 12:18:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.955 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4afd6253-e189-4a5c-bc16-8a3ce385e12c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:45.956 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace which is not needed anymore
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.098 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance destroyed successfully.
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.099 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'resources' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.166 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.167 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.168 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.168 2 DEBUG os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02efafc4-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.175 2 INFO os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.182 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start _get_guest_xml network_info=[{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.184 2 WARNING nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.188 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.189 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.193 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.194 2 DEBUG nova.virt.libvirt.host [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.194 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.195 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.196 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.virt.hardware [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.197 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.242 2 DEBUG nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.242 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG oslo_concurrency.lockutils [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.243 2 DEBUG nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.244 2 WARNING nova.compute.manager [req-cdc1e700-f25a-418b-85de-ac828378922e req-fd08ce9e-8c4b-437e-90a6-a30c24f361ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.249 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:46 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : haproxy version is 2.8.14-c23fe91
Oct 02 12:18:46 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [NOTICE]   (245406) : path to executable is /usr/sbin/haproxy
Oct 02 12:18:46 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [WARNING]  (245406) : Exiting Master process...
Oct 02 12:18:46 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [ALERT]    (245406) : Current worker (245408) exited with code 143 (Terminated)
Oct 02 12:18:46 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245402]: [WARNING]  (245406) : All workers exited. Exiting... (0)
Oct 02 12:18:46 compute-1 systemd[1]: libpod-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope: Deactivated successfully.
Oct 02 12:18:46 compute-1 podman[245526]: 2025-10-02 12:18:46.341647277 +0000 UTC m=+0.302556547 container died 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:18:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/704874661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:46 compute-1 ceph-mon[80926]: pgmap v1190: 305 pgs: 305 active+clean; 214 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 173 op/s
Oct 02 12:18:46 compute-1 nova_compute[230518]: 2025-10-02 12:18:46.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.001 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf-userdata-shm.mount: Deactivated successfully.
Oct 02 12:18:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-8dd8cce23eed1d8f8a72aa27661a61a05c7b042f2a5fd87369a7dd98a871dfe3-merged.mount: Deactivated successfully.
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.037 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.063 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.103 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:18:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:47.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:47 compute-1 podman[245526]: 2025-10-02 12:18:47.416459371 +0000 UTC m=+1.377368581 container cleanup 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:18:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:18:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4246629214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:47 compute-1 systemd[1]: libpod-conmon-4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf.scope: Deactivated successfully.
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.447 2 DEBUG oslo_concurrency.processutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.449 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.449 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.450 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.453 2 DEBUG nova.objects.instance [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.522 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <uuid>4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</uuid>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <name>instance-00000023</name>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:name>tempest-SecurityGroupsTestJSON-server-488412587</nova:name>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:18:46</nova:creationTime>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:user uuid="2ed8b6a2129742dfb3b8a0d9f044ac24">tempest-SecurityGroupsTestJSON-1241678427-project-member</nova:user>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:project uuid="f0bd0c6232b84d03a010ba8cf85bda46">tempest-SecurityGroupsTestJSON-1241678427</nova:project>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <nova:port uuid="02efafc4-ff2d-47ca-98bd-8e608e9980b8">
Oct 02 12:18:47 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <system>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="serial">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="uuid">4cc8a5b1-c816-476e-9b8c-1d152d2f57c1</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </system>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <os>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </os>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <features>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </features>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk">
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </source>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_disk.config">
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </source>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:18:47 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:30:9c:4f"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <target dev="tap02efafc4-ff"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1/console.log" append="off"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <video>
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </video>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:18:47 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:18:47 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:18:47 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:18:47 compute-1 nova_compute[230518]: </domain>
Oct 02 12:18:47 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.525 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.525 2 DEBUG nova.virt.libvirt.driver [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.527 2 DEBUG nova.virt.libvirt.vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:45Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.527 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.528 2 DEBUG nova.network.os_vif_util [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.528 2 DEBUG os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.531 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02efafc4-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02efafc4-ff, col_values=(('external_ids', {'iface-id': '02efafc4-ff2d-47ca-98bd-8e608e9980b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:9c:4f', 'vm-uuid': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:47 compute-1 NetworkManager[44960]: <info>  [1759407527.5374] manager: (tap02efafc4-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.544 2 INFO os_vif [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')
Oct 02 12:18:47 compute-1 kernel: tap02efafc4-ff: entered promiscuous mode
Oct 02 12:18:47 compute-1 systemd-udevd[245505]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:47 compute-1 NetworkManager[44960]: <info>  [1759407527.8282] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct 02 12:18:47 compute-1 ovn_controller[129257]: 2025-10-02T12:18:47Z|00157|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct 02 12:18:47 compute-1 ovn_controller[129257]: 2025-10-02T12:18:47Z|00158|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 NetworkManager[44960]: <info>  [1759407527.8407] device (tap02efafc4-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:18:47 compute-1 NetworkManager[44960]: <info>  [1759407527.8420] device (tap02efafc4-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:18:47 compute-1 ovn_controller[129257]: 2025-10-02T12:18:47Z|00159|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 nova_compute[230518]: 2025-10-02 12:18:47.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:47 compute-1 systemd-machined[188247]: New machine qemu-19-instance-00000023.
Oct 02 12:18:47 compute-1 systemd[1]: Started Virtual Machine qemu-19-instance-00000023.
Oct 02 12:18:47 compute-1 ovn_controller[129257]: 2025-10-02T12:18:47Z|00160|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct 02 12:18:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:47.944 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/704874661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4246629214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.486 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.487 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.488 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.489 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.489 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.490 2 WARNING nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.490 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.491 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.492 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.492 2 DEBUG oslo_concurrency.lockutils [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.493 2 DEBUG nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:48 compute-1 nova_compute[230518]: 2025-10-02 12:18:48.493 2 WARNING nova.compute.manager [req-3d599416-6964-4f78-876c-80fe90085dba req-37ef51c9-92c1-417a-a1db-693af759b43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:18:49 compute-1 podman[245630]: 2025-10-02 12:18:49.070465851 +0000 UTC m=+1.626474365 container remove 4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.076 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f92052-c159-42a8-9461-6c890d96988f]: (4, ('Thu Oct  2 12:18:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf)\n4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf\nThu Oct  2 12:18:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf)\n4ef8cd3863d83134867da4fadd3b62fdac42d5e85b78380f35f236b3ba474bbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.079 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c433b16d-c360-4be4-9098-1cc2e39205a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.080 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 kernel: tapcec9cbfc-50: left promiscuous mode
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.118 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60aaee44-1622-4f3b-9c28-2ca47e8aacec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.147 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff579398-ce70-40e9-aa8f-ea9b4f1d8c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.150 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eecc5f-efa2-4d6b-bbea-af280ca6bfb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.170 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af882e1a-cd33-4ba6-856b-756c64b5856e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537551, 'reachable_time': 26301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245685, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 systemd[1]: run-netns-ovnmeta\x2dcec9cbfc\x2d5dec\x2d4f85\x2d90c5\x2d6104a054547f.mount: Deactivated successfully.
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.175 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.175 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3b71d8-ddf5-40d7-8eae-b469a62daaf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.176 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.177 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.191 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3aa686-1a85-41d2-8cb9-a09fae78dff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.193 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcec9cbfc-51 in ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.195 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcec9cbfc-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[50431427-a749-4a6d-8d56-dfc19b38e56f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.196 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[23ea38dc-838a-485b-b733-82e519eb3819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:49.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.207 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[414d5056-eabd-408a-9128-40ef01f74559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.233 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e55403b-fee9-4f65-a81c-4974872a873f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.261 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bef62267-8298-4275-971a-dc4a0e923292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.274 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[34793f05-4050-41c5-859f-7d1138f393e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 NetworkManager[44960]: <info>  [1759407529.2775] manager: (tapcec9cbfc-50): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct 02 12:18:49 compute-1 systemd-udevd[245684]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.318 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0848bc2f-4c89-4b9a-98fb-7bcdf99a0088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.320 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ec385dc2-0652-4ca8-8c6f-15d609932aff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 NetworkManager[44960]: <info>  [1759407529.3519] device (tapcec9cbfc-50): carrier: link connected
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.358 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16f2be79-eafc-4843-ba59-f858f7f0fd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.374 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6f7af9-832a-41c6-a3da-795c7188e5bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539290, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245737, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4393bc37-1e3e-4fc3-aca1-49b55d1c772b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe64:917'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539290, 'tstamp': 539290}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245738, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e886d793-5093-4c79-b6c8-d95b41133bbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcec9cbfc-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:64:09:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539290, 'reachable_time': 35141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 245740, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[990d6c4a-507a-4d89-a8eb-06a732415674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.491 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa1f6d5-229d-4e86-8b3f-5d435b2f5741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.493 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.493 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcec9cbfc-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:49 compute-1 ceph-mon[80926]: pgmap v1191: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.8 MiB/s wr, 193 op/s
Oct 02 12:18:49 compute-1 kernel: tapcec9cbfc-50: entered promiscuous mode
Oct 02 12:18:49 compute-1 NetworkManager[44960]: <info>  [1759407529.5311] manager: (tapcec9cbfc-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.534 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcec9cbfc-50, col_values=(('external_ids', {'iface-id': '7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:49 compute-1 ovn_controller[129257]: 2025-10-02T12:18:49Z|00161|binding|INFO|Releasing lport 7fdb9d3a-47f2-4c84-9ee0-5806a194a1f4 from this chassis (sb_readonly=0)
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.553 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0719288d-b471-40e2-9116-bf38082e5b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.555 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/cec9cbfc-5dec-4f85-90c5-6104a054547f.pid.haproxy
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID cec9cbfc-5dec-4f85-90c5-6104a054547f
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:18:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:49.557 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'env', 'PROCESS_TAG=haproxy-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cec9cbfc-5dec-4f85-90c5-6104a054547f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.852 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.853 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407529.8519335, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.853 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Resumed (Lifecycle Event)
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.855 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.858 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance rebooted successfully.
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.859 2 DEBUG nova.compute.manager [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.950 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:49 compute-1 nova_compute[230518]: 2025-10-02 12:18:49.953 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.002 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.003 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407529.8536701, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.004 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Started (Lifecycle Event)
Oct 02 12:18:50 compute-1 podman[245772]: 2025-10-02 12:18:49.932345598 +0000 UTC m=+0.025478402 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.054 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.057 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.138 2 DEBUG oslo_concurrency.lockutils [None req-27bf9609-f146-40f1-b6ab-22d8fc74e26e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.745 2 DEBUG oslo_concurrency.lockutils [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.746 2 DEBUG nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:50 compute-1 nova_compute[230518]: 2025-10-02 12:18:50.746 2 WARNING nova.compute.manager [req-d39bcfee-61fb-4268-8f79-f981b81addd4 req-e65be73c-e880-4aff-bb11-19f8a6ee974b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state None.
Oct 02 12:18:50 compute-1 podman[245772]: 2025-10-02 12:18:50.798203185 +0000 UTC m=+0.891336009 container create 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:50 compute-1 ceph-mon[80926]: pgmap v1192: 305 pgs: 305 active+clean; 223 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 122 op/s
Oct 02 12:18:51 compute-1 systemd[1]: Started libpod-conmon-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope.
Oct 02 12:18:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:18:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dcfce4eabfae633f16d7a1060e2abf279c1e0a7770e612687fe926476f20b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:18:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:51.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:18:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:51.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:18:51 compute-1 podman[245772]: 2025-10-02 12:18:51.478549001 +0000 UTC m=+1.571681855 container init 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:51 compute-1 podman[245772]: 2025-10-02 12:18:51.486630506 +0000 UTC m=+1.579763300 container start 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:18:51 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : New worker (245793) forked
Oct 02 12:18:51 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : Loading success.
Oct 02 12:18:51 compute-1 nova_compute[230518]: 2025-10-02 12:18:51.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:52 compute-1 ceph-mon[80926]: pgmap v1193: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 133 op/s
Oct 02 12:18:52 compute-1 nova_compute[230518]: 2025-10-02 12:18:52.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:53.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:54 compute-1 nova_compute[230518]: 2025-10-02 12:18:54.269 2 DEBUG nova.compute.manager [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:54 compute-1 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG nova.compute.manager [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing instance network info cache due to event network-changed-02efafc4-ff2d-47ca-98bd-8e608e9980b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:18:54 compute-1 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:18:54 compute-1 nova_compute[230518]: 2025-10-02 12:18:54.270 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:18:54 compute-1 nova_compute[230518]: 2025-10-02 12:18:54.271 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Refreshing network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:18:54 compute-1 ceph-mon[80926]: pgmap v1194: 305 pgs: 305 active+clean; 235 MiB data, 494 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 133 op/s
Oct 02 12:18:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:55.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.433 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.434 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.435 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.436 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.437 2 INFO nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Terminating instance
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.439 2 DEBUG nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:18:55 compute-1 sudo[245802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:55 compute-1 sudo[245802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-1 sudo[245802]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-1 sudo[245827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:18:55 compute-1 sudo[245827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-1 sudo[245827]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-1 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct 02 12:18:55 compute-1 NetworkManager[44960]: <info>  [1759407535.7162] device (tap02efafc4-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00162|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00163|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00164|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 sudo[245854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:18:55 compute-1 sudo[245854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-1 sudo[245854]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:55 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000023.scope: Deactivated successfully.
Oct 02 12:18:55 compute-1 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000023.scope: Consumed 6.893s CPU time.
Oct 02 12:18:55 compute-1 systemd-machined[188247]: Machine qemu-19-instance-00000023 terminated.
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.817 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.819 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.820 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.821 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c76c1fd-06d0-464c-bd35-a03eec057c5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.821 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f namespace which is not needed anymore
Oct 02 12:18:55 compute-1 systemd-udevd[245861]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:18:55 compute-1 kernel: tap02efafc4-ff: entered promiscuous mode
Oct 02 12:18:55 compute-1 NetworkManager[44960]: <info>  [1759407535.8632] manager: (tap02efafc4-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Oct 02 12:18:55 compute-1 sudo[245884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00165|binding|INFO|Claiming lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 for this chassis.
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00166|binding|INFO|02efafc4-ff2d-47ca-98bd-8e608e9980b8: Claiming fa:16:3e:30:9c:4f 10.100.0.5
Oct 02 12:18:55 compute-1 kernel: tap02efafc4-ff (unregistering): left promiscuous mode
Oct 02 12:18:55 compute-1 sudo[245884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:18:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:55.888 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.898 2 INFO nova.virt.libvirt.driver [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Instance destroyed successfully.
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.899 2 DEBUG nova.objects.instance [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lazy-loading 'resources' on Instance uuid 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00167|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 ovn-installed in OVS
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00168|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 up in Southbound
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00169|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=1)
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00170|binding|INFO|Removing iface tap02efafc4-ff ovn-installed in OVS
Oct 02 12:18:55 compute-1 ovn_controller[129257]: 2025-10-02T12:18:55Z|00171|if_status|INFO|Not setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down as sb is readonly
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 nova_compute[230518]: 2025-10-02 12:18:55.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:55 compute-1 podman[245912]: 2025-10-02 12:18:55.994073215 +0000 UTC m=+0.116161471 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:18:56 compute-1 ovn_controller[129257]: 2025-10-02T12:18:56Z|00172|binding|INFO|Releasing lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 from this chassis (sb_readonly=0)
Oct 02 12:18:56 compute-1 ovn_controller[129257]: 2025-10-02T12:18:56Z|00173|binding|INFO|Setting lport 02efafc4-ff2d-47ca-98bd-8e608e9980b8 down in Southbound
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.043 2 DEBUG nova.virt.libvirt.vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:18:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-488412587',display_name='tempest-SecurityGroupsTestJSON-server-488412587',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-488412587',id=35,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:18:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f0bd0c6232b84d03a010ba8cf85bda46',ramdisk_id='',reservation_id='r-1xmq2s07',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1241678427',owner_user_name='tempest-SecurityGroupsTestJSON-1241678427-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:18:50Z,user_data=None,user_id='2ed8b6a2129742dfb3b8a0d9f044ac24',uuid=4cc8a5b1-c816-476e-9b8c-1d152d2f57c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.043 2 DEBUG nova.network.os_vif_util [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converting VIF {"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.044 2 DEBUG nova.network.os_vif_util [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.044 2 DEBUG os_vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02efafc4-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.052 2 INFO os_vif [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:9c:4f,bridge_name='br-int',has_traffic_filtering=True,id=02efafc4-ff2d-47ca-98bd-8e608e9980b8,network=Network(cec9cbfc-5dec-4f85-90c5-6104a054547f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02efafc4-ff')
Oct 02 12:18:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:56.067 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:9c:4f 10.100.0.5'], port_security=['fa:16:3e:30:9c:4f 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4cc8a5b1-c816-476e-9b8c-1d152d2f57c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f0bd0c6232b84d03a010ba8cf85bda46', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1b700ac5-01e6-4854-9f45-080d3952f68d 4436e9fc-9948-4a06-9ade-f6478eba2f67 5d4fd7ed-d165-4a2d-a20a-9d9629a026e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=67a70991-7ceb-4648-8df5-18a20c0a36a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=02efafc4-ff2d-47ca-98bd-8e608e9980b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.228 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG oslo_concurrency.lockutils [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.229 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.230 2 DEBUG nova.compute.manager [req-4292b7b0-fb29-41aa-a12f-cebc10e563ab req-d2326b82-c02b-425f-85b5-b1e020b6610d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : haproxy version is 2.8.14-c23fe91
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [NOTICE]   (245791) : path to executable is /usr/sbin/haproxy
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : Exiting Master process...
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : Exiting Master process...
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [ALERT]    (245791) : Current worker (245793) exited with code 143 (Terminated)
Oct 02 12:18:56 compute-1 neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f[245787]: [WARNING]  (245791) : All workers exited. Exiting... (0)
Oct 02 12:18:56 compute-1 systemd[1]: libpod-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope: Deactivated successfully.
Oct 02 12:18:56 compute-1 podman[245951]: 2025-10-02 12:18:56.25576907 +0000 UTC m=+0.270575066 container died 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.376 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updated VIF entry in instance network info cache for port 02efafc4-ff2d-47ca-98bd-8e608e9980b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.376 2 DEBUG nova.network.neutron [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [{"id": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "address": "fa:16:3e:30:9c:4f", "network": {"id": "cec9cbfc-5dec-4f85-90c5-6104a054547f", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-785559469-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f0bd0c6232b84d03a010ba8cf85bda46", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02efafc4-ff", "ovs_interfaceid": "02efafc4-ff2d-47ca-98bd-8e608e9980b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.419 2 DEBUG oslo_concurrency.lockutils [req-ea32a061-349a-414c-a6e8-ebb4c2ca11fe req-b9042025-806e-46ae-a889-d9adda6fbd15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:18:56 compute-1 podman[245921]: 2025-10-02 12:18:56.859848744 +0000 UTC m=+0.977629555 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:18:56 compute-1 ceph-mon[80926]: pgmap v1195: 305 pgs: 305 active+clean; 242 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Oct 02 12:18:56 compute-1 nova_compute[230518]: 2025-10-02 12:18:56.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:57 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548-userdata-shm.mount: Deactivated successfully.
Oct 02 12:18:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-d55dcfce4eabfae633f16d7a1060e2abf279c1e0a7770e612687fe926476f20b-merged.mount: Deactivated successfully.
Oct 02 12:18:57 compute-1 sudo[245884]: pam_unix(sudo:session): session closed for user root
Oct 02 12:18:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:18:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:18:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:57.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:57 compute-1 podman[245951]: 2025-10-02 12:18:57.613607672 +0000 UTC m=+1.628413638 container cleanup 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:18:57 compute-1 systemd[1]: libpod-conmon-8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548.scope: Deactivated successfully.
Oct 02 12:18:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.415 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.416 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.417 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.417 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.418 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.418 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.419 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.420 2 DEBUG oslo_concurrency.lockutils [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.420 2 DEBUG nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.420 2 WARNING nova.compute.manager [req-c80accf4-2c06-41d3-8a90-9583120ad200 req-4212cc5e-f41a-4ac1-9802-509900a356c5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.
Oct 02 12:18:58 compute-1 ceph-mon[80926]: pgmap v1196: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 157 op/s
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:18:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:18:58 compute-1 podman[246046]: 2025-10-02 12:18:58.788644025 +0000 UTC m=+1.148503438 container remove 8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d4cdc392-6508-42f9-9c8d-173f705620ae]: (4, ('Thu Oct  2 12:18:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548)\n8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548\nThu Oct  2 12:18:57 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f (8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548)\n8dbe6e19c311d6a3e557e7157ff3aa5b7d9c185641a1b2ff3c49ee6fc776a548\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.796 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[42db7bf5-e999-4469-b1c7-bdaeee7c6fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcec9cbfc-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:58 compute-1 kernel: tapcec9cbfc-50: left promiscuous mode
Oct 02 12:18:58 compute-1 nova_compute[230518]: 2025-10-02 12:18:58.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.826 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5ae29cd2-0df0-4ff3-b83a-3f309ddbc544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c10c89a2-2243-4584-ad76-c8b85032b56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e71e0dd-7d2f-423d-a230-03a27c215a9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.874 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4be838-3d27-449a-aff6-466a95e2448c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539281, 'reachable_time': 29181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246060, 'error': None, 'target': 'ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.877 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cec9cbfc-5dec-4f85-90c5-6104a054547f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.877 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a31c0275-660a-4167-98cf-24a3f083216f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.878 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis
Oct 02 12:18:58 compute-1 systemd[1]: run-netns-ovnmeta\x2dcec9cbfc\x2d5dec\x2d4f85\x2d90c5\x2d6104a054547f.mount: Deactivated successfully.
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[358a68e5-3e5c-4a18-ad84-52e8302c4e05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.880 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 02efafc4-ff2d-47ca-98bd-8e608e9980b8 in datapath cec9cbfc-5dec-4f85-90c5-6104a054547f unbound from our chassis
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.880 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cec9cbfc-5dec-4f85-90c5-6104a054547f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:18:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:18:58.881 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[86259736-5648-485c-856b-5734da8f5800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:18:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:18:59.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:18:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:18:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:18:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:18:59.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:00 compute-1 ceph-mon[80926]: pgmap v1197: 305 pgs: 305 active+clean; 247 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 100 op/s
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.691 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.692 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-unplugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG oslo_concurrency.lockutils [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 DEBUG nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] No waiting events found dispatching network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:19:00 compute-1 nova_compute[230518]: 2025-10-02 12:19:00.693 2 WARNING nova.compute.manager [req-12ae2daa-8d11-47b7-a1e9-35b8d46fcfa4 req-a622e0c1-0d7f-4b2c-a917-491eafed0cd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received unexpected event network-vif-plugged-02efafc4-ff2d-47ca-98bd-8e608e9980b8 for instance with vm_state active and task_state deleting.
Oct 02 12:19:01 compute-1 nova_compute[230518]: 2025-10-02 12:19:01.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:01.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2790133365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:01.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.241 2 INFO nova.virt.libvirt.driver [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deleting instance files /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_del
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.242 2 INFO nova.virt.libvirt.driver [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deletion of /var/lib/nova/instances/4cc8a5b1-c816-476e-9b8c-1d152d2f57c1_del complete
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.288 2 INFO nova.compute.manager [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 6.85 seconds to destroy the instance on the hypervisor.
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG oslo.service.loopingcall [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:19:02 compute-1 nova_compute[230518]: 2025-10-02 12:19:02.289 2 DEBUG nova.network.neutron [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:19:02 compute-1 ceph-mon[80926]: pgmap v1198: 305 pgs: 305 active+clean; 196 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 120 op/s
Oct 02 12:19:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:03.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.507 2 DEBUG nova.network.neutron [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.542 2 INFO nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Took 1.25 seconds to deallocate network for instance.
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.586 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.586 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.632 2 DEBUG oslo_concurrency.processutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:03 compute-1 nova_compute[230518]: 2025-10-02 12:19:03.687 2 DEBUG nova.compute.manager [req-aae77d3b-b86f-441c-854c-64d40eb2a223 req-59cbc810-d4d9-495a-979e-ca54af3ad8b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Received event network-vif-deleted-02efafc4-ff2d-47ca-98bd-8e608e9980b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:19:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439886864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.068 2 DEBUG oslo_concurrency.processutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.076 2 DEBUG nova.compute.provider_tree [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:04 compute-1 ceph-mon[80926]: pgmap v1199: 305 pgs: 305 active+clean; 165 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.097 2 DEBUG nova.scheduler.client.report [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.121 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.152 2 INFO nova.scheduler.client.report [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Deleted allocations for instance 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1
Oct 02 12:19:04 compute-1 nova_compute[230518]: 2025-10-02 12:19:04.243 2 DEBUG oslo_concurrency.lockutils [None req-cd4ebbc0-fc91-4da0-924d-05cbd2cd6d2e 2ed8b6a2129742dfb3b8a0d9f044ac24 f0bd0c6232b84d03a010ba8cf85bda46 - - default default] Lock "4cc8a5b1-c816-476e-9b8c-1d152d2f57c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:19:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:05.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:19:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/439886864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:05.716 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:05 compute-1 nova_compute[230518]: 2025-10-02 12:19:05.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:05.717 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:19:06 compute-1 nova_compute[230518]: 2025-10-02 12:19:06.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:06 compute-1 ceph-mon[80926]: pgmap v1200: 305 pgs: 305 active+clean; 140 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 106 op/s
Oct 02 12:19:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:19:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3932920309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:19:06 compute-1 podman[246088]: 2025-10-02 12:19:06.800715738 +0000 UTC m=+0.057379499 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:19:06 compute-1 podman[246089]: 2025-10-02 12:19:06.803751403 +0000 UTC m=+0.058680469 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:19:06 compute-1 nova_compute[230518]: 2025-10-02 12:19:06.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:07.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:08 compute-1 ceph-mon[80926]: pgmap v1201: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 400 KiB/s wr, 71 op/s
Oct 02 12:19:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:09.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:09.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:10 compute-1 nova_compute[230518]: 2025-10-02 12:19:10.896 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407535.8948076, 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:10 compute-1 nova_compute[230518]: 2025-10-02 12:19:10.896 2 INFO nova.compute.manager [-] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] VM Stopped (Lifecycle Event)
Oct 02 12:19:10 compute-1 ceph-mon[80926]: pgmap v1202: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:10 compute-1 nova_compute[230518]: 2025-10-02 12:19:10.947 2 DEBUG nova.compute.manager [None req-2d379b19-6c12-41c1-900d-7b53ba97d571 - - - - - -] [instance: 4cc8a5b1-c816-476e-9b8c-1d152d2f57c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:11 compute-1 nova_compute[230518]: 2025-10-02 12:19:11.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:12 compute-1 nova_compute[230518]: 2025-10-02 12:19:12.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:12 compute-1 ceph-mon[80926]: pgmap v1203: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 10 KiB/s wr, 49 op/s
Oct 02 12:19:12 compute-1 sudo[246127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:19:12 compute-1 sudo[246127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:12 compute-1 sudo[246127]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:12 compute-1 sudo[246152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:19:12 compute-1 sudo[246152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:19:12 compute-1 sudo[246152]: pam_unix(sudo:session): session closed for user root
Oct 02 12:19:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:19:14 compute-1 ceph-mon[80926]: pgmap v1204: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Oct 02 12:19:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/200331205' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:15.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:15.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:15.718 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:15 compute-1 ceph-mon[80926]: pgmap v1205: 305 pgs: 305 active+clean; 121 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.256 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.256 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.413 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.575 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.575 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.582 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:16 compute-1 nova_compute[230518]: 2025-10-02 12:19:16.583 2 INFO nova.compute.claims [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.036 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:17.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2889312449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.502 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.509 2 DEBUG nova.compute.provider_tree [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.579 2 DEBUG nova.scheduler.client.report [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.864 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:17 compute-1 nova_compute[230518]: 2025-10-02 12:19:17.865 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.135 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.135 2 DEBUG nova.network.neutron [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:19:18 compute-1 ceph-mon[80926]: pgmap v1206: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 852 B/s wr, 11 op/s
Oct 02 12:19:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2889312449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.473 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.710 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.762 2 DEBUG nova.network.neutron [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.762 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.980 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.981 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:18 compute-1 nova_compute[230518]: 2025-10-02 12:19:18.982 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating image(s)
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.009 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.034 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.058 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.064 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.126 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.127 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.128 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.128 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.151 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.153 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9e5efd35-44d3-4665-b150-1936a55a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:19.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3402607599' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3066289079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.473 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9e5efd35-44d3-4665-b150-1936a55a5460_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.538 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] resizing rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.651 2 DEBUG nova.objects.instance [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'migration_context' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.702 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.702 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Ensure instance console log exists: /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.703 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.704 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.709 2 WARNING nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.713 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.714 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.717 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.718 2 DEBUG nova.virt.libvirt.host [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.719 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.720 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.721 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.722 2 DEBUG nova.virt.hardware [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:19:19 compute-1 nova_compute[230518]: 2025-10-02 12:19:19.725 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2621466651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.226 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.252 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.256 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3271792050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.689 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.691 2 DEBUG nova.objects.instance [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:20 compute-1 ceph-mon[80926]: pgmap v1207: 305 pgs: 305 active+clean; 103 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 255 B/s wr, 7 op/s
Oct 02 12:19:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2501839545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2621466651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3271792050' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:20 compute-1 nova_compute[230518]: 2025-10-02 12:19:20.999 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <uuid>9e5efd35-44d3-4665-b150-1936a55a5460</uuid>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <name>instance-00000026</name>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerDiagnosticsTest-server-783920138</nova:name>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:19:19</nova:creationTime>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:user uuid="6d3f08c1bc2844488f5d3fcd1622dd59">tempest-ServerDiagnosticsTest-710539956-project-member</nova:user>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <nova:project uuid="aadd85819f8e46b8b2dc6524e9e6fbad">tempest-ServerDiagnosticsTest-710539956</nova:project>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <system>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="serial">9e5efd35-44d3-4665-b150-1936a55a5460</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="uuid">9e5efd35-44d3-4665-b150-1936a55a5460</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     </system>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <os>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </os>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <features>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </features>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:19:20 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/9e5efd35-44d3-4665-b150-1936a55a5460_disk">
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       </source>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:19:20 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:19:20 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:19:20 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/9e5efd35-44d3-4665-b150-1936a55a5460_disk.config">
Oct 02 12:19:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:19:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/console.log" append="off"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <video>
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     </video>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:19:21 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:19:21 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:19:21 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:19:21 compute-1 nova_compute[230518]: </domain>
Oct 02 12:19:21 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:19:21 compute-1 nova_compute[230518]: 2025-10-02 12:19:21.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:21.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:21.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:21 compute-1 nova_compute[230518]: 2025-10-02 12:19:21.875 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:21 compute-1 nova_compute[230518]: 2025-10-02 12:19:21.875 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:21 compute-1 nova_compute[230518]: 2025-10-02 12:19:21.876 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Using config drive
Oct 02 12:19:21 compute-1 nova_compute[230518]: 2025-10-02 12:19:21.911 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:22 compute-1 ceph-mon[80926]: pgmap v1208: 305 pgs: 305 active+clean; 77 MiB data, 401 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 1.0 MiB/s wr, 40 op/s
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.763 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Creating config drive at /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.770 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp374hv1vu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.906 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp374hv1vu" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.941 2 DEBUG nova.storage.rbd_utils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] rbd image 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:22 compute-1 nova_compute[230518]: 2025-10-02 12:19:22.946 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:23 compute-1 nova_compute[230518]: 2025-10-02 12:19:23.137 2 DEBUG oslo_concurrency.processutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config 9e5efd35-44d3-4665-b150-1936a55a5460_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:23 compute-1 nova_compute[230518]: 2025-10-02 12:19:23.138 2 INFO nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deleting local config drive /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460/disk.config because it was imported into RBD.
Oct 02 12:19:23 compute-1 systemd-machined[188247]: New machine qemu-20-instance-00000026.
Oct 02 12:19:23 compute-1 systemd[1]: Started Virtual Machine qemu-20-instance-00000026.
Oct 02 12:19:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:23.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:23.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.044 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407564.044375, 9e5efd35-44d3-4665-b150-1936a55a5460 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.046 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Resumed (Lifecycle Event)
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.049 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.049 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.053 2 INFO nova.virt.libvirt.driver [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance spawned successfully.
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.053 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.077 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.081 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.094 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.095 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.096 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.096 2 DEBUG nova.virt.libvirt.driver [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.135 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.135 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407564.0456746, 9e5efd35-44d3-4665-b150-1936a55a5460 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.136 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Started (Lifecycle Event)
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.167 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.170 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.176 2 INFO nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 5.19 seconds to spawn the instance on the hypervisor.
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.176 2 DEBUG nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.236 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:24 compute-1 nova_compute[230518]: 2025-10-02 12:19:24.283 2 INFO nova.compute.manager [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 7.74 seconds to build instance.
Oct 02 12:19:24 compute-1 ceph-mon[80926]: pgmap v1209: 305 pgs: 305 active+clean; 117 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 2.5 MiB/s wr, 76 op/s
Oct 02 12:19:25 compute-1 nova_compute[230518]: 2025-10-02 12:19:25.077 2 DEBUG oslo_concurrency.lockutils [None req-1d632131-0843-4ca5-b96c-0a79a81f5fc5 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:25.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:25.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.917 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:25.918 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:26 compute-1 nova_compute[230518]: 2025-10-02 12:19:26.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:26 compute-1 ceph-mon[80926]: pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 407 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:19:26 compute-1 podman[246544]: 2025-10-02 12:19:26.807153783 +0000 UTC m=+0.053587920 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:19:27 compute-1 nova_compute[230518]: 2025-10-02 12:19:27.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:27.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:27.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:27 compute-1 podman[246564]: 2025-10-02 12:19:27.815954257 +0000 UTC m=+0.074353343 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:19:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:28 compute-1 ceph-mon[80926]: pgmap v1211: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 187 op/s
Oct 02 12:19:28 compute-1 nova_compute[230518]: 2025-10-02 12:19:28.798 2 DEBUG nova.compute.manager [None req-3dafd338-59c0-4be9-8f20-ca68390c7257 6512335913754777991846ad7621947f 4acd8d33638648bf90e723d49a24f77e - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:28 compute-1 nova_compute[230518]: 2025-10-02 12:19:28.801 2 INFO nova.compute.manager [None req-3dafd338-59c0-4be9-8f20-ca68390c7257 6512335913754777991846ad7621947f 4acd8d33638648bf90e723d49a24f77e - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Retrieving diagnostics
Oct 02 12:19:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:29.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.994 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.995 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.995 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.996 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.996 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.997 2 INFO nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Terminating instance
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.998 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.999 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquired lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:19:29 compute-1 nova_compute[230518]: 2025-10-02 12:19:29.999 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:19:30 compute-1 ceph-mon[80926]: pgmap v1212: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Oct 02 12:19:30 compute-1 nova_compute[230518]: 2025-10-02 12:19:30.451 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:30 compute-1 nova_compute[230518]: 2025-10-02 12:19:30.995 2 DEBUG nova.network.neutron [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:31 compute-1 nova_compute[230518]: 2025-10-02 12:19:31.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:31 compute-1 nova_compute[230518]: 2025-10-02 12:19:31.259 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Releasing lock "refresh_cache-9e5efd35-44d3-4665-b150-1936a55a5460" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:19:31 compute-1 nova_compute[230518]: 2025-10-02 12:19:31.259 2 DEBUG nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:19:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:31.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:31.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:31 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000026.scope: Deactivated successfully.
Oct 02 12:19:31 compute-1 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000026.scope: Consumed 8.099s CPU time.
Oct 02 12:19:31 compute-1 systemd-machined[188247]: Machine qemu-20-instance-00000026 terminated.
Oct 02 12:19:31 compute-1 nova_compute[230518]: 2025-10-02 12:19:31.481 2 INFO nova.virt.libvirt.driver [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance destroyed successfully.
Oct 02 12:19:31 compute-1 nova_compute[230518]: 2025-10-02 12:19:31.482 2 DEBUG nova.objects.instance [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lazy-loading 'resources' on Instance uuid 9e5efd35-44d3-4665-b150-1936a55a5460 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:32 compute-1 nova_compute[230518]: 2025-10-02 12:19:32.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:33 compute-1 ceph-mon[80926]: pgmap v1213: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Oct 02 12:19:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.073 2 INFO nova.virt.libvirt.driver [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deleting instance files /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460_del
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.073 2 INFO nova.virt.libvirt.driver [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deletion of /var/lib/nova/instances/9e5efd35-44d3-4665-b150-1936a55a5460_del complete
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:34 compute-1 ceph-mon[80926]: pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.357 2 INFO nova.compute.manager [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 3.10 seconds to destroy the instance on the hypervisor.
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.357 2 DEBUG oslo.service.loopingcall [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.358 2 DEBUG nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.358 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.528 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.649 2 DEBUG nova.network.neutron [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:19:34 compute-1 nova_compute[230518]: 2025-10-02 12:19:34.802 2 INFO nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Took 0.44 seconds to deallocate network for instance.
Oct 02 12:19:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:19:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:35.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:19:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:35.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:36 compute-1 nova_compute[230518]: 2025-10-02 12:19:36.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:36 compute-1 nova_compute[230518]: 2025-10-02 12:19:36.112 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:36 compute-1 nova_compute[230518]: 2025-10-02 12:19:36.113 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:36 compute-1 ceph-mon[80926]: pgmap v1215: 305 pgs: 305 active+clean; 106 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 175 op/s
Oct 02 12:19:37 compute-1 nova_compute[230518]: 2025-10-02 12:19:37.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:37 compute-1 nova_compute[230518]: 2025-10-02 12:19:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:37 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 02 12:19:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:37.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:37 compute-1 podman[246613]: 2025-10-02 12:19:37.33238965 +0000 UTC m=+0.059986161 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:19:37 compute-1 podman[246614]: 2025-10-02 12:19:37.34192341 +0000 UTC m=+0.062929274 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.166 2 DEBUG oslo_concurrency.processutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1202750396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.679 2 DEBUG oslo_concurrency.processutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.686 2 DEBUG nova.compute.provider_tree [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:38 compute-1 nova_compute[230518]: 2025-10-02 12:19:38.995 2 DEBUG nova.scheduler.client.report [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.116 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.119 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:39 compute-1 ceph-mon[80926]: pgmap v1216: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 KiB/s wr, 172 op/s
Oct 02 12:19:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1202750396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.286 2 INFO nova.scheduler.client.report [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Deleted allocations for instance 9e5efd35-44d3-4665-b150-1936a55a5460
Oct 02 12:19:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.484 2 DEBUG oslo_concurrency.lockutils [None req-562d54b3-1685-468f-a5a2-87749e7e7950 6d3f08c1bc2844488f5d3fcd1622dd59 aadd85819f8e46b8b2dc6524e9e6fbad - - default default] Lock "9e5efd35-44d3-4665-b150-1936a55a5460" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3298273475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.554 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.734 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4752MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.854 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.855 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:19:39 compute-1 nova_compute[230518]: 2025-10-02 12:19:39.881 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3478166263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-1 nova_compute[230518]: 2025-10-02 12:19:40.341 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:40 compute-1 nova_compute[230518]: 2025-10-02 12:19:40.346 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:40 compute-1 nova_compute[230518]: 2025-10-02 12:19:40.396 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:40 compute-1 nova_compute[230518]: 2025-10-02 12:19:40.548 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:19:40 compute-1 nova_compute[230518]: 2025-10-02 12:19:40.549 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:40 compute-1 ceph-mon[80926]: pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1347628948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3298273475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/528161419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:19:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:41 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:41.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.544 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.545 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.590 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.619 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.619 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:41 compute-1 nova_compute[230518]: 2025-10-02 12:19:41.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-1 nova_compute[230518]: 2025-10-02 12:19:42.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:42 compute-1 nova_compute[230518]: 2025-10-02 12:19:42.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:19:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3478166263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2012743931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:43 compute-1 ceph-mon[80926]: pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 94 op/s
Oct 02 12:19:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/997822437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:19:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:43 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:43.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:44 compute-1 ceph-mon[80926]: pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 12:19:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3047511372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:46 compute-1 nova_compute[230518]: 2025-10-02 12:19:46.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:46 compute-1 nova_compute[230518]: 2025-10-02 12:19:46.481 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407571.478988, 9e5efd35-44d3-4665-b150-1936a55a5460 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:46 compute-1 nova_compute[230518]: 2025-10-02 12:19:46.481 2 INFO nova.compute.manager [-] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] VM Stopped (Lifecycle Event)
Oct 02 12:19:46 compute-1 nova_compute[230518]: 2025-10-02 12:19:46.511 2 DEBUG nova.compute.manager [None req-cee3fc9e-fab3-40dd-ad19-b53b5c4b8d3d - - - - - -] [instance: 9e5efd35-44d3-4665-b150-1936a55a5460] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:46 compute-1 ceph-mon[80926]: pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Oct 02 12:19:47 compute-1 nova_compute[230518]: 2025-10-02 12:19:47.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:47.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:47.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:48 compute-1 ceph-mon[80926]: pgmap v1221: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 KiB/s wr, 28 op/s
Oct 02 12:19:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:49.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:49 compute-1 ceph-mon[80926]: pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:49 compute-1 nova_compute[230518]: 2025-10-02 12:19:49.949 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:49 compute-1 nova_compute[230518]: 2025-10-02 12:19:49.950 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:49 compute-1 nova_compute[230518]: 2025-10-02 12:19:49.977 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.054 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.054 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.064 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.064 2 INFO nova.compute.claims [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.213 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1401379172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.662 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.668 2 DEBUG nova.compute.provider_tree [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.689 2 DEBUG nova.scheduler.client.report [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.715 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.715 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.817 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.842 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.856 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.978 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.979 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:50 compute-1 nova_compute[230518]: 2025-10-02 12:19:50.979 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.005 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.032 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1401379172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.064 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.068 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.139 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.140 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.141 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.141 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.165 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:51 compute-1 nova_compute[230518]: 2025-10-02 12:19:51.169 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:51.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:51.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.103 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.934s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.174 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.294 2 DEBUG nova.objects.instance [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.311 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.311 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.312 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.312 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.313 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.315 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.320 2 WARNING nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.328 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.329 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.336 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.337 2 DEBUG nova.virt.libvirt.host [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.338 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.338 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.339 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.340 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.341 2 DEBUG nova.virt.hardware [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.343 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:52 compute-1 ceph-mon[80926]: pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 02 12:19:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2702662155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840002178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.820 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.853 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:52 compute-1 nova_compute[230518]: 2025-10-02 12:19:52.858 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:19:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2080843043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:53.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.341 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.343 2 DEBUG nova.objects.instance [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.362 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <name>instance-00000027</name>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:19:52</nova:creationTime>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <system>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </system>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <os>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </os>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <features>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </features>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </source>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </source>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:19:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <video>
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </video>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:19:53 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:19:53 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:19:53 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:19:53 compute-1 nova_compute[230518]: </domain>
Oct 02 12:19:53 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.507 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.508 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.508 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive
Oct 02 12:19:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3840002178' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2080843043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.615 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.852 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.856 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgi86ip8x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:53 compute-1 nova_compute[230518]: 2025-10-02 12:19:53.989 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgi86ip8x" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:54 compute-1 nova_compute[230518]: 2025-10-02 12:19:54.020 2 DEBUG nova.storage.rbd_utils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:54 compute-1 nova_compute[230518]: 2025-10-02 12:19:54.024 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:54 compute-1 nova_compute[230518]: 2025-10-02 12:19:54.218 2 DEBUG oslo_concurrency.processutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:54 compute-1 nova_compute[230518]: 2025-10-02 12:19:54.219 2 INFO nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.
Oct 02 12:19:54 compute-1 systemd-machined[188247]: New machine qemu-21-instance-00000027.
Oct 02 12:19:54 compute-1 systemd[1]: Started Virtual Machine qemu-21-instance-00000027.
Oct 02 12:19:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:54.346 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:19:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:54.347 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:19:54 compute-1 nova_compute[230518]: 2025-10-02 12:19:54.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:54 compute-1 ceph-mon[80926]: pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 2 op/s
Oct 02 12:19:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/818748125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/442017125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:19:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:19:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:19:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:55.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.483 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407595.4834824, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.484 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.486 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.487 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.491 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.491 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.528 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.534 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.535 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.536 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.537 2 DEBUG nova.virt.libvirt.driver [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.541 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.578 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.578 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407595.4860685, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.579 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.613 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.619 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.625 2 INFO nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 4.65 seconds to spawn the instance on the hypervisor.
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.625 2 DEBUG nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.638 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.672 2 INFO nova.compute.manager [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 5.65 seconds to build instance.
Oct 02 12:19:55 compute-1 nova_compute[230518]: 2025-10-02 12:19:55.690 2 DEBUG oslo_concurrency.lockutils [None req-a85bcd00-3c04-4da1-874c-8c5a15e23a13 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1773807802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:56 compute-1 nova_compute[230518]: 2025-10-02 12:19:56.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:19:56.350 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:19:56 compute-1 ceph-mon[80926]: pgmap v1225: 305 pgs: 305 active+clean; 79 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Oct 02 12:19:57 compute-1 nova_compute[230518]: 2025-10-02 12:19:57.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:19:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:57.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:57 compute-1 podman[247086]: 2025-10-02 12:19:57.801445737 +0000 UTC m=+0.052356071 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:19:58 compute-1 ceph-mon[80926]: pgmap v1226: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:19:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.252 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.252 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.270 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.322 2 INFO nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Rebuilding instance
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.374 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.374 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.383 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.384 2 INFO nova.compute.claims [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.510 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.569 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.588 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.642 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_requests' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.665 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.678 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.689 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.702 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.705 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:19:58 compute-1 podman[247125]: 2025-10-02 12:19:58.849003683 +0000 UTC m=+0.104339428 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:19:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:19:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3945813759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.993 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:58 compute-1 nova_compute[230518]: 2025-10-02 12:19:58.999 2 DEBUG nova.compute.provider_tree [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.018 2 DEBUG nova.scheduler.client.report [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.042 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.042 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.089 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.090 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.106 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.124 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.232 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.233 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.234 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating image(s)
Oct 02 12:19:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3945813759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.257 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.301 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.325 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.329 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:19:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:19:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:19:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:19:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:19:59.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.354 2 DEBUG nova.policy [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.395 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.395 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.396 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.396 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.426 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:19:59 compute-1 nova_compute[230518]: 2025-10-02 12:19:59.433 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.066 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Successfully created port: 335bf3a1-e291-4896-a8b2-523eb372ebd6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.097 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.165 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:00 compute-1 ceph-mon[80926]: pgmap v1227: 305 pgs: 305 active+clean; 149 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 506 KiB/s rd, 4.3 MiB/s wr, 91 op/s
Oct 02 12:20:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/12375301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3795866088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.727 2 DEBUG nova.objects.instance [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.750 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.750 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Ensure instance console log exists: /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.751 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.954 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Successfully updated port: 335bf3a1-e291-4896-a8b2-523eb372ebd6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.987 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.988 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:00 compute-1 nova_compute[230518]: 2025-10-02 12:20:00.988 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:01 compute-1 nova_compute[230518]: 2025-10-02 12:20:01.081 2 DEBUG nova.compute.manager [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-changed-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:01 compute-1 nova_compute[230518]: 2025-10-02 12:20:01.082 2 DEBUG nova.compute.manager [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Refreshing instance network info cache due to event network-changed-335bf3a1-e291-4896-a8b2-523eb372ebd6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:20:01 compute-1 nova_compute[230518]: 2025-10-02 12:20:01.082 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:01 compute-1 nova_compute[230518]: 2025-10-02 12:20:01.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:01 compute-1 nova_compute[230518]: 2025-10-02 12:20:01.286 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:01.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1508859862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:02 compute-1 nova_compute[230518]: 2025-10-02 12:20:02.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:02 compute-1 ceph-mon[80926]: pgmap v1228: 305 pgs: 305 active+clean; 155 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 224 op/s
Oct 02 12:20:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.120 2 DEBUG nova.network.neutron [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.139 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.140 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance network_info: |[{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.140 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.141 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Refreshing network info cache for port 335bf3a1-e291-4896-a8b2-523eb372ebd6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.144 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start _get_guest_xml network_info=[{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.148 2 WARNING nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.152 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.153 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.156 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.156 2 DEBUG nova.virt.libvirt.host [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.158 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.159 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.160 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.161 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.161 2 DEBUG nova.virt.hardware [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.163 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:03.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/311085604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.619 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.651 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:03 compute-1 nova_compute[230518]: 2025-10-02 12:20:03.656 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1221744040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.170 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.171 2 DEBUG nova.virt.libvirt.vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList
,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:59Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.172 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.172 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.173 2 DEBUG nova.objects.instance [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.200 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <uuid>8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</uuid>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <name>instance-0000002a</name>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:name>tempest-ImagesTestJSON-server-1512000059</nova:name>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:20:03</nova:creationTime>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <nova:port uuid="335bf3a1-e291-4896-a8b2-523eb372ebd6">
Oct 02 12:20:04 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <system>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="serial">8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="uuid">8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </system>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <os>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </os>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <features>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </features>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk">
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config">
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:39:9c:b8"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <target dev="tap335bf3a1-e2"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/console.log" append="off"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <video>
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </video>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:20:04 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:20:04 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:20:04 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:20:04 compute-1 nova_compute[230518]: </domain>
Oct 02 12:20:04 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.202 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Preparing to wait for external event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.202 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.203 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.203 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.204 2 DEBUG nova.virt.libvirt.vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:19:59Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.204 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.205 2 DEBUG nova.network.os_vif_util [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.205 2 DEBUG os_vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.206 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.207 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.212 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap335bf3a1-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.213 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap335bf3a1-e2, col_values=(('external_ids', {'iface-id': '335bf3a1-e291-4896-a8b2-523eb372ebd6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:9c:b8', 'vm-uuid': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:04 compute-1 NetworkManager[44960]: <info>  [1759407604.2151] manager: (tap335bf3a1-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.225 2 INFO os_vif [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2')
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.284 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.284 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.285 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:39:9c:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.285 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Using config drive
Oct 02 12:20:04 compute-1 nova_compute[230518]: 2025-10-02 12:20:04.312 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:04 compute-1 ceph-mon[80926]: pgmap v1229: 305 pgs: 305 active+clean; 169 MiB data, 456 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.5 MiB/s wr, 277 op/s
Oct 02 12:20:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/311085604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1221744040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:05.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:20:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2792077436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:20:05 compute-1 sshd-session[247400]: banner exchange: Connection from 93.123.109.214 port 57288: invalid format
Oct 02 12:20:05 compute-1 nova_compute[230518]: 2025-10-02 12:20:05.818 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Creating config drive at /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config
Oct 02 12:20:05 compute-1 nova_compute[230518]: 2025-10-02 12:20:05.824 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp988o49aw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:05 compute-1 sshd-session[247404]: banner exchange: Connection from 93.123.109.214 port 57302: invalid format
Oct 02 12:20:05 compute-1 nova_compute[230518]: 2025-10-02 12:20:05.961 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp988o49aw" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:05 compute-1 nova_compute[230518]: 2025-10-02 12:20:05.999 2 DEBUG nova.storage.rbd_utils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.003 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.472 2 DEBUG oslo_concurrency.processutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.473 2 INFO nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deleting local config drive /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8/disk.config because it was imported into RBD.
Oct 02 12:20:06 compute-1 virtqemud[230067]: End of file while reading data: Input/output error
Oct 02 12:20:06 compute-1 virtqemud[230067]: End of file while reading data: Input/output error
Oct 02 12:20:06 compute-1 kernel: tap335bf3a1-e2: entered promiscuous mode
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.5399] manager: (tap335bf3a1-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 ovn_controller[129257]: 2025-10-02T12:20:06Z|00174|binding|INFO|Claiming lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 for this chassis.
Oct 02 12:20:06 compute-1 ovn_controller[129257]: 2025-10-02T12:20:06Z|00175|binding|INFO|335bf3a1-e291-4896-a8b2-523eb372ebd6: Claiming fa:16:3e:39:9c:b8 10.100.0.10
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.557 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.560 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.562 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:06 compute-1 systemd-udevd[247456]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49ee8b8a-0574-402f-9d74-2de9bd5141d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.581 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.586 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6ba4e9-644b-4916-a5b6-b626e3d4a788]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2129728c-7bd2-4104-afd2-64eefe59fe9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 systemd-machined[188247]: New machine qemu-22-instance-0000002a.
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.6010] device (tap335bf3a1-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.6025] device (tap335bf3a1-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.603 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3e06c6df-62e9-4023-969b-7f82ee5611f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 systemd[1]: Started Virtual Machine qemu-22-instance-0000002a.
Oct 02 12:20:06 compute-1 ovn_controller[129257]: 2025-10-02T12:20:06Z|00176|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 ovn-installed in OVS
Oct 02 12:20:06 compute-1 ovn_controller[129257]: 2025-10-02T12:20:06Z|00177|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 up in Southbound
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.656 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0b5feb-68c9-4efb-bd32-3e17bfd4da75]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.683 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc17429-5225-4e07-bbf8-8d94bcad76fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.689 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de1b4535-ef4d-41b5-9ab2-95ee079acfd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.6910] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Oct 02 12:20:06 compute-1 ceph-mon[80926]: pgmap v1230: 305 pgs: 305 active+clean; 181 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.1 MiB/s wr, 293 op/s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.727 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a052b617-287d-4ab3-af32-dbd04bdc9975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.729 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[81046983-4baa-4ad4-bacf-26ff44741df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.7527] device (tapd68ff9e0-a0): carrier: link connected
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.758 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f604bd8d-1173-42aa-990d-3a0654d2880b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91f68845-7ea1-4ac0-8f20-0f36f5c0c269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547031, 'reachable_time': 22913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247490, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.791 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd83b86-8054-4430-8f3f-81367c28d109]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547031, 'tstamp': 547031}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247491, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cfac97-b839-4435-8ba4-06610d1c0a67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547031, 'reachable_time': 22913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247492, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.839 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e084cbef-0454-4112-b5d3-785f8c3bc30c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.862 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updated VIF entry in instance network info cache for port 335bf3a1-e291-4896-a8b2-523eb372ebd6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.862 2 DEBUG nova.network.neutron [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [{"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.892 2 DEBUG oslo_concurrency.lockutils [req-0d5215d0-e372-4527-86fb-cd01ebaddcfd req-07ce0b3f-0703-42b0-9d9a-a3a98a4c1433 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d27444-8323-46ad-a646-45efbb3d716f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.897 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.898 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.898 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct 02 12:20:06 compute-1 NetworkManager[44960]: <info>  [1759407606.9008] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.903 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 ovn_controller[129257]: 2025-10-02T12:20:06Z|00178|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 nova_compute[230518]: 2025-10-02 12:20:06.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.923 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.924 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a57268aa-3e7e-4a3f-876a-57faa6a7a9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.925 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:20:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:06.926 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.049 2 DEBUG nova.compute.manager [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.049 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG oslo_concurrency.lockutils [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.050 2 DEBUG nova.compute.manager [req-aac2f481-82c1-4175-a72a-125853f476cc req-8510c49e-5f0d-4593-b561-74c9cd640882 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Processing event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:20:07 compute-1 podman[247543]: 2025-10-02 12:20:07.31126748 +0000 UTC m=+0.067184807 container create 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:20:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:07.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:20:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:07.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:20:07 compute-1 systemd[1]: Started libpod-conmon-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope.
Oct 02 12:20:07 compute-1 podman[247543]: 2025-10-02 12:20:07.267513291 +0000 UTC m=+0.023430638 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:20:07 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:20:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0694d206f98a051cefdbfb537752832811b123300f6179fbe3f8a35b3a48a40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:20:07 compute-1 podman[247543]: 2025-10-02 12:20:07.417753386 +0000 UTC m=+0.173670733 container init 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:20:07 compute-1 podman[247543]: 2025-10-02 12:20:07.425581423 +0000 UTC m=+0.181498760 container start 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:20:07 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : New worker (247624) forked
Oct 02 12:20:07 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : Loading success.
Oct 02 12:20:07 compute-1 podman[247560]: 2025-10-02 12:20:07.457832508 +0000 UTC m=+0.087055533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:20:07 compute-1 podman[247568]: 2025-10-02 12:20:07.458545211 +0000 UTC m=+0.082371116 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.935 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.9349484, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.935 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Started (Lifecycle Event)
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.937 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.941 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.945 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance spawned successfully.
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.945 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.968 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.974 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.979 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.979 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.980 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.980 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.981 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:07 compute-1 nova_compute[230518]: 2025-10-02 12:20:07.981 2 DEBUG nova.virt.libvirt.driver [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.011 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.011 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.935077, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.012 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Paused (Lifecycle Event)
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.039 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.042 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407607.9405236, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.042 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Resumed (Lifecycle Event)
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.049 2 INFO nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 8.82 seconds to spawn the instance on the hypervisor.
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.050 2 DEBUG nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.082 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.085 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.115 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.140 2 INFO nova.compute.manager [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 9.81 seconds to build instance.
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.161 2 DEBUG oslo_concurrency.lockutils [None req-03cb5b56-40c8-4dc8-ae26-9669c52d07f7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:08 compute-1 ceph-mon[80926]: pgmap v1231: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 291 op/s
Oct 02 12:20:08 compute-1 nova_compute[230518]: 2025-10-02 12:20:08.763 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.219 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.220 2 DEBUG oslo_concurrency.lockutils [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.220 2 DEBUG nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.220 2 WARNING nova.compute.manager [req-98c36b6d-7907-4ce8-948e-5e3f8841df91 req-22f5149c-c6da-4317-8053-c16153dab5c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received unexpected event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with vm_state active and task_state None.
Oct 02 12:20:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:09.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.718 2 INFO nova.compute.manager [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Pausing
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.719 2 DEBUG nova.objects.instance [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'flavor' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.760 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407609.760153, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.760 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Paused (Lifecycle Event)
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.762 2 DEBUG nova.compute.manager [None req-ce4f0af3-22ed-4d73-ad5b-3699cfa2f252 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.818 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.821 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:09 compute-1 nova_compute[230518]: 2025-10-02 12:20:09.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:20:10 compute-1 ceph-mon[80926]: pgmap v1232: 305 pgs: 305 active+clean; 181 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 243 op/s
Oct 02 12:20:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:12 compute-1 ceph-mon[80926]: pgmap v1233: 305 pgs: 305 active+clean; 192 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.8 MiB/s wr, 331 op/s
Oct 02 12:20:12 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 02 12:20:12 compute-1 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000027.scope: Consumed 14.110s CPU time.
Oct 02 12:20:12 compute-1 systemd-machined[188247]: Machine qemu-21-instance-00000027 terminated.
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.792 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance shutdown successfully after 14 seconds.
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.797 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.801 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.821 2 DEBUG nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.906 2 INFO nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] instance snapshotting
Oct 02 12:20:12 compute-1 nova_compute[230518]: 2025-10-02 12:20:12.907 2 WARNING nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] trying to snapshot a non-running instance: (state: 3 expected: 1)
Oct 02 12:20:13 compute-1 sudo[247656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:13 compute-1 sudo[247656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-1 sudo[247656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:13 compute-1 sudo[247681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:13 compute-1 sudo[247681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-1 sudo[247681]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-1 sudo[247707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:13 compute-1 sudo[247707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-1 sudo[247707]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:13 compute-1 sudo[247732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:20:13 compute-1 sudo[247732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:13.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.361 2 INFO nova.virt.libvirt.driver [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Beginning live snapshot process
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.564 2 DEBUG nova.virt.libvirt.imagebackend [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.609 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.610 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete
Oct 02 12:20:13 compute-1 podman[247861]: 2025-10-02 12:20:13.716412314 +0000 UTC m=+0.075672796 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.759 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.760 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.785 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:13 compute-1 podman[247861]: 2025-10-02 12:20:13.81404401 +0000 UTC m=+0.173304492 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.827 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.864 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.872 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.873 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:13 compute-1 nova_compute[230518]: 2025-10-02 12:20:13.879 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(4f69a8e44b9541c089f1a91e2f5a3282) on rbd image(8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:20:14 compute-1 nova_compute[230518]: 2025-10-02 12:20:14.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:14 compute-1 nova_compute[230518]: 2025-10-02 12:20:14.220 2 DEBUG nova.virt.libvirt.imagebackend [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/52ef509e-0e22-464e-93c9-3ddcf574cd64/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:20:14 compute-1 sudo[247732]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:14 compute-1 sudo[248053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:14 compute-1 sudo[248053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:14 compute-1 sudo[248053]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:14 compute-1 sudo[248078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:20:14 compute-1 sudo[248078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:14 compute-1 sudo[248078]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:14 compute-1 sudo[248103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:14 compute-1 sudo[248103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:14 compute-1 sudo[248103]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:14 compute-1 sudo[248128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:20:14 compute-1 sudo[248128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e170 e170: 3 total, 3 up, 3 in
Oct 02 12:20:14 compute-1 ceph-mon[80926]: pgmap v1234: 305 pgs: 305 active+clean; 210 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.7 MiB/s wr, 264 op/s
Oct 02 12:20:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2996451535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.054 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk@4f69a8e44b9541c089f1a91e2f5a3282 to images/15e18cef-aa4b-424b-82b1-3a0edfa62ef7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:20:15 compute-1 sudo[248128]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:15.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:15.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.401 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/15e18cef-aa4b-424b-82b1-3a0edfa62ef7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.480 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.563 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.564 2 DEBUG nova.virt.images [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] 52ef509e-0e22-464e-93c9-3ddcf574cd64 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.576 2 DEBUG nova.privsep.utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.576 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.767 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(4f69a8e44b9541c089f1a91e2f5a3282) on rbd image(8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.809 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.part /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.813 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.888 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.890 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:15 compute-1 ceph-mon[80926]: osdmap e170: 3 total, 3 up, 3 in
Oct 02 12:20:15 compute-1 ceph-mon[80926]: pgmap v1236: 305 pgs: 305 active+clean; 185 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 261 op/s
Oct 02 12:20:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/956283661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.918 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:15 compute-1 nova_compute[230518]: 2025-10-02 12:20:15.925 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.296 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.365 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.523 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.523 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.524 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.526 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.530 2 WARNING nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.537 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.537 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.540 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.540 2 DEBUG nova.virt.libvirt.host [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.541 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.542 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.543 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.544 2 DEBUG nova.virt.hardware [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.544 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.564 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e171 e171: 3 total, 3 up, 3 in
Oct 02 12:20:16 compute-1 nova_compute[230518]: 2025-10-02 12:20:16.923 2 DEBUG nova.storage.rbd_utils [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(15e18cef-aa4b-424b-82b1-3a0edfa62ef7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:20:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/405233859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.021 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.048 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.052 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2270279448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:20:17 compute-1 ceph-mon[80926]: osdmap e171: 3 total, 3 up, 3 in
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:17.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:17.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3353938704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.614 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.617 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <name>instance-00000027</name>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:20:16</nova:creationTime>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <system>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </system>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <os>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </os>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <features>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </features>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:17 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <video>
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </video>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:20:17 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:20:17 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:20:17 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:20:17 compute-1 nova_compute[230518]: </domain>
Oct 02 12:20:17 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.684 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.684 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.685 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.711 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.731 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.757 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'keypairs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e172 e172: 3 total, 3 up, 3 in
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.960 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config
Oct 02 12:20:17 compute-1 nova_compute[230518]: 2025-10-02 12:20:17.964 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps64v96zx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:18 compute-1 nova_compute[230518]: 2025-10-02 12:20:18.094 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps64v96zx" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:18 compute-1 nova_compute[230518]: 2025-10-02 12:20:18.122 2 DEBUG nova.storage.rbd_utils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:18 compute-1 nova_compute[230518]: 2025-10-02 12:20:18.126 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:18 compute-1 ceph-mon[80926]: pgmap v1238: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.7 MiB/s wr, 395 op/s
Oct 02 12:20:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/405233859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3613112588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3353938704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:18 compute-1 ceph-mon[80926]: osdmap e172: 3 total, 3 up, 3 in
Oct 02 12:20:18 compute-1 nova_compute[230518]: 2025-10-02 12:20:18.351 2 DEBUG oslo_concurrency.processutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:18 compute-1 nova_compute[230518]: 2025-10-02 12:20:18.352 2 INFO nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.
Oct 02 12:20:18 compute-1 systemd-machined[188247]: New machine qemu-23-instance-00000027.
Oct 02 12:20:18 compute-1 systemd[1]: Started Virtual Machine qemu-23-instance-00000027.
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.267 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.267 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407619.2667367, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.268 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.270 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.271 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.274 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.275 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.293 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.299 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.303 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.303 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.304 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.304 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.305 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.306 2 DEBUG nova.virt.libvirt.driver [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.319 2 INFO nova.virt.libvirt.driver [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Snapshot image upload complete
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.320 2 INFO nova.compute.manager [None req-c012d3d0-4c9b-4c33-a407-c07227073d0e afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 6.41 seconds to snapshot the instance on the hypervisor.
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.337 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.338 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407619.2676768, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.338 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)
Oct 02 12:20:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:19.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.374 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.377 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.404 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.409 2 DEBUG nova.compute.manager [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.460 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.461 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.461 2 DEBUG nova.objects.instance [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:20:19 compute-1 nova_compute[230518]: 2025-10-02 12:20:19.515 2 DEBUG oslo_concurrency.lockutils [None req-0d430e25-2dc0-469b-90e0-31efaac4f226 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:20 compute-1 ceph-mon[80926]: pgmap v1240: 305 pgs: 305 active+clean; 179 MiB data, 458 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:20:20 compute-1 nova_compute[230518]: 2025-10-02 12:20:20.786 2 INFO nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Rebuilding instance
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.108 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e173 e173: 3 total, 3 up, 3 in
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.284 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:21.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.376 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.391 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.420 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.521 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.551 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:20:21 compute-1 nova_compute[230518]: 2025-10-02 12:20:21.554 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.094 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.095 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.096 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.097 2 INFO nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Terminating instance
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.098 2 DEBUG nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:20:22 compute-1 kernel: tap335bf3a1-e2 (unregistering): left promiscuous mode
Oct 02 12:20:22 compute-1 NetworkManager[44960]: <info>  [1759407622.1480] device (tap335bf3a1-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00179|binding|INFO|Releasing lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 from this chassis (sb_readonly=0)
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00180|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 down in Southbound
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00181|binding|INFO|Removing iface tap335bf3a1-e2 ovn-installed in OVS
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.200 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.201 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.202 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.203 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[519e5f5d-090a-4ac9-bcf6-ff341ef5c43b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.203 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore
Oct 02 12:20:22 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct 02 12:20:22 compute-1 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d0000002a.scope: Consumed 2.886s CPU time.
Oct 02 12:20:22 compute-1 systemd-machined[188247]: Machine qemu-22-instance-0000002a terminated.
Oct 02 12:20:22 compute-1 ceph-mon[80926]: pgmap v1241: 305 pgs: 305 active+clean; 227 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 445 op/s
Oct 02 12:20:22 compute-1 ceph-mon[80926]: osdmap e173: 3 total, 3 up, 3 in
Oct 02 12:20:22 compute-1 kernel: tap335bf3a1-e2: entered promiscuous mode
Oct 02 12:20:22 compute-1 NetworkManager[44960]: <info>  [1759407622.3194] manager: (tap335bf3a1-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/82)
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00182|binding|INFO|Claiming lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 for this chassis.
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00183|binding|INFO|335bf3a1-e291-4896-a8b2-523eb372ebd6: Claiming fa:16:3e:39:9c:b8 10.100.0.10
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 kernel: tap335bf3a1-e2 (unregistering): left promiscuous mode
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00184|binding|INFO|Setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 ovn-installed in OVS
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00185|if_status|INFO|Dropped 6 log messages in last 86 seconds (most recently, 86 seconds ago) due to excessive rate
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00186|if_status|INFO|Not setting lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 down as sb is readonly
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 ovn_controller[129257]: 2025-10-02T12:20:22Z|00187|binding|INFO|Releasing lport 335bf3a1-e291-4896-a8b2-523eb372ebd6 from this chassis (sb_readonly=0)
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.353 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.360 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Instance destroyed successfully.
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.361 2 DEBUG nova.objects.instance [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : haproxy version is 2.8.14-c23fe91
Oct 02 12:20:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [NOTICE]   (247611) : path to executable is /usr/sbin/haproxy
Oct 02 12:20:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [WARNING]  (247611) : Exiting Master process...
Oct 02 12:20:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [ALERT]    (247611) : Current worker (247624) exited with code 143 (Terminated)
Oct 02 12:20:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[247566]: [WARNING]  (247611) : All workers exited. Exiting... (0)
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 systemd[1]: libpod-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope: Deactivated successfully.
Oct 02 12:20:22 compute-1 conmon[247566]: conmon 6a407f851b886758ee1f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope/container/memory.events
Oct 02 12:20:22 compute-1 podman[248601]: 2025-10-02 12:20:22.38366816 +0000 UTC m=+0.078283448 container died 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:20:22 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc-userdata-shm.mount: Deactivated successfully.
Oct 02 12:20:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-e0694d206f98a051cefdbfb537752832811b123300f6179fbe3f8a35b3a48a40-merged.mount: Deactivated successfully.
Oct 02 12:20:22 compute-1 podman[248601]: 2025-10-02 12:20:22.432043804 +0000 UTC m=+0.126659092 container cleanup 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:20:22 compute-1 systemd[1]: libpod-conmon-6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc.scope: Deactivated successfully.
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.453 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:9c:b8 10.100.0.10'], port_security=['fa:16:3e:39:9c:b8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=335bf3a1-e291-4896-a8b2-523eb372ebd6) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.458 2 DEBUG nova.virt.libvirt.vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:19:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1512000059',display_name='tempest-ImagesTestJSON-server-1512000059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-1512000059',id=42,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:20:08Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0z7xplik',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:20:19Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.459 2 DEBUG nova.network.os_vif_util [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "address": "fa:16:3e:39:9c:b8", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap335bf3a1-e2", "ovs_interfaceid": "335bf3a1-e291-4896-a8b2-523eb372ebd6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.460 2 DEBUG nova.network.os_vif_util [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.460 2 DEBUG os_vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap335bf3a1-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:22 compute-1 podman[248640]: 2025-10-02 12:20:22.501752871 +0000 UTC m=+0.046270899 container remove 6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.513 2 INFO os_vif [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:9c:b8,bridge_name='br-int',has_traffic_filtering=True,id=335bf3a1-e291-4896-a8b2-523eb372ebd6,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap335bf3a1-e2')
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.513 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e73f2c28-40b5-4472-a977-ae5c269c1b48]: (4, ('Thu Oct  2 12:20:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc)\n6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc\nThu Oct  2 12:20:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc)\n6a407f851b886758ee1fe5a33f6d2c968930f5bd0b09dc1db69abe4af5aa9ffc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.514 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6edc6f4b-39af-4ebd-aff5-db77a919268d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.515 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:20:22 compute-1 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.535 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af132c-75dc-48f8-9776-2f3ccdd37936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[52c6fac4-772b-4af5-9d1d-3ead9defefc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.568 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb08769-7661-41be-bf0f-1f85054d9115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76c790db-cbf5-49e1-96f6-bc246e181553]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547023, 'reachable_time': 33919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248671, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.594 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.594 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[75eff08a-52b7-4b26-b7c8-6117ef1b2309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.597 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.597 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d9c781-0a50-49cc-903e-fb0ba46542e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.598 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 335bf3a1-e291-4896-a8b2-523eb372ebd6 in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.599 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:20:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:22.599 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7b07b1-7f19-402b-a161-b1feaf2e9bd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.637 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.638 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.638 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG oslo_concurrency.lockutils [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:22 compute-1 nova_compute[230518]: 2025-10-02 12:20:22.639 2 DEBUG nova.compute.manager [req-a616b931-b5dd-4442-b96e-0149fe94ec6e req-f6c13a23-dd35-4bd9-95c2-901c4e17dc5d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-unplugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:20:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.114 2 INFO nova.virt.libvirt.driver [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deleting instance files /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_del
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.115 2 INFO nova.virt.libvirt.driver [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deletion of /var/lib/nova/instances/8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8_del complete
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.220 2 INFO nova.compute.manager [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 1.12 seconds to destroy the instance on the hypervisor.
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.221 2 DEBUG oslo.service.loopingcall [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.221 2 DEBUG nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:20:23 compute-1 nova_compute[230518]: 2025-10-02 12:20:23.222 2 DEBUG nova.network.neutron [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:20:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:23.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:23 compute-1 sudo[248675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:20:23 compute-1 sudo[248675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:23 compute-1 sudo[248675]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:23 compute-1 sudo[248700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:20:23 compute-1 sudo[248700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:20:23 compute-1 sudo[248700]: pam_unix(sudo:session): session closed for user root
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.206 2 DEBUG nova.network.neutron [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.297 2 DEBUG nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-deleted-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.298 2 INFO nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Neutron deleted interface 335bf3a1-e291-4896-a8b2-523eb372ebd6; detaching it from the instance and deleting it from the info cache
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.298 2 DEBUG nova.network.neutron [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.301 2 INFO nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Took 1.08 seconds to deallocate network for instance.
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.334 2 DEBUG nova.compute.manager [req-f5c77664-c2d4-4410-a43d-80730dcdb90d req-131b4990-59af-4c86-9078-3b9830ccfdf9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Detach interface failed, port_id=335bf3a1-e291-4896-a8b2-523eb372ebd6, reason: Instance 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.370 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.370 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.388 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.405 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.406 2 DEBUG nova.compute.provider_tree [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.421 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.445 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.502 2 DEBUG oslo_concurrency.processutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:24 compute-1 ceph-mon[80926]: pgmap v1243: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 227 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 8.8 MiB/s rd, 6.0 MiB/s wr, 428 op/s
Oct 02 12:20:24 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:24 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.726 2 DEBUG nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.728 2 DEBUG oslo_concurrency.lockutils [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.729 2 DEBUG nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] No waiting events found dispatching network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.729 2 WARNING nova.compute.manager [req-e3b0fa3e-f39c-4cd4-812d-5c0605982666 req-48af042c-dc46-40ee-b27c-a53a6cbde903 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Received unexpected event network-vif-plugged-335bf3a1-e291-4896-a8b2-523eb372ebd6 for instance with vm_state deleted and task_state None.
Oct 02 12:20:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/13144440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.973 2 DEBUG oslo_concurrency.processutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:24 compute-1 nova_compute[230518]: 2025-10-02 12:20:24.980 2 DEBUG nova.compute.provider_tree [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:25 compute-1 nova_compute[230518]: 2025-10-02 12:20:25.003 2 DEBUG nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:25 compute-1 nova_compute[230518]: 2025-10-02 12:20:25.027 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:25 compute-1 nova_compute[230518]: 2025-10-02 12:20:25.051 2 INFO nova.scheduler.client.report [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8
Oct 02 12:20:25 compute-1 nova_compute[230518]: 2025-10-02 12:20:25.115 2 DEBUG oslo_concurrency.lockutils [None req-bc68a456-d369-4ae4-a719-f545042674f2 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e174 e174: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:25.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/13144440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:25 compute-1 ceph-mon[80926]: osdmap e174: 3 total, 3 up, 3 in
Oct 02 12:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:20:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:26 compute-1 ceph-mon[80926]: pgmap v1244: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 212 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.6 MiB/s wr, 370 op/s
Oct 02 12:20:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/813570305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:27 compute-1 nova_compute[230518]: 2025-10-02 12:20:27.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:27.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:27.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:27 compute-1 nova_compute[230518]: 2025-10-02 12:20:27.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:28 compute-1 ceph-mon[80926]: pgmap v1246: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 413 op/s
Oct 02 12:20:28 compute-1 podman[248747]: 2025-10-02 12:20:28.816320288 +0000 UTC m=+0.063854752 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:20:29 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct 02 12:20:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:29.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:20:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:20:29 compute-1 podman[248765]: 2025-10-02 12:20:29.855418649 +0000 UTC m=+0.105224207 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 12:20:30 compute-1 ceph-mon[80926]: pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 22 KiB/s wr, 201 op/s
Oct 02 12:20:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e175 e175: 3 total, 3 up, 3 in
Oct 02 12:20:31 compute-1 nova_compute[230518]: 2025-10-02 12:20:31.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:31 compute-1 nova_compute[230518]: 2025-10-02 12:20:31.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:20:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:31 compute-1 nova_compute[230518]: 2025-10-02 12:20:31.604 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/667482974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:31 compute-1 ceph-mon[80926]: osdmap e175: 3 total, 3 up, 3 in
Oct 02 12:20:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2430916388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:32 compute-1 nova_compute[230518]: 2025-10-02 12:20:32.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:32 compute-1 nova_compute[230518]: 2025-10-02 12:20:32.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:33 compute-1 ceph-mon[80926]: pgmap v1249: 305 pgs: 305 active+clean; 191 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 162 op/s
Oct 02 12:20:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:33.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:34 compute-1 ceph-mon[80926]: pgmap v1250: 305 pgs: 305 active+clean; 205 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 478 KiB/s rd, 6.2 MiB/s wr, 147 op/s
Oct 02 12:20:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:35.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:20:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:35.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:20:36 compute-1 nova_compute[230518]: 2025-10-02 12:20:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:36 compute-1 ceph-mon[80926]: pgmap v1251: 305 pgs: 305 active+clean; 236 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 801 KiB/s rd, 6.5 MiB/s wr, 198 op/s
Oct 02 12:20:37 compute-1 nova_compute[230518]: 2025-10-02 12:20:37.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:37 compute-1 nova_compute[230518]: 2025-10-02 12:20:37.341 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407622.3388286, 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:37 compute-1 nova_compute[230518]: 2025-10-02 12:20:37.342 2 INFO nova.compute.manager [-] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] VM Stopped (Lifecycle Event)
Oct 02 12:20:37 compute-1 nova_compute[230518]: 2025-10-02 12:20:37.378 2 DEBUG nova.compute.manager [None req-e579a07e-ad87-44a3-a525-40f7e69dbe81 - - - - - -] [instance: 8eb476b0-ae5f-44b6-bac9-5a6ade5b24b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:37.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:37 compute-1 nova_compute[230518]: 2025-10-02 12:20:37.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:37 compute-1 podman[248792]: 2025-10-02 12:20:37.803270267 +0000 UTC m=+0.053918099 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 12:20:37 compute-1 podman[248791]: 2025-10-02 12:20:37.803641739 +0000 UTC m=+0.054312452 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:20:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e176 e176: 3 total, 3 up, 3 in
Oct 02 12:20:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:38 compute-1 ceph-mon[80926]: pgmap v1252: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 7.3 MiB/s wr, 194 op/s
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.137 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.137 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.138 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.138 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.222 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.222 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.223 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.223 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:20:39 compute-1 nova_compute[230518]: 2025-10-02 12:20:39.224 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:39 compute-1 ceph-mon[80926]: osdmap e176: 3 total, 3 up, 3 in
Oct 02 12:20:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:39.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3271158754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.073 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.450 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.451 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.591 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4520MB free_disk=20.876529693603516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:40 compute-1 ceph-mon[80926]: pgmap v1254: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 4.1 MiB/s wr, 161 op/s
Oct 02 12:20:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3271158754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1887875243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.838 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:20:40 compute-1 nova_compute[230518]: 2025-10-02 12:20:40.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.067 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:41.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:41.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:20:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/672575832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.560 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.567 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.583 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.606 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:20:41 compute-1 nova_compute[230518]: 2025-10-02 12:20:41.607 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/672575832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3085986660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.521 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.522 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.522 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:20:42 compute-1 nova_compute[230518]: 2025-10-02 12:20:42.647 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:20:42 compute-1 ceph-mon[80926]: pgmap v1255: 305 pgs: 305 active+clean; 246 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:20:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e177 e177: 3 total, 3 up, 3 in
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:20:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.141 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:43.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.482 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:43 compute-1 nova_compute[230518]: 2025-10-02 12:20:43.952 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:44 compute-1 nova_compute[230518]: 2025-10-02 12:20:44.002 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:44 compute-1 nova_compute[230518]: 2025-10-02 12:20:44.002 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:20:44 compute-1 nova_compute[230518]: 2025-10-02 12:20:44.003 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:44 compute-1 ceph-mon[80926]: pgmap v1256: 305 pgs: 305 active+clean; 252 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Oct 02 12:20:44 compute-1 ceph-mon[80926]: osdmap e177: 3 total, 3 up, 3 in
Oct 02 12:20:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/206873957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:44 compute-1 nova_compute[230518]: 2025-10-02 12:20:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e178 e178: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 02 12:20:45 compute-1 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000027.scope: Consumed 13.815s CPU time.
Oct 02 12:20:45 compute-1 systemd-machined[188247]: Machine qemu-23-instance-00000027 terminated.
Oct 02 12:20:45 compute-1 ceph-mon[80926]: osdmap e178: 3 total, 3 up, 3 in
Oct 02 12:20:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:45.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:45 compute-1 nova_compute[230518]: 2025-10-02 12:20:45.660 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance shutdown successfully after 24 seconds.
Oct 02 12:20:45 compute-1 nova_compute[230518]: 2025-10-02 12:20:45.668 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct 02 12:20:45 compute-1 nova_compute[230518]: 2025-10-02 12:20:45.674 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: pgmap v1259: 305 pgs: 305 active+clean; 283 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 3.8 MiB/s wr, 201 op/s
Oct 02 12:20:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3549998780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.164888) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646164917, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2669, "num_deletes": 504, "total_data_size": 5540734, "memory_usage": 5620848, "flush_reason": "Manual Compaction"}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646178765, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3266205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28166, "largest_seqno": 30830, "table_properties": {"data_size": 3256394, "index_size": 5601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 25651, "raw_average_key_size": 20, "raw_value_size": 3233993, "raw_average_value_size": 2583, "num_data_blocks": 243, "num_entries": 1252, "num_filter_entries": 1252, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407459, "oldest_key_time": 1759407459, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 13932 microseconds, and 6620 cpu microseconds.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.178819) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3266205 bytes OK
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.178837) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179822) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179835) EVENT_LOG_v1 {"time_micros": 1759407646179831, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.179851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5527907, prev total WAL file size 5527907, number of live WAL files 2.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.181159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3189KB)], [57(10MB)]
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646181232, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 14072578, "oldest_snapshot_seqno": -1}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5409 keys, 8630159 bytes, temperature: kUnknown
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646288130, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 8630159, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8594211, "index_size": 21310, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138035, "raw_average_key_size": 25, "raw_value_size": 8497034, "raw_average_value_size": 1570, "num_data_blocks": 857, "num_entries": 5409, "num_filter_entries": 5409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407646, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.288516) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8630159 bytes
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.290096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.4 rd, 80.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(7.0) write-amplify(2.6) OK, records in: 6417, records dropped: 1008 output_compression: NoCompression
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.290114) EVENT_LOG_v1 {"time_micros": 1759407646290106, "job": 34, "event": "compaction_finished", "compaction_time_micros": 107125, "compaction_time_cpu_micros": 39191, "output_level": 6, "num_output_files": 1, "total_output_size": 8630159, "num_input_records": 6417, "num_output_records": 5409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646291109, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407646293168, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.181051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:46 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:20:46.293308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:47.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.454 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.456 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.755 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.756 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating image(s)
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.786 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.813 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.837 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.841 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.900 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.901 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.930 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:47 compute-1 nova_compute[230518]: 2025-10-02 12:20:47.934 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:48 compute-1 ceph-mon[80926]: pgmap v1260: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 236 op/s
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.607 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.678 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] resizing rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.950 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.950 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Ensure instance console log exists: /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.951 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.951 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.952 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.953 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.958 2 WARNING nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.965 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.966 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.973 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.974 2 DEBUG nova.virt.libvirt.host [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.975 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.976 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.977 2 DEBUG nova.virt.hardware [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:20:48 compute-1 nova_compute[230518]: 2025-10-02 12:20:48.978 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.042 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:49.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1483081823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.516 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.547 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.550 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:20:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4104050532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.988 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:49 compute-1 nova_compute[230518]: 2025-10-02 12:20:49.992 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <uuid>94bf2d68-bf2c-4720-8ede-688ca2b48ce6</uuid>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <name>instance-00000027</name>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAdmin275Test-server-2065786220</nova:name>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:20:48</nova:creationTime>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:user uuid="00254a66d4364bc0b5d187d008ba5a9a">tempest-ServersAdmin275Test-1864943547-project-member</nova:user>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <nova:project uuid="b1871b72e3494da299605236b73c241f">tempest-ServersAdmin275Test-1864943547</nova:project>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <system>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="serial">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="uuid">94bf2d68-bf2c-4720-8ede-688ca2b48ce6</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </system>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <os>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </os>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <features>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </features>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk">
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config">
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:20:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/console.log" append="off"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <video>
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </video>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:20:49 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:20:49 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:20:49 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:20:49 compute-1 nova_compute[230518]: </domain>
Oct 02 12:20:49 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.167 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.167 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.168 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Using config drive
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.195 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.227 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 e179: 3 total, 3 up, 3 in
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.298 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lazy-loading 'keypairs' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:50 compute-1 ceph-mon[80926]: pgmap v1261: 305 pgs: 305 active+clean; 308 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 114 op/s
Oct 02 12:20:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1483081823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4104050532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.562 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Creating config drive at /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.571 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9ft6f_m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.703 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw9ft6f_m" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.739 2 DEBUG nova.storage.rbd_utils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] rbd image 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.742 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.918 2 DEBUG oslo_concurrency.processutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config 94bf2d68-bf2c-4720-8ede-688ca2b48ce6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:20:50 compute-1 nova_compute[230518]: 2025-10-02 12:20:50.919 2 INFO nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting local config drive /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6/disk.config because it was imported into RBD.
Oct 02 12:20:50 compute-1 systemd-machined[188247]: New machine qemu-24-instance-00000027.
Oct 02 12:20:50 compute-1 systemd[1]: Started Virtual Machine qemu-24-instance-00000027.
Oct 02 12:20:51 compute-1 ceph-mon[80926]: osdmap e179: 3 total, 3 up, 3 in
Oct 02 12:20:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3095567087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:51.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:51.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.642 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.643 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407651.6414962, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.644 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Resumed (Lifecycle Event)
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.650 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.650 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.655 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance spawned successfully.
Oct 02 12:20:51 compute-1 nova_compute[230518]: 2025-10-02 12:20:51.656 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.133 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.139 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.139 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.140 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.140 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.141 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.141 2 DEBUG nova.virt.libvirt.driver [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.145 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.173 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.314 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407651.642726, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Started (Lifecycle Event)
Oct 02 12:20:52 compute-1 ceph-mon[80926]: pgmap v1263: 305 pgs: 305 active+clean; 241 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 9.5 MiB/s wr, 241 op/s
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.881 2 DEBUG nova.compute.manager [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.885 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.889 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:20:52 compute-1 nova_compute[230518]: 2025-10-02 12:20:52.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:20:53 compute-1 nova_compute[230518]: 2025-10-02 12:20:53.018 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:53 compute-1 nova_compute[230518]: 2025-10-02 12:20:53.018 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:53 compute-1 nova_compute[230518]: 2025-10-02 12:20:53.019 2 DEBUG nova.objects.instance [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:20:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:53.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:53.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:53 compute-1 nova_compute[230518]: 2025-10-02 12:20:53.810 2 DEBUG oslo_concurrency.lockutils [None req-4ad21e04-1b55-4d01-916f-a8c9be862c73 b16188801c114483a63c75aa705b21de c4cda22198d447dfb7abed394e2299e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:54 compute-1 ceph-mon[80926]: pgmap v1264: 305 pgs: 305 active+clean; 237 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.9 MiB/s wr, 252 op/s
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.411 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.412 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.413 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.414 2 INFO nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Terminating instance
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquired lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.415 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:20:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:55.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:55 compute-1 nova_compute[230518]: 2025-10-02 12:20:55.898 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:20:56 compute-1 nova_compute[230518]: 2025-10-02 12:20:56.266 2 DEBUG nova.network.neutron [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:20:56 compute-1 nova_compute[230518]: 2025-10-02 12:20:56.418 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Releasing lock "refresh_cache-94bf2d68-bf2c-4720-8ede-688ca2b48ce6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:20:56 compute-1 nova_compute[230518]: 2025-10-02 12:20:56.419 2 DEBUG nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:20:56 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Deactivated successfully.
Oct 02 12:20:56 compute-1 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000027.scope: Consumed 5.546s CPU time.
Oct 02 12:20:56 compute-1 systemd-machined[188247]: Machine qemu-24-instance-00000027 terminated.
Oct 02 12:20:56 compute-1 nova_compute[230518]: 2025-10-02 12:20:56.637 2 INFO nova.virt.libvirt.driver [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance destroyed successfully.
Oct 02 12:20:56 compute-1 nova_compute[230518]: 2025-10-02 12:20:56.638 2 DEBUG nova.objects.instance [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lazy-loading 'resources' on Instance uuid 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:20:56 compute-1 ceph-mon[80926]: pgmap v1265: 305 pgs: 305 active+clean; 244 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 276 op/s
Oct 02 12:20:57 compute-1 nova_compute[230518]: 2025-10-02 12:20:57.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:20:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:57.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:20:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:57 compute-1 nova_compute[230518]: 2025-10-02 12:20:57.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:20:58 compute-1 ceph-mon[80926]: pgmap v1266: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:20:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2057428937' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:20:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:20:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:20:59.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:20:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:20:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:20:59.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:20:59 compute-1 podman[249262]: 2025-10-02 12:20:59.809709462 +0000 UTC m=+0.065664219 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:20:59 compute-1 nova_compute[230518]: 2025-10-02 12:20:59.904 2 INFO nova.virt.libvirt.driver [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deleting instance files /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del
Oct 02 12:20:59 compute-1 nova_compute[230518]: 2025-10-02 12:20:59.905 2 INFO nova.virt.libvirt.driver [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deletion of /var/lib/nova/instances/94bf2d68-bf2c-4720-8ede-688ca2b48ce6_del complete
Oct 02 12:21:00 compute-1 ceph-mon[80926]: pgmap v1267: 305 pgs: 305 active+clean; 246 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 261 op/s
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.531 2 INFO nova.compute.manager [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 4.11 seconds to destroy the instance on the hypervisor.
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.532 2 DEBUG oslo.service.loopingcall [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.532 2 DEBUG nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.533 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.639 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:21:00 compute-1 podman[249281]: 2025-10-02 12:21:00.845190019 +0000 UTC m=+0.092360471 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 12:21:00 compute-1 nova_compute[230518]: 2025-10-02 12:21:00.962 2 DEBUG nova.network.neutron [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.023 2 INFO nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Took 0.49 seconds to deallocate network for instance.
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.161 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.162 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.220 2 DEBUG oslo_concurrency.processutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/287197055' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2306558928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.650 2 DEBUG oslo_concurrency.processutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.662 2 DEBUG nova.compute.provider_tree [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.697 2 DEBUG nova.scheduler.client.report [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:01 compute-1 nova_compute[230518]: 2025-10-02 12:21:01.742 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:02 compute-1 nova_compute[230518]: 2025-10-02 12:21:02.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:02 compute-1 nova_compute[230518]: 2025-10-02 12:21:02.328 2 INFO nova.scheduler.client.report [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Deleted allocations for instance 94bf2d68-bf2c-4720-8ede-688ca2b48ce6
Oct 02 12:21:02 compute-1 nova_compute[230518]: 2025-10-02 12:21:02.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:02 compute-1 nova_compute[230518]: 2025-10-02 12:21:02.625 2 DEBUG oslo_concurrency.lockutils [None req-4b046ca1-c7d2-4d8d-acad-ea9c6cef4a56 00254a66d4364bc0b5d187d008ba5a9a b1871b72e3494da299605236b73c241f - - default default] Lock "94bf2d68-bf2c-4720-8ede-688ca2b48ce6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:02 compute-1 ceph-mon[80926]: pgmap v1268: 305 pgs: 305 active+clean; 218 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 221 op/s
Oct 02 12:21:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1740980608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2306558928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:21:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:03.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:21:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:21:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:21:03 compute-1 ceph-mon[80926]: pgmap v1269: 305 pgs: 305 active+clean; 232 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.9 MiB/s wr, 187 op/s
Oct 02 12:21:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.323 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:21:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.324 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:21:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:04.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:21:04 compute-1 nova_compute[230518]: 2025-10-02 12:21:04.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e180 e180: 3 total, 3 up, 3 in
Oct 02 12:21:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:05.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:06 compute-1 ceph-mon[80926]: pgmap v1270: 305 pgs: 305 active+clean; 279 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.0 MiB/s wr, 200 op/s
Oct 02 12:21:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:21:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/251241660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:21:06 compute-1 ceph-mon[80926]: osdmap e180: 3 total, 3 up, 3 in
Oct 02 12:21:07 compute-1 nova_compute[230518]: 2025-10-02 12:21:07.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:07.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:07 compute-1 nova_compute[230518]: 2025-10-02 12:21:07.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:08 compute-1 ceph-mon[80926]: pgmap v1272: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:08 compute-1 podman[249330]: 2025-10-02 12:21:08.798068444 +0000 UTC m=+0.049284384 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:08 compute-1 podman[249329]: 2025-10-02 12:21:08.827450329 +0000 UTC m=+0.081945023 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:21:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:09.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:10 compute-1 ceph-mon[80926]: pgmap v1273: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 257 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Oct 02 12:21:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e181 e181: 3 total, 3 up, 3 in
Oct 02 12:21:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:11.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:11 compute-1 nova_compute[230518]: 2025-10-02 12:21:11.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407656.635068, 94bf2d68-bf2c-4720-8ede-688ca2b48ce6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:11 compute-1 nova_compute[230518]: 2025-10-02 12:21:11.636 2 INFO nova.compute.manager [-] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] VM Stopped (Lifecycle Event)
Oct 02 12:21:12 compute-1 nova_compute[230518]: 2025-10-02 12:21:12.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:12 compute-1 ceph-mon[80926]: pgmap v1274: 305 pgs: 305 active+clean; 241 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.1 MiB/s wr, 264 op/s
Oct 02 12:21:12 compute-1 ceph-mon[80926]: osdmap e181: 3 total, 3 up, 3 in
Oct 02 12:21:12 compute-1 nova_compute[230518]: 2025-10-02 12:21:12.280 2 DEBUG nova.compute.manager [None req-ebc347f8-77a5-4453-9eed-a072155c3c5c - - - - - -] [instance: 94bf2d68-bf2c-4720-8ede-688ca2b48ce6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:12 compute-1 nova_compute[230518]: 2025-10-02 12:21:12.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e182 e182: 3 total, 3 up, 3 in
Oct 02 12:21:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:13.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:13.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e183 e183: 3 total, 3 up, 3 in
Oct 02 12:21:14 compute-1 ceph-mon[80926]: pgmap v1276: 305 pgs: 305 active+clean; 246 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 208 op/s
Oct 02 12:21:14 compute-1 ceph-mon[80926]: osdmap e182: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e184 e184: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:15.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:15 compute-1 ceph-mon[80926]: osdmap e183: 3 total, 3 up, 3 in
Oct 02 12:21:15 compute-1 ceph-mon[80926]: osdmap e184: 3 total, 3 up, 3 in
Oct 02 12:21:16 compute-1 ceph-mon[80926]: pgmap v1279: 305 pgs: 305 active+clean; 275 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 5.9 MiB/s wr, 212 op/s
Oct 02 12:21:17 compute-1 nova_compute[230518]: 2025-10-02 12:21:17.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:17.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:21:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:17.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:21:17 compute-1 nova_compute[230518]: 2025-10-02 12:21:17.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:17 compute-1 ceph-mon[80926]: pgmap v1281: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 8.1 MiB/s wr, 210 op/s
Oct 02 12:21:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:19.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:20 compute-1 ceph-mon[80926]: pgmap v1282: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 173 op/s
Oct 02 12:21:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e185 e185: 3 total, 3 up, 3 in
Oct 02 12:21:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:21.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:21 compute-1 ovn_controller[129257]: 2025-10-02T12:21:21Z|00188|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 12:21:22 compute-1 ceph-mon[80926]: osdmap e185: 3 total, 3 up, 3 in
Oct 02 12:21:22 compute-1 nova_compute[230518]: 2025-10-02 12:21:22.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:22 compute-1 nova_compute[230518]: 2025-10-02 12:21:22.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:23 compute-1 ceph-mon[80926]: pgmap v1284: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.0 MiB/s wr, 149 op/s
Oct 02 12:21:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:24 compute-1 sudo[249372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:24 compute-1 sudo[249372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-1 sudo[249372]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-1 sudo[249397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:21:24 compute-1 sudo[249397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-1 sudo[249397]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-1 sudo[249422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:24 compute-1 sudo[249422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-1 sudo[249422]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:24 compute-1 sudo[249447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:21:24 compute-1 sudo[249447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:24 compute-1 sudo[249447]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:25 compute-1 ceph-mon[80926]: pgmap v1285: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 132 op/s
Oct 02 12:21:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:25.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.919 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:21:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:26 compute-1 ceph-mon[80926]: pgmap v1286: 305 pgs: 305 active+clean; 325 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.4 MiB/s wr, 114 op/s
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:21:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:21:27 compute-1 nova_compute[230518]: 2025-10-02 12:21:27.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e186 e186: 3 total, 3 up, 3 in
Oct 02 12:21:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:27 compute-1 nova_compute[230518]: 2025-10-02 12:21:27.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e187 e187: 3 total, 3 up, 3 in
Oct 02 12:21:28 compute-1 ceph-mon[80926]: pgmap v1287: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 397 KiB/s rd, 38 KiB/s wr, 53 op/s
Oct 02 12:21:28 compute-1 ceph-mon[80926]: osdmap e186: 3 total, 3 up, 3 in
Oct 02 12:21:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:29.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:30 compute-1 ceph-mon[80926]: pgmap v1289: 305 pgs: 305 active+clean; 327 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 32 KiB/s wr, 28 op/s
Oct 02 12:21:30 compute-1 ceph-mon[80926]: osdmap e187: 3 total, 3 up, 3 in
Oct 02 12:21:30 compute-1 podman[249503]: 2025-10-02 12:21:30.799420119 +0000 UTC m=+0.055756918 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 02 12:21:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:21:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:21:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:31.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:31 compute-1 podman[249520]: 2025-10-02 12:21:31.830238907 +0000 UTC m=+0.074070145 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:32 compute-1 ceph-mon[80926]: pgmap v1291: 305 pgs: 305 active+clean; 315 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 33 KiB/s wr, 20 op/s
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.421 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.422 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.522 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.710 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.711 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.721 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:21:32 compute-1 nova_compute[230518]: 2025-10-02 12:21:32.721 2 INFO nova.compute.claims [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:21:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:33.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:33 compute-1 nova_compute[230518]: 2025-10-02 12:21:33.552 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1810387230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.015 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.021 2 DEBUG nova.compute.provider_tree [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.096 2 DEBUG nova.scheduler.client.report [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.178 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.179 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.337 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.338 2 DEBUG nova.network.neutron [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.424 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.444 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:21:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e188 e188: 3 total, 3 up, 3 in
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.609 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.611 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.612 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating image(s)
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.647 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.684 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.711 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.714 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:34 compute-1 ceph-mon[80926]: pgmap v1292: 305 pgs: 305 active+clean; 312 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 90 op/s
Oct 02 12:21:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1810387230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.780 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.782 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.783 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.784 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.815 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:34 compute-1 nova_compute[230518]: 2025-10-02 12:21:34.818 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.219 2 DEBUG nova.network.neutron [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.219 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.316 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.386 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] resizing rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:21:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:35.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.496 2 DEBUG nova.objects.instance [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'migration_context' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.522 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Ensure instance console log exists: /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.523 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.524 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.525 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.529 2 WARNING nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.533 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.534 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.538 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.538 2 DEBUG nova.virt.libvirt.host [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.540 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.540 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.541 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.542 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.543 2 DEBUG nova.virt.hardware [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.547 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 e189: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-1 ceph-mon[80926]: osdmap e188: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-1 ceph-mon[80926]: osdmap e189: 3 total, 3 up, 3 in
Oct 02 12:21:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2673187842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:35 compute-1 nova_compute[230518]: 2025-10-02 12:21:35.991 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.017 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.021 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:21:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1982999056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.462 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.465 2 DEBUG nova.objects.instance [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.670 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <uuid>3cc914dc-40b0-4808-aff8-bd8e0c6789b1</uuid>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <name>instance-0000002d</name>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-463301587</nova:name>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:21:35</nova:creationTime>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:user uuid="6ce6b90597304cd29e06b1f1e62246eb">tempest-ServersAdminNegativeTestJSON-1444821380-project-member</nova:user>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <nova:project uuid="1ff6686454554253817cdb343c2f7e5e">tempest-ServersAdminNegativeTestJSON-1444821380</nova:project>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <system>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="serial">3cc914dc-40b0-4808-aff8-bd8e0c6789b1</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="uuid">3cc914dc-40b0-4808-aff8-bd8e0c6789b1</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </system>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <os>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </os>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <features>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </features>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk">
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </source>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config">
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </source>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:21:36 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/console.log" append="off"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <video>
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </video>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:21:36 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:21:36 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:21:36 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:21:36 compute-1 nova_compute[230518]: </domain>
Oct 02 12:21:36 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.815 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Using config drive
Oct 02 12:21:36 compute-1 nova_compute[230518]: 2025-10-02 12:21:36.837 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:36 compute-1 ceph-mon[80926]: pgmap v1294: 305 pgs: 305 active+clean; 296 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct 02 12:21:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2673187842' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1982999056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.456 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Creating config drive at /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.468 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_2abr9u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:37.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.606 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_2abr9u" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.633 2 DEBUG nova.storage.rbd_utils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] rbd image 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.636 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.860 2 DEBUG oslo_concurrency.processutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config 3cc914dc-40b0-4808-aff8-bd8e0c6789b1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:37 compute-1 nova_compute[230518]: 2025-10-02 12:21:37.861 2 INFO nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deleting local config drive /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1/disk.config because it was imported into RBD.
Oct 02 12:21:37 compute-1 systemd-machined[188247]: New machine qemu-25-instance-0000002d.
Oct 02 12:21:37 compute-1 systemd[1]: Started Virtual Machine qemu-25-instance-0000002d.
Oct 02 12:21:38 compute-1 ceph-mon[80926]: pgmap v1296: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 165 op/s
Oct 02 12:21:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.715 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.716 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.716 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407698.714336, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.717 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Resumed (Lifecycle Event)
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.722 2 INFO nova.virt.libvirt.driver [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance spawned successfully.
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.723 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.807 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.811 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.814 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.815 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.816 2 DEBUG nova.virt.libvirt.driver [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.852 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.852 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407698.7155795, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.852 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Started (Lifecycle Event)
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.962 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:38 compute-1 nova_compute[230518]: 2025-10-02 12:21:38.964 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.025 2 INFO nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 4.41 seconds to spawn the instance on the hypervisor.
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.025 2 DEBUG nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.034 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.168 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.170 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.247 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.247 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.248 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.298 2 INFO nova.compute.manager [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 6.62 seconds to build instance.
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.429 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.430 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.430 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.431 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.431 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:39.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/652067774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.720 2 DEBUG oslo_concurrency.lockutils [None req-d22a844b-a696-4f97-b977-2da898b4f069 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:39 compute-1 podman[249933]: 2025-10-02 12:21:39.833190913 +0000 UTC m=+0.068475179 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:21:39 compute-1 podman[249934]: 2025-10-02 12:21:39.837263171 +0000 UTC m=+0.072104523 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:21:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/474486150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:39 compute-1 nova_compute[230518]: 2025-10-02 12:21:39.964 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.179 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.179 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.317 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4658MB free_disk=20.902359008789062GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.519 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.520 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.520 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:21:40 compute-1 nova_compute[230518]: 2025-10-02 12:21:40.563 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:40 compute-1 sudo[249976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:21:40 compute-1 sudo[249976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:40 compute-1 sudo[249976]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:40 compute-1 ceph-mon[80926]: pgmap v1297: 305 pgs: 305 active+clean; 322 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.7 MiB/s wr, 155 op/s
Oct 02 12:21:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/474486150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:21:40 compute-1 sudo[250002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:21:40 compute-1 sudo[250002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:21:40 compute-1 sudo[250002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:21:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3520970729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:41 compute-1 nova_compute[230518]: 2025-10-02 12:21:41.029 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:41 compute-1 nova_compute[230518]: 2025-10-02 12:21:41.034 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3520970729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:42 compute-1 nova_compute[230518]: 2025-10-02 12:21:42.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:42 compute-1 nova_compute[230518]: 2025-10-02 12:21:42.549 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:42 compute-1 nova_compute[230518]: 2025-10-02 12:21:42.633 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:21:42 compute-1 nova_compute[230518]: 2025-10-02 12:21:42.633 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:42 compute-1 nova_compute[230518]: 2025-10-02 12:21:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:42 compute-1 ceph-mon[80926]: pgmap v1298: 305 pgs: 305 active+clean; 296 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.6 MiB/s wr, 205 op/s
Oct 02 12:21:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:43 compute-1 nova_compute[230518]: 2025-10-02 12:21:43.438 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:43 compute-1 nova_compute[230518]: 2025-10-02 12:21:43.439 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:43 compute-1 nova_compute[230518]: 2025-10-02 12:21:43.439 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:21:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:43.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/197691806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/106516087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:44 compute-1 nova_compute[230518]: 2025-10-02 12:21:44.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:45 compute-1 ceph-mon[80926]: pgmap v1299: 305 pgs: 305 active+clean; 275 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 235 op/s
Oct 02 12:21:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/830024931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3805478565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.230 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.231 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:45 compute-1 nova_compute[230518]: 2025-10-02 12:21:45.721 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:21:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 e190: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-1 nova_compute[230518]: 2025-10-02 12:21:46.431 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:46 compute-1 ceph-mon[80926]: pgmap v1300: 305 pgs: 305 active+clean; 228 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Oct 02 12:21:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/497092318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/830072830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:46 compute-1 ceph-mon[80926]: osdmap e190: 3 total, 3 up, 3 in
Oct 02 12:21:46 compute-1 nova_compute[230518]: 2025-10-02 12:21:46.452 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:46 compute-1 nova_compute[230518]: 2025-10-02 12:21:46.453 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:21:46 compute-1 nova_compute[230518]: 2025-10-02 12:21:46.454 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.214 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.215 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.215 2 INFO nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Unshelving
Oct 02 12:21:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:47.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.504 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.505 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.527 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_requests' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2637826586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.679 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'numa_topology' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.796 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:21:47 compute-1 nova_compute[230518]: 2025-10-02 12:21:47.797 2 INFO nova.compute.claims [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:21:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.209 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:21:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:21:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/187165604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.618 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.626 2 DEBUG nova.compute.provider_tree [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.645 2 DEBUG nova.scheduler.client.report [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.719 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.977 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.978 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:21:48 compute-1 nova_compute[230518]: 2025-10-02 12:21:48.979 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:21:49 compute-1 ceph-mon[80926]: pgmap v1302: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/187165604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.282 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:21:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.649 2 DEBUG nova.network.neutron [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.886 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.888 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.889 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating image(s)
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.932 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:49 compute-1 nova_compute[230518]: 2025-10-02 12:21:49.937 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:50 compute-1 ceph-mon[80926]: pgmap v1303: 305 pgs: 305 active+clean; 241 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Oct 02 12:21:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:51.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.535 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.566 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.569 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.570 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.816 2 DEBUG nova.virt.libvirt.imagebackend [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.868 2 DEBUG nova.virt.libvirt.imagebackend [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/45a729a3-bfb9-4ba4-a275-e4201ada93ed/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:21:52 compute-1 nova_compute[230518]: 2025-10-02 12:21:52.869 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] cloning images/45a729a3-bfb9-4ba4-a275-e4201ada93ed@snap to None/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:21:53 compute-1 ceph-mon[80926]: pgmap v1304: 305 pgs: 305 active+clean; 273 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.9 MiB/s wr, 134 op/s
Oct 02 12:21:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:53 compute-1 nova_compute[230518]: 2025-10-02 12:21:53.345 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "4c7d010a3d79a57ee44b8c9ce07268c80a0ae9f5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:21:53 compute-1 nova_compute[230518]: 2025-10-02 12:21:53.485 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:21:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:53 compute-1 nova_compute[230518]: 2025-10-02 12:21:53.559 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] flattening vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:21:54 compute-1 ceph-mon[80926]: pgmap v1305: 305 pgs: 305 active+clean; 307 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 5.0 MiB/s wr, 120 op/s
Oct 02 12:21:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3883937379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3889626090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2041755104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3924843442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:21:57 compute-1 nova_compute[230518]: 2025-10-02 12:21:57.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:57.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:57 compute-1 ceph-mon[80926]: pgmap v1306: 305 pgs: 305 active+clean; 333 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.8 MiB/s wr, 166 op/s
Oct 02 12:21:57 compute-1 nova_compute[230518]: 2025-10-02 12:21:57.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:21:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:21:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:21:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:21:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:21:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:21:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:21:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:21:59.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:21:59 compute-1 ceph-mon[80926]: pgmap v1307: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.0 MiB/s wr, 176 op/s
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.171 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Image rbd:vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.172 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.173 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Ensure instance console log exists: /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.173 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.174 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.175 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.177 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:21:10Z,direct_url=<?>,disk_format='raw',id=45a729a3-bfb9-4ba4-a275-e4201ada93ed,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:21:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.183 2 WARNING nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.189 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.190 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.193 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.194 2 DEBUG nova.virt.libvirt.host [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.196 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.196 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:21:10Z,direct_url=<?>,disk_format='raw',id=45a729a3-bfb9-4ba4-a275-e4201ada93ed,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1498353913-shelved',owner='aaf2805394aa4c4cb7977f6433aabf56',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:21:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.197 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.197 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.198 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.198 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.199 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.199 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.200 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.201 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.201 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.202 2 DEBUG nova.virt.hardware [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.202 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.224 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4012375347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.670 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.702 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:00 compute-1 nova_compute[230518]: 2025-10-02 12:22:00.706 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:01 compute-1 ceph-mon[80926]: pgmap v1308: 305 pgs: 305 active+clean; 364 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:22:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4012375347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3786116264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:01 compute-1 podman[250344]: 2025-10-02 12:22:01.369185026 +0000 UTC m=+0.069651026 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.439 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.441 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'pci_devices' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.462 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <uuid>7beacac0-65ce-4e15-a73c-9b50a50f968e</uuid>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <name>instance-0000002b</name>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1498353913</nova:name>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:22:00</nova:creationTime>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:user uuid="93167a5206ba42b28aa96a676d3edb6d">tempest-UnshelveToHostMultiNodesTest-2076784560-project-member</nova:user>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <nova:project uuid="aaf2805394aa4c4cb7977f6433aabf56">tempest-UnshelveToHostMultiNodesTest-2076784560</nova:project>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="45a729a3-bfb9-4ba4-a275-e4201ada93ed"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <system>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="serial">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="uuid">7beacac0-65ce-4e15-a73c-9b50a50f968e</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </system>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <os>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </os>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <features>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </features>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk">
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config">
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:01 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/console.log" append="off"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <video>
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </video>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:22:01 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:22:01 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:22:01 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:22:01 compute-1 nova_compute[230518]: </domain>
Oct 02 12:22:01 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:01.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.564 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.565 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.565 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Using config drive
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.595 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.647 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:01 compute-1 nova_compute[230518]: 2025-10-02 12:22:01.726 2 DEBUG nova.objects.instance [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lazy-loading 'keypairs' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.047 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Creating config drive at /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.053 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3c4kxln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:02 compute-1 ceph-mon[80926]: pgmap v1309: 305 pgs: 305 active+clean; 383 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 174 op/s
Oct 02 12:22:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3786116264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.187 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3c4kxln" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.214 2 DEBUG nova.storage.rbd_utils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] rbd image 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.218 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.602 2 DEBUG oslo_concurrency.processutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config 7beacac0-65ce-4e15-a73c-9b50a50f968e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.603 2 INFO nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting local config drive /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e/disk.config because it was imported into RBD.
Oct 02 12:22:02 compute-1 systemd-machined[188247]: New machine qemu-26-instance-0000002b.
Oct 02 12:22:02 compute-1 systemd[1]: Started Virtual Machine qemu-26-instance-0000002b.
Oct 02 12:22:02 compute-1 nova_compute[230518]: 2025-10-02 12:22:02.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:02 compute-1 podman[250430]: 2025-10-02 12:22:02.783692303 +0000 UTC m=+0.110930146 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:22:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:03.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407723.6224747, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.624 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Resumed (Lifecycle Event)
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.627 2 DEBUG nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.627 2 DEBUG nova.virt.libvirt.driver [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.630 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance spawned successfully.
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.701 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.704 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.727 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.728 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407723.623446, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.728 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Started (Lifecycle Event)
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.791 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.795 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:03 compute-1 nova_compute[230518]: 2025-10-02 12:22:03.822 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:04 compute-1 ceph-mon[80926]: pgmap v1310: 305 pgs: 305 active+clean; 417 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 7.2 MiB/s wr, 229 op/s
Oct 02 12:22:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e191 e191: 3 total, 3 up, 3 in
Oct 02 12:22:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:06 compute-1 ceph-mon[80926]: osdmap e191: 3 total, 3 up, 3 in
Oct 02 12:22:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:22:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2952618562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:22:06 compute-1 nova_compute[230518]: 2025-10-02 12:22:06.706 2 DEBUG nova.compute.manager [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:06 compute-1 nova_compute[230518]: 2025-10-02 12:22:06.925 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:06 compute-1 nova_compute[230518]: 2025-10-02 12:22:06.925 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:06 compute-1 nova_compute[230518]: 2025-10-02 12:22:06.937 2 DEBUG oslo_concurrency.lockutils [None req-3084fdcf-12e1-4db8-b912-4fd8bd0a7453 1a8336b787b647e1b78cb06dcd9279b4 0ea4f964a9824c73ada798e6c162b9ea - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 19.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.321 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.498 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.499 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.507 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.507 2 INFO nova.compute.claims [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:22:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:07.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:07 compute-1 ceph-mon[80926]: pgmap v1312: 305 pgs: 305 active+clean; 422 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.7 MiB/s wr, 243 op/s
Oct 02 12:22:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.708 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:07 compute-1 nova_compute[230518]: 2025-10-02 12:22:07.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e192 e192: 3 total, 3 up, 3 in
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.044 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.045 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.045 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.046 2 INFO nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Terminating instance
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquired lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.047 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3645296222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.262 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.270 2 DEBUG nova.compute.provider_tree [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.287 2 DEBUG nova.scheduler.client.report [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.310 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.311 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.372 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.372 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.437 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.578 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.636 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume 0de00a00-9b68-498b-8bd0-88556bd22393 at /dev/vda
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.774 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.775 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.792 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.793 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9cfa27-6e38-4d79-bb05-9d5a1f7dac68]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.795 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.802 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.803 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3abb8055-d07b-49d8-b7cd-b2b43925f7e1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.805 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.812 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.813 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[79752506-19e1-486a-988a-7e9fad5f365e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.814 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7b50cb7e-9304-4e2f-8233-1209f83edb0d]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.815 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.847 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.850 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.851 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (76ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.852 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: a17f6c4e-3b3e-4e2a-bc65-6e16cb06b24f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:22:08 compute-1 nova_compute[230518]: 2025-10-02 12:22:08.994 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:09 compute-1 ceph-mon[80926]: pgmap v1313: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Oct 02 12:22:09 compute-1 ceph-mon[80926]: osdmap e192: 3 total, 3 up, 3 in
Oct 02 12:22:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3645296222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.448 2 DEBUG nova.network.neutron [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.476 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Releasing lock "refresh_cache-7beacac0-65ce-4e15-a73c-9b50a50f968e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.477 2 DEBUG nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.511 2 DEBUG nova.policy [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b978e493dbdc419e864471708c90b0b4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:22:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:09.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:09 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct 02 12:22:09 compute-1 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000002b.scope: Consumed 7.088s CPU time.
Oct 02 12:22:09 compute-1 systemd-machined[188247]: Machine qemu-26-instance-0000002b terminated.
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.706 2 INFO nova.virt.libvirt.driver [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance destroyed successfully.
Oct 02 12:22:09 compute-1 nova_compute[230518]: 2025-10-02 12:22:09.706 2 DEBUG nova.objects.instance [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lazy-loading 'resources' on Instance uuid 7beacac0-65ce-4e15-a73c-9b50a50f968e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.093 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume b93e6b37-6df0-4c49-81d5-526e5c68b542 at /dev/vdb
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.247 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.249 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.259 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.259 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[18f57f6f-9d5a-41a7-a838-fa7c85e970d2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.261 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.270 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.271 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e96118-ab7d-4fe5-a76c-e2a60f6a85a0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.272 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.280 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.280 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[27dcb7bb-48ef-4a79-bf76-d30c6a612b08]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.281 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a32cb5-38c4-4477-b60b-cded3170f55b]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.282 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.312 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.314 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.315 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: eac04b36-4077-49c3-86b1-942e2b6eeb26 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:22:10 compute-1 ceph-mon[80926]: pgmap v1315: 305 pgs: 305 active+clean; 354 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 2.4 MiB/s wr, 331 op/s
Oct 02 12:22:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/503698474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:10 compute-1 nova_compute[230518]: 2025-10-02 12:22:10.774 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: 6a13b8d9-269d-4176-b4c7-693a5e26e74b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:10 compute-1 podman[250565]: 2025-10-02 12:22:10.815189058 +0000 UTC m=+0.065026309 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 12:22:10 compute-1 podman[250566]: 2025-10-02 12:22:10.821107345 +0000 UTC m=+0.070879575 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:22:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3078669458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.284 2 INFO nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Booting with volume f3584061-a34e-4c55-a201-ea9e5f60b3e5 at /dev/vdc
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.402 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.403 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.406 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: b84676b0-d376-4ced-99fb-08e677046d6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.415 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.416 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f8789545-b0f4-4230-9837-add9bbe2caa3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.417 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.424 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.424 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[53a222f6-c599-489f-93f3-46f3286d61ac]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.426 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.439 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.439 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0d2f4b23-ee17-4689-ad40-a98e694fa618]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.441 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[a64a4f47-03ef-46ed-ba20-5978245e0a1d]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.441 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.467 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.471 2 DEBUG os_brick.utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:22:11 compute-1 nova_compute[230518]: 2025-10-02 12:22:11.472 2 DEBUG nova.virt.block_device [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating existing volume attachment record: 54598788-e706-49d2-9e91-e968eea915b1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:22:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:11.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:11.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e193 e193: 3 total, 3 up, 3 in
Oct 02 12:22:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3078669458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.374 2 INFO nova.virt.libvirt.driver [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deleting instance files /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.375 2 INFO nova.virt.libvirt.driver [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deletion of /var/lib/nova/instances/7beacac0-65ce-4e15-a73c-9b50a50f968e_del complete
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.651 2 INFO nova.compute.manager [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 3.17 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.652 2 DEBUG oslo.service.loopingcall [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.653 2 DEBUG nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.653 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:12 compute-1 nova_compute[230518]: 2025-10-02 12:22:12.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:13 compute-1 ceph-mon[80926]: pgmap v1316: 305 pgs: 305 active+clean; 356 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 827 KiB/s wr, 260 op/s
Oct 02 12:22:13 compute-1 ceph-mon[80926]: osdmap e193: 3 total, 3 up, 3 in
Oct 02 12:22:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2804927787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e194 e194: 3 total, 3 up, 3 in
Oct 02 12:22:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:13.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:13 compute-1 nova_compute[230518]: 2025-10-02 12:22:13.736 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: c9731d13-4315-4bdc-9d24-a91ce1d8d427 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:13 compute-1 nova_compute[230518]: 2025-10-02 12:22:13.992 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.101 2 DEBUG nova.network.neutron [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.204 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Took 1.55 seconds to deallocate network for instance.
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.329 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.330 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.351 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.352 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.352 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating image(s)
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Ensure instance console log exists: /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.353 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.354 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:14 compute-1 ceph-mon[80926]: pgmap v1318: 305 pgs: 305 active+clean; 360 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.7 MiB/s wr, 211 op/s
Oct 02 12:22:14 compute-1 ceph-mon[80926]: osdmap e194: 3 total, 3 up, 3 in
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.413 2 DEBUG oslo_concurrency.processutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.647 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: 2ef879b2-3519-40b6-8207-d24b0e1a39de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369805820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.850 2 DEBUG oslo_concurrency.processutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.859 2 DEBUG nova.compute.provider_tree [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.885 2 DEBUG nova.scheduler.client.report [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.924 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:14 compute-1 nova_compute[230518]: 2025-10-02 12:22:14.961 2 INFO nova.scheduler.client.report [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Deleted allocations for instance 7beacac0-65ce-4e15-a73c-9b50a50f968e
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.088 2 DEBUG oslo_concurrency.lockutils [None req-bcce02bc-0c90-4e53-9267-ca63024f699f 93167a5206ba42b28aa96a676d3edb6d aaf2805394aa4c4cb7977f6433aabf56 - - default default] Lock "7beacac0-65ce-4e15-a73c-9b50a50f968e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.299 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully created port: af15c204-50a0-4b32-a3a7-46c9b925ec87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.390 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.391 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.392 2 INFO nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Terminating instance
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquired lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.393 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:15.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:15 compute-1 nova_compute[230518]: 2025-10-02 12:22:15.602 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1369805820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/388008822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e195 e195: 3 total, 3 up, 3 in
Oct 02 12:22:16 compute-1 nova_compute[230518]: 2025-10-02 12:22:16.619 2 DEBUG nova.network.neutron [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:16 compute-1 nova_compute[230518]: 2025-10-02 12:22:16.639 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Releasing lock "refresh_cache-3cc914dc-40b0-4808-aff8-bd8e0c6789b1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:16 compute-1 nova_compute[230518]: 2025-10-02 12:22:16.641 2 DEBUG nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:22:16 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct 02 12:22:16 compute-1 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002d.scope: Consumed 13.814s CPU time.
Oct 02 12:22:16 compute-1 systemd-machined[188247]: Machine qemu-25-instance-0000002d terminated.
Oct 02 12:22:16 compute-1 nova_compute[230518]: 2025-10-02 12:22:16.864 2 INFO nova.virt.libvirt.driver [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance destroyed successfully.
Oct 02 12:22:16 compute-1 nova_compute[230518]: 2025-10-02 12:22:16.865 2 DEBUG nova.objects.instance [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lazy-loading 'resources' on Instance uuid 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.060 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 6a13b8d9-269d-4176-b4c7-693a5e26e74b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:17 compute-1 ceph-mon[80926]: pgmap v1320: 305 pgs: 305 active+clean; 317 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 175 op/s
Oct 02 12:22:17 compute-1 ceph-mon[80926]: osdmap e195: 3 total, 3 up, 3 in
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.161 2 DEBUG nova.compute.manager [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG nova.compute.manager [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.162 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.163 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:17.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:17 compute-1 nova_compute[230518]: 2025-10-02 12:22:17.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.030 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.333 2 INFO nova.virt.libvirt.driver [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deleting instance files /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_del
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.334 2 INFO nova.virt.libvirt.driver [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deletion of /var/lib/nova/instances/3cc914dc-40b0-4808-aff8-bd8e0c6789b1_del complete
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.395 2 INFO nova.compute.manager [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 1.75 seconds to destroy the instance on the hypervisor.
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG oslo.service.loopingcall [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.396 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.548 2 DEBUG nova.network.neutron [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.567 2 DEBUG oslo_concurrency.lockutils [req-1d8accdd-cf0c-4e34-bdab-7fd49cf1a071 req-915da2e8-0c06-490c-a59f-750f2637a2bc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.610 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:18.614 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:18.616 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.646 2 DEBUG nova.network.neutron [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.662 2 INFO nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Took 0.27 seconds to deallocate network for instance.
Oct 02 12:22:18 compute-1 ceph-mon[80926]: pgmap v1322: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.712 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.713 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.785 2 DEBUG oslo_concurrency.processutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e196 e196: 3 total, 3 up, 3 in
Oct 02 12:22:18 compute-1 nova_compute[230518]: 2025-10-02 12:22:18.963 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 25721468-4447-4fb7-97f7-e805e64f0267 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.251 2 DEBUG nova.compute.manager [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.252 2 DEBUG nova.compute.manager [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-25721468-4447-4fb7-97f7-e805e64f0267. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.252 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.253 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.253 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 25721468-4447-4fb7-97f7-e805e64f0267 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1670408849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.444 2 DEBUG oslo_concurrency.processutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.450 2 DEBUG nova.compute.provider_tree [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.457 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.466 2 DEBUG nova.scheduler.client.report [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.489 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.523 2 INFO nova.scheduler.client.report [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Deleted allocations for instance 3cc914dc-40b0-4808-aff8-bd8e0c6789b1
Oct 02 12:22:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:19.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.594 2 DEBUG oslo_concurrency.lockutils [None req-5b19d457-0e4a-448e-84a0-762c1c20cccc 6ce6b90597304cd29e06b1f1e62246eb 1ff6686454554253817cdb343c2f7e5e - - default default] Lock "3cc914dc-40b0-4808-aff8-bd8e0c6789b1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.850 2 DEBUG nova.network.neutron [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:19 compute-1 nova_compute[230518]: 2025-10-02 12:22:19.867 2 DEBUG oslo_concurrency.lockutils [req-e8c79863-45e7-4e4b-81e6-0c9b12fdf57c req-792f9b16-e901-41b7-8771-25ba56ad6434 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:20 compute-1 ceph-mon[80926]: osdmap e196: 3 total, 3 up, 3 in
Oct 02 12:22:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1670408849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:20 compute-1 nova_compute[230518]: 2025-10-02 12:22:20.669 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e197 e197: 3 total, 3 up, 3 in
Oct 02 12:22:21 compute-1 ceph-mon[80926]: pgmap v1324: 305 pgs: 305 active+clean; 260 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 118 op/s
Oct 02 12:22:21 compute-1 ceph-mon[80926]: osdmap e197: 3 total, 3 up, 3 in
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.497 2 DEBUG nova.compute.manager [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.497 2 DEBUG nova.compute.manager [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.498 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.498 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.499 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:21.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:21.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.721 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:21 compute-1 nova_compute[230518]: 2025-10-02 12:22:21.836 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: b84676b0-d376-4ced-99fb-08e677046d6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:22 compute-1 nova_compute[230518]: 2025-10-02 12:22:22.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-1 ceph-mon[80926]: pgmap v1325: 305 pgs: 305 active+clean; 231 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 5.7 KiB/s wr, 120 op/s
Oct 02 12:22:22 compute-1 nova_compute[230518]: 2025-10-02 12:22:22.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:22 compute-1 nova_compute[230518]: 2025-10-02 12:22:22.870 2 DEBUG nova.network.neutron [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:22 compute-1 nova_compute[230518]: 2025-10-02 12:22:22.892 2 DEBUG oslo_concurrency.lockutils [req-1bdebf66-b10d-4016-93e9-0bc36edfd0ec req-675cd533-75b1-4396-a398-51a514286f4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.076 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: c9731d13-4315-4bdc-9d24-a91ce1d8d427 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:23.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.679 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-b84676b0-d376-4ced-99fb-08e677046d6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:23 compute-1 nova_compute[230518]: 2025-10-02 12:22:23.680 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port b84676b0-d376-4ced-99fb-08e677046d6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.039 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:24 compute-1 ceph-mon[80926]: pgmap v1327: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 5.9 KiB/s wr, 120 op/s
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.704 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407729.6994617, 7beacac0-65ce-4e15-a73c-9b50a50f968e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.704 2 INFO nova.compute.manager [-] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] VM Stopped (Lifecycle Event)
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.745 2 DEBUG nova.compute.manager [None req-0a3308f2-0ef4-4810-aef3-c57faf98c7d6 - - - - - -] [instance: 7beacac0-65ce-4e15-a73c-9b50a50f968e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.903 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.945 2 DEBUG nova.compute.manager [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-c9731d13-4315-4bdc-9d24-a91ce1d8d427. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:24 compute-1 nova_compute[230518]: 2025-10-02 12:22:24.946 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port c9731d13-4315-4bdc-9d24-a91ce1d8d427 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:25 compute-1 nova_compute[230518]: 2025-10-02 12:22:25.283 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:25 compute-1 nova_compute[230518]: 2025-10-02 12:22:25.354 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: 2ef879b2-3519-40b6-8207-d24b0e1a39de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:25.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:25.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:25 compute-1 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG nova.compute.manager [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:25 compute-1 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG nova.compute.manager [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-2ef879b2-3519-40b6-8207-d24b0e1a39de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:25 compute-1 nova_compute[230518]: 2025-10-02 12:22:25.803 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 e198: 3 total, 3 up, 3 in
Oct 02 12:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.920 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:26 compute-1 nova_compute[230518]: 2025-10-02 12:22:26.213 2 DEBUG nova.network.neutron [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:26 compute-1 nova_compute[230518]: 2025-10-02 12:22:26.242 2 DEBUG oslo_concurrency.lockutils [req-b8acc691-0f7d-4983-bd07-b0058aa91f20 req-84c98235-f0b9-4461-b2b8-c4934e806efe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:26 compute-1 nova_compute[230518]: 2025-10-02 12:22:26.243 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:26 compute-1 nova_compute[230518]: 2025-10-02 12:22:26.243 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 2ef879b2-3519-40b6-8207-d24b0e1a39de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:26 compute-1 nova_compute[230518]: 2025-10-02 12:22:26.466 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:26 compute-1 ceph-mon[80926]: pgmap v1328: 305 pgs: 305 active+clean; 145 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 3.5 KiB/s wr, 86 op/s
Oct 02 12:22:26 compute-1 ceph-mon[80926]: osdmap e198: 3 total, 3 up, 3 in
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.023 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Successfully updated port: af15c204-50a0-4b32-a3a7-46c9b925ec87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.075 2 DEBUG nova.network.neutron [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.088 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.119 2 DEBUG oslo_concurrency.lockutils [req-d536b2a8-f19e-474d-8b29-84497299674e req-b1d1abb7-57e0-4bbd-85f1-707e3139d19a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.120 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.120 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.390 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:22:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:27.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:27.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:27.618 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1984249346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2108879586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.955 2 DEBUG nova.compute.manager [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.956 2 DEBUG nova.compute.manager [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-af15c204-50a0-4b32-a3a7-46c9b925ec87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:27 compute-1 nova_compute[230518]: 2025-10-02 12:22:27.956 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:29 compute-1 ceph-mon[80926]: pgmap v1330: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 4.7 KiB/s wr, 109 op/s
Oct 02 12:22:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/740931267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:29.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:30 compute-1 ceph-mon[80926]: pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 3.6 KiB/s wr, 83 op/s
Oct 02 12:22:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/408600532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2442339680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:31.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:31 compute-1 podman[250681]: 2025-10-02 12:22:31.803038988 +0000 UTC m=+0.054256800 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:31 compute-1 nova_compute[230518]: 2025-10-02 12:22:31.862 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407736.8599455, 3cc914dc-40b0-4808-aff8-bd8e0c6789b1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:31 compute-1 nova_compute[230518]: 2025-10-02 12:22:31.862 2 INFO nova.compute.manager [-] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] VM Stopped (Lifecycle Event)
Oct 02 12:22:31 compute-1 nova_compute[230518]: 2025-10-02 12:22:31.895 2 DEBUG nova.compute.manager [None req-6f07d140-2de2-473c-9294-6e75e2936381 - - - - - -] [instance: 3cc914dc-40b0-4808-aff8-bd8e0c6789b1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:32 compute-1 nova_compute[230518]: 2025-10-02 12:22:32.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:32 compute-1 ceph-mon[80926]: pgmap v1332: 305 pgs: 305 active+clean; 111 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.0 MiB/s wr, 86 op/s
Oct 02 12:22:32 compute-1 nova_compute[230518]: 2025-10-02 12:22:32.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:33.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:22:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:22:33 compute-1 podman[250700]: 2025-10-02 12:22:33.841178045 +0000 UTC m=+0.097671128 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:34 compute-1 ceph-mon[80926]: pgmap v1333: 305 pgs: 305 active+clean; 161 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.2 MiB/s wr, 77 op/s
Oct 02 12:22:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:35.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:36 compute-1 ceph-mon[80926]: pgmap v1334: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 12:22:37 compute-1 nova_compute[230518]: 2025-10-02 12:22:37.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:37.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:37 compute-1 nova_compute[230518]: 2025-10-02 12:22:37.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:38 compute-1 ceph-mon[80926]: pgmap v1335: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 158 op/s
Oct 02 12:22:39 compute-1 nova_compute[230518]: 2025-10-02 12:22:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:39 compute-1 nova_compute[230518]: 2025-10-02 12:22:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:40 compute-1 nova_compute[230518]: 2025-10-02 12:22:40.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:40 compute-1 ceph-mon[80926]: pgmap v1336: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Oct 02 12:22:40 compute-1 sudo[250727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:40 compute-1 sudo[250727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:40 compute-1 sudo[250727]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:40 compute-1 sudo[250764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:22:40 compute-1 podman[250751]: 2025-10-02 12:22:40.998414213 +0000 UTC m=+0.068316064 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:22:41 compute-1 podman[250752]: 2025-10-02 12:22:41.00180905 +0000 UTC m=+0.073567709 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 12:22:41 compute-1 sudo[250764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-1 sudo[250764]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:41 compute-1 sudo[250819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:41 compute-1 sudo[250819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-1 sudo[250819]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.100 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:41 compute-1 sudo[250844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:22:41 compute-1 sudo[250844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3528556499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094779463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.558 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:41.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:41 compute-1 sudo[250844]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.727 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4703MB free_disk=20.94662857055664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bebdf690-5f58-4227-95e0-add2eae14645 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:22:41 compute-1 nova_compute[230518]: 2025-10-02 12:22:41.961 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:22:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3141264522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.397 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.402 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.435 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.462 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.463 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:42 compute-1 ceph-mon[80926]: pgmap v1337: 305 pgs: 305 active+clean; 180 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 136 op/s
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4094779463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3447753627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3264439520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3141264522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:42 compute-1 nova_compute[230518]: 2025-10-02 12:22:42.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:43 compute-1 nova_compute[230518]: 2025-10-02 12:22:43.464 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:43 compute-1 nova_compute[230518]: 2025-10-02 12:22:43.464 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:22:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3213202268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1356043525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:44 compute-1 ceph-mon[80926]: pgmap v1338: 305 pgs: 305 active+clean; 157 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 127 op/s
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.124 2 DEBUG nova.network.neutron [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.166 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.167 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance network_info: |[{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.169 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.169 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port af15c204-50a0-4b32-a3a7-46c9b925ec87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.187 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start _get_guest_xml network_info=[{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,m
Oct 02 12:22:45 compute-1 nova_compute[230518]: in_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0de00a00-9b68-498b-8bd0-88556bd22393', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0de00a00-9b68-498b-8bd0-88556bd22393', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': '0de00a00-9b68-498b-8bd0-88556bd22393', 'serial': '0de00a00-9b68-498b-8bd0-88556bd22393'}, 'boot_index': 0, 'attachment_id': 'a17f6c4e-3b3e-4e2a-bc65-6e16cb06b24f', 'guest_format': None, 'volume_type': None}, {'mount_device': '/dev/vdb', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b93e6b37-6df0-4c49-81d5-526e5c68b542', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b93e6b37-6df0-4c49-81d5-526e5c68b542', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 
'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': 'b93e6b37-6df0-4c49-81d5-526e5c68b542', 'serial': 'b93e6b37-6df0-4c49-81d5-526e5c68b542'}, 'boot_index': 1, 'attachment_id': 'eac04b36-4077-49c3-86b1-942e2b6eeb26', 'guest_format': None, 'volume_type': None}, {'mount_device': '/dev/vdc', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'bebdf690-5f58-4227-95e0-add2eae14645', 'attached_at': '', 'detached_at': '', 'volume_id': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5', 'serial': 'f3584061-a34e-4c55-a201-ea9e5f60b3e5'}, 'boot_index': 2, 'attachment_id': '54598788-e706-49d2-9e91-e968eea915b1', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.193 2 WARNING nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.200 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.201 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.209 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.210 2 DEBUG nova.virt.libvirt.host [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.211 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.212 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.213 2 DEBUG nova.virt.hardware [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.249 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.254 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:45 compute-1 rsyslogd[1006]: message too long (8192) with configured size 8096, begin of message is: 2025-10-02 12:22:45.187 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694 [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct 02 12:22:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:45.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:22:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2272921810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.681 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.802 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.803 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.804 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.805 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.806 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.807 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.807 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.808 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.808 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.809 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.810 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.811 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.812 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.812 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.813 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.814 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.814 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.815 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.816 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_m
in_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.816 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.817 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.818 2 DEBUG nova.objects.instance [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lazy-loading 'pci_devices' on Instance uuid bebdf690-5f58-4227-95e0-add2eae14645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.840 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <uuid>bebdf690-5f58-4227-95e0-add2eae14645</uuid>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <name>instance-00000030</name>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:name>tempest-device-tagging-server-1257127232</nova:name>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:22:45</nova:creationTime>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:user uuid="b978e493dbdc419e864471708c90b0b4">tempest-TaggedBootDevicesTest-1955030099-project-member</nova:user>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:project uuid="dcab4f3b7c604f47befdd0a52db26eea">tempest-TaggedBootDevicesTest-1955030099</nova:project>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="6a13b8d9-269d-4176-b4c7-693a5e26e74b">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="25721468-4447-4fb7-97f7-e805e64f0267">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.1.1.243" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="8d3881e4-99fe-4bc5-b5ab-5b3f06be6000">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.1.1.231" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="b84676b0-d376-4ced-99fb-08e677046d6f">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.1.1.189" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="c9731d13-4315-4bdc-9d24-a91ce1d8d427">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.1.1.217" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="2ef879b2-3519-40b6-8207-d24b0e1a39de">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <nova:port uuid="af15c204-50a0-4b32-a3a7-46c9b925ec87">
Oct 02 12:22:45 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <system>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="serial">bebdf690-5f58-4227-95e0-add2eae14645</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="uuid">bebdf690-5f58-4227-95e0-add2eae14645</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </system>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <os>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </os>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <features>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </features>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/bebdf690-5f58-4227-95e0-add2eae14645_disk.config">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-0de00a00-9b68-498b-8bd0-88556bd22393">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <serial>0de00a00-9b68-498b-8bd0-88556bd22393</serial>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-b93e6b37-6df0-4c49-81d5-526e5c68b542">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <serial>b93e6b37-6df0-4c49-81d5-526e5c68b542</serial>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-f3584061-a34e-4c55-a201-ea9e5f60b3e5">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </source>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:22:45 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="vdc" bus="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <serial>f3584061-a34e-4c55-a201-ea9e5f60b3e5</serial>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:fb:a0:be"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tap6a13b8d9-26"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:c5:de:0f"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tap25721468-44"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:1e:64:8d"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tap8d3881e4-99"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:6e:55"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tapb84676b0-d3"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:b6:e7:f1"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tapc9731d13-43"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:28:b6:82"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tap2ef879b2-35"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:a3:ce:42"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <target dev="tapaf15c204-50"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/console.log" append="off"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <video>
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </video>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:22:45 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:22:45 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:22:45 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:22:45 compute-1 nova_compute[230518]: </domain>
Oct 02 12:22:45 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.843 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.844 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.844 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.845 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.846 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.847 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.848 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.849 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.850 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Preparing to wait for external event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.851 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.853 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.854 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.855 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.855 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.857 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.858 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.862 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a13b8d9-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.863 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a13b8d9-26, col_values=(('external_ids', {'iface-id': '6a13b8d9-269d-4176-b4c7-693a5e26e74b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:a0:be', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.8667] manager: (tap6a13b8d9-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.873 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.874 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.874 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.875 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.875 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.876 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.880 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25721468-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.881 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25721468-44, col_values=(('external_ids', {'iface-id': '25721468-4447-4fb7-97f7-e805e64f0267', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:de:0f', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.8835] manager: (tap25721468-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.890 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.891 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virti
o',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.892 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.893 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.894 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.895 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d3881e4-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d3881e4-99, col_values=(('external_ids', {'iface-id': '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:64:8d', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.9008] manager: (tap8d3881e4-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.911 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.911 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virti
o',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.912 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.912 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb84676b0-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb84676b0-d3, col_values=(('external_ids', {'iface-id': 'b84676b0-d376-4ced-99fb-08e677046d6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:6e:55', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.9185] manager: (tapb84676b0-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.934 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.935 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virti
o',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.935 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc9731d13-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.939 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc9731d13-43, col_values=(('external_ids', {'iface-id': 'c9731d13-4315-4bdc-9d24-a91ce1d8d427', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:e7:f1', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.9420] manager: (tapc9731d13-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.957 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.958 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.958 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.959 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ef879b2-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ef879b2-35, col_values=(('external_ids', {'iface-id': '2ef879b2-3519-40b6-8207-d24b0e1a39de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:b6:82', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.9640] manager: (tap2ef879b2-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.981 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35')
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.982 2 DEBUG nova.virt.libvirt.vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:22:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.982 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.983 2 DEBUG nova.network.os_vif_util [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.983 2 DEBUG os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.984 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf15c204-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaf15c204-50, col_values=(('external_ids', {'iface-id': 'af15c204-50a0-4b32-a3a7-46c9b925ec87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:ce:42', 'vm-uuid': 'bebdf690-5f58-4227-95e0-add2eae14645'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4097927493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2272921810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/16677038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:22:45 compute-1 NetworkManager[44960]: <info>  [1759407765.9881] manager: (tapaf15c204-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:45 compute-1 nova_compute[230518]: 2025-10-02 12:22:45.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.005 2 INFO os_vif [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50')
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.223 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.224 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.224 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No VIF found with MAC fa:16:3e:fb:a0:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.225 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] No VIF found with MAC fa:16:3e:b6:e7:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.226 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Using config drive
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.314 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.907 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Creating config drive at /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config
Oct 02 12:22:46 compute-1 nova_compute[230518]: 2025-10-02 12:22:46.917 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbm0il_va execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:47 compute-1 nova_compute[230518]: 2025-10-02 12:22:47.071 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbm0il_va" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:47 compute-1 nova_compute[230518]: 2025-10-02 12:22:47.196 2 DEBUG nova.storage.rbd_utils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] rbd image bebdf690-5f58-4227-95e0-add2eae14645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:22:47 compute-1 nova_compute[230518]: 2025-10-02 12:22:47.200 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config bebdf690-5f58-4227-95e0-add2eae14645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:22:47 compute-1 nova_compute[230518]: 2025-10-02 12:22:47.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:47 compute-1 ceph-mon[80926]: pgmap v1339: 305 pgs: 305 active+clean; 134 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 921 KiB/s wr, 114 op/s
Oct 02 12:22:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.190 2 DEBUG oslo_concurrency.processutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config bebdf690-5f58-4227-95e0-add2eae14645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.191 2 INFO nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deleting local config drive /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645/disk.config because it was imported into RBD.
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.2727] manager: (tap6a13b8d9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/90)
Oct 02 12:22:48 compute-1 kernel: tap6a13b8d9-26: entered promiscuous mode
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.2918] manager: (tap25721468-44): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct 02 12:22:48 compute-1 kernel: tap25721468-44: entered promiscuous mode
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00189|binding|INFO|Claiming lport 25721468-4447-4fb7-97f7-e805e64f0267 for this chassis.
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00190|binding|INFO|25721468-4447-4fb7-97f7-e805e64f0267: Claiming fa:16:3e:c5:de:0f 10.1.1.243
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00191|binding|INFO|Claiming lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00192|binding|INFO|6a13b8d9-269d-4176-b4c7-693a5e26e74b: Claiming fa:16:3e:fb:a0:be 10.100.0.9
Oct 02 12:22:48 compute-1 systemd-udevd[251094]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:48 compute-1 systemd-udevd[251097]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3145] manager: (tap8d3881e4-99): new Tun device (/org/freedesktop/NetworkManager/Devices/92)
Oct 02 12:22:48 compute-1 systemd-udevd[251099]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3224] device (tap6a13b8d9-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3242] device (tap6a13b8d9-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.321 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:de:0f 10.1.1.243'], port_security=['fa:16:3e:c5:de:0f 10.1.1.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:cidrs': '10.1.1.243/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=25721468-4447-4fb7-97f7-e805e64f0267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.325 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a0:be 10.100.0.9'], port_security=['fa:16:3e:fb:a0:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a116df-d45e-4936-a295-e45094ee631c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6a13b8d9-269d-4176-b4c7-693a5e26e74b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 25721468-4447-4fb7-97f7-e805e64f0267 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 bound to our chassis
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.330 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3316] device (tap25721468-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3328] device (tap25721468-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3359] manager: (tapb84676b0-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.346 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e659348-d1e5-4d2e-8a32-eba1e39e5d37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.347 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9aed857d-61 in ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.350 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9aed857d-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93267881-7786-41cc-b272-f4703f0fc8bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[663f4d12-5ac2-416f-ba2b-d637d6060202]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3587] manager: (tapc9731d13-43): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.368 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[920c22b4-6be2-4bf8-91ba-1266fe54b0cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.3841] manager: (tap2ef879b2-35): new Tun device (/org/freedesktop/NetworkManager/Devices/95)
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.400 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e55ff10f-fc8f-4d61-b461-569d8e01ccc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4109] manager: (tapaf15c204-50): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Oct 02 12:22:48 compute-1 systemd-udevd[251104]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.424 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updated VIF entry in instance network info cache for port af15c204-50a0-4b32-a3a7-46c9b925ec87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.425 2 DEBUG nova.network.neutron [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:22:48 compute-1 kernel: tap8d3881e4-99: entered promiscuous mode
Oct 02 12:22:48 compute-1 kernel: tap2ef879b2-35: entered promiscuous mode
Oct 02 12:22:48 compute-1 kernel: tapb84676b0-d3: entered promiscuous mode
Oct 02 12:22:48 compute-1 kernel: tapaf15c204-50: entered promiscuous mode
Oct 02 12:22:48 compute-1 kernel: tapc9731d13-43: entered promiscuous mode
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4349] device (tap8d3881e4-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4371] device (tap2ef879b2-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4384] device (tapb84676b0-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4395] device (tapaf15c204-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00193|binding|INFO|Claiming lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00194|binding|INFO|8d3881e4-99fe-4bc5-b5ab-5b3f06be6000: Claiming fa:16:3e:1e:64:8d 10.1.1.231
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00195|binding|INFO|Claiming lport 2ef879b2-3519-40b6-8207-d24b0e1a39de for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00196|binding|INFO|2ef879b2-3519-40b6-8207-d24b0e1a39de: Claiming fa:16:3e:28:b6:82 10.2.2.100
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00197|binding|INFO|Claiming lport af15c204-50a0-4b32-a3a7-46c9b925ec87 for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00198|binding|INFO|af15c204-50a0-4b32-a3a7-46c9b925ec87: Claiming fa:16:3e:a3:ce:42 10.2.2.200
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00199|binding|INFO|Claiming lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00200|binding|INFO|c9731d13-4315-4bdc-9d24-a91ce1d8d427: Claiming fa:16:3e:b6:e7:f1 10.1.1.217
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00201|binding|INFO|Claiming lport b84676b0-d376-4ced-99fb-08e677046d6f for this chassis.
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00202|binding|INFO|b84676b0-d376-4ced-99fb-08e677046d6f: Claiming fa:16:3e:7b:6e:55 10.1.1.189
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4481] device (tapc9731d13-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4495] device (tap8d3881e4-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4509] device (tap2ef879b2-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.449 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff66ed2-da94-4aa5-a618-9b28c33aa3d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.453 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:42 10.2.2.200'], port_security=['fa:16:3e:a3:ce:42 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=af15c204-50a0-4b32-a3a7-46c9b925ec87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.455 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:64:8d 10.1.1.231'], port_security=['fa:16:3e:1e:64:8d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.456 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:6e:55 10.1.1.189'], port_security=['fa:16:3e:7b:6e:55 10.1.1.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.189/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b84676b0-d376-4ced-99fb-08e677046d6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4577] device (tapb84676b0-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.456 2 DEBUG oslo_concurrency.lockutils [req-3b768920-f1a4-4fa0-bcb7-cd2dcbb11852 req-94cb6435-42af-40db-ab3d-76403659aaab 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4582] device (tapaf15c204-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4586] device (tapc9731d13-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00203|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00204|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b up in Southbound
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00205|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00206|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 up in Southbound
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.457 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:e7:f1 10.1.1.217'], port_security=['fa:16:3e:b6:e7:f1 10.1.1.217'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.217/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=c9731d13-4315-4bdc-9d24-a91ce1d8d427) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.459 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b6:82 10.2.2.100'], port_security=['fa:16:3e:28:b6:82 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2ef879b2-3519-40b6-8207-d24b0e1a39de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.462 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fadcfe2f-5413-437d-89d3-83c8f5409367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 systemd-machined[188247]: New machine qemu-27-instance-00000030.
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.4640] manager: (tap9aed857d-60): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.489 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2b34e25b-d875-4504-bd61-68551d96c559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.491 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed48aa7-c237-4eb6-8146-73ffaa77c691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.5078] device (tap9aed857d-60): carrier: link connected
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.511 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ed5e5a-1e79-4b15-8fcd-a250ca4807f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41bda245-eb35-4416-b414-086d5625261b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251148, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 systemd[1]: Started Virtual Machine qemu-27-instance-00000030.
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc8db1-bda3-4456-aa2c-c0c5f3b0759e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:1039'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563206, 'tstamp': 563206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251149, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dc04eb-88a3-4614-b1b8-7463a2f4ea42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251151, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d826962-0d7e-45ab-8c52-aca7b23d4f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00207|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00208|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 up in Southbound
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00209|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00210|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de up in Southbound
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00211|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00212|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f up in Southbound
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00213|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00214|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 up in Southbound
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00215|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 ovn-installed in OVS
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00216|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 up in Southbound
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.647 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[750cd72c-baa1-4000-89be-fbddc98b7948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.648 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 kernel: tap9aed857d-60: entered promiscuous mode
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.652 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_controller[129257]: 2025-10-02T12:22:48Z|00217|binding|INFO|Releasing lport ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08 from this chassis (sb_readonly=0)
Oct 02 12:22:48 compute-1 NetworkManager[44960]: <info>  [1759407768.6548] manager: (tap9aed857d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.672 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76337bd5-c2de-43a8-aa06-28da18354bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.674 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/9aed857d-6573-41ca-b0a5-fcab18195955.pid.haproxy
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:48.674 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'env', 'PROCESS_TAG=haproxy-9aed857d-6573-41ca-b0a5-fcab18195955', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9aed857d-6573-41ca-b0a5-fcab18195955.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.825 2 DEBUG nova.compute.manager [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.825 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG oslo_concurrency.lockutils [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:48 compute-1 nova_compute[230518]: 2025-10-02 12:22:48.826 2 DEBUG nova.compute.manager [req-3191e38b-40a1-4a28-8504-00d49043b570 req-d238769c-1e84-49f1-9109-1acf76c3211b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:49 compute-1 nova_compute[230518]: 2025-10-02 12:22:49.032 2 DEBUG nova.compute.manager [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:49 compute-1 nova_compute[230518]: 2025-10-02 12:22:49.034 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:49 compute-1 nova_compute[230518]: 2025-10-02 12:22:49.035 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:49 compute-1 nova_compute[230518]: 2025-10-02 12:22:49.035 2 DEBUG oslo_concurrency.lockutils [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:49 compute-1 nova_compute[230518]: 2025-10-02 12:22:49.036 2 DEBUG nova.compute.manager [req-c17f7d65-94ff-4e0a-bbe9-be29ff8b239f req-6268c53a-4c16-4705-86ec-710381bc3e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:49 compute-1 ceph-mon[80926]: pgmap v1340: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 12:22:49 compute-1 podman[251188]: 2025-10-02 12:22:49.041930535 +0000 UTC m=+0.026554388 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:49 compute-1 podman[251188]: 2025-10-02 12:22:49.322138654 +0000 UTC m=+0.306762487 container create f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:22:49 compute-1 systemd[1]: Started libpod-conmon-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope.
Oct 02 12:22:49 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:22:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5e0d3911095a7aec742e67ce07e80aafbfc58c9f078a790a45d6aff0e44e5e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:49.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:49 compute-1 podman[251188]: 2025-10-02 12:22:49.608002981 +0000 UTC m=+0.592626854 container init f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:22:49 compute-1 podman[251188]: 2025-10-02 12:22:49.615220629 +0000 UTC m=+0.599844472 container start f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:22:49 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : New worker (251293) forked
Oct 02 12:22:49 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : Loading success.
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.813 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6a13b8d9-269d-4176-b4c7-693a5e26e74b in datapath bce86765-c9ec-46bc-a7a3-317bd0b94198 unbound from our chassis
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.818 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bce86765-c9ec-46bc-a7a3-317bd0b94198
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.833 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[52af74a4-2f5a-4367-9058-68e8d9a4882d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.834 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbce86765-c1 in ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.836 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbce86765-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a33b4014-dc0b-4c2c-8164-b9cec8caa60a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.837 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c65f82fb-ad81-4136-a96d-48a641f5031d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.853 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc293fb-025a-4f24-baa4-3e09ef48c28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0be9e24e-874d-46de-afc1-1c246b2f740a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.921 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2075ae40-ab66-4032-8825-900bd68b9897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d7ea78-e94d-41b0-9cda-9419c2ec63bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 NetworkManager[44960]: <info>  [1759407769.9308] manager: (tapbce86765-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Oct 02 12:22:49 compute-1 systemd-udevd[251139]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.966 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5f639628-f9c6-4fde-8444-ad0362bf6bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.969 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[96a38da3-9e9c-4d60-ba7e-d18558e4c319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:49 compute-1 NetworkManager[44960]: <info>  [1759407769.9882] device (tapbce86765-c0): carrier: link connected
Oct 02 12:22:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:49.995 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[add7973e-db1a-4ad7-9198-4719cc23718e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[669c0999-f357-444d-8c96-95707722914f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce86765-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:67:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563354, 'reachable_time': 20734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251312, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.030 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3720f308-83b8-41c8-9e5b-46b5f6c62997]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:67f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563354, 'tstamp': 563354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251313, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.047 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[06ad60cf-4a0a-4104-b811-1043d9f6d59e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbce86765-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:73:67:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563354, 'reachable_time': 20734, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251314, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.077 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d518ca64-70ae-47a6-adb4-3f7330d2860a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.124 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d5673b62-9133-48c5-bfd7-565028ec65fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.126 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce86765-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.127 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.128 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbce86765-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.130 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407770.130037, bebdf690-5f58-4227-95e0-add2eae14645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:50 compute-1 NetworkManager[44960]: <info>  [1759407770.1327] manager: (tapbce86765-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct 02 12:22:50 compute-1 kernel: tapbce86765-c0: entered promiscuous mode
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.132 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Started (Lifecycle Event)
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.136 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbce86765-c0, col_values=(('external_ids', {'iface-id': 'c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:50 compute-1 ovn_controller[129257]: 2025-10-02T12:22:50Z|00218|binding|INFO|Releasing lport c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad from this chassis (sb_readonly=0)
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.152 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.153 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[662616de-103a-41ba-814c-c7a92e7d94a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.153 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-bce86765-c9ec-46bc-a7a3-317bd0b94198
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/bce86765-c9ec-46bc-a7a3-317bd0b94198.pid.haproxy
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID bce86765-c9ec-46bc-a7a3-317bd0b94198
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:50.154 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'env', 'PROCESS_TAG=haproxy-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bce86765-c9ec-46bc-a7a3-317bd0b94198.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.171 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.175 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407770.1308172, bebdf690-5f58-4227-95e0-add2eae14645 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.175 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Paused (Lifecycle Event)
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.201 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.204 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.228 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:50 compute-1 podman[251347]: 2025-10-02 12:22:50.528661038 +0000 UTC m=+0.029688956 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:50 compute-1 ceph-mon[80926]: pgmap v1341: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 33 op/s
Oct 02 12:22:50 compute-1 podman[251347]: 2025-10-02 12:22:50.911933804 +0000 UTC m=+0.412961702 container create def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:22:50 compute-1 nova_compute[230518]: 2025-10-02 12:22:50.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.050 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.050 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.051 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', 'b84676b0-d376-4ced-99fb-08e677046d6f'), ('network-vif-plugged', 'c9731d13-4315-4bdc-9d24-a91ce1d8d427'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.051 2 WARNING nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with vm_state building and task_state spawning.
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.052 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.053 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', 'c9731d13-4315-4bdc-9d24-a91ce1d8d427'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.054 2 WARNING nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with vm_state building and task_state spawning.
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.054 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.055 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.055 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.056 2 DEBUG oslo_concurrency.lockutils [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.056 2 DEBUG nova.compute.manager [req-4ad0e374-918c-4e40-a971-095020807e01 req-ea2c4481-8edb-4dad-aaea-63bdbf6df9d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:51 compute-1 systemd[1]: Started libpod-conmon-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope.
Oct 02 12:22:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:22:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f364814aae9bbb0a3b489cff6eafe4e2d92e7ede9b143c8dfd14625b67cb54df/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:51 compute-1 podman[251347]: 2025-10-02 12:22:51.231379139 +0000 UTC m=+0.732407097 container init def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:22:51 compute-1 podman[251347]: 2025-10-02 12:22:51.242880961 +0000 UTC m=+0.743908889 container start def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:22:51 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : New worker (251368) forked
Oct 02 12:22:51 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : Loading success.
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.411 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.412 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.412 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.413 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.413 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de'), ('network-vif-plugged', 'af15c204-50a0-4b32-a3a7-46c9b925ec87')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.414 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with vm_state building and task_state spawning.
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.414 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.415 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.415 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.416 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.416 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.417 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.417 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.418 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.418 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.419 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000'), ('network-vif-plugged', '2ef879b2-3519-40b6-8207-d24b0e1a39de')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.419 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with vm_state building and task_state spawning.
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.419 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.421 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.422 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.423 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.424 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.424 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.425 2 DEBUG oslo_concurrency.lockutils [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.425 2 DEBUG nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.426 2 WARNING nova.compute.manager [req-ef72c824-7f12-4054-891a-d782d5e17c35 req-8c223573-cadd-448a-a406-b695ff13930b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with vm_state building and task_state spawning.
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.450 138374 INFO neutron.agent.ovn.metadata.agent [-] Port af15c204-50a0-4b32-a3a7-46c9b925ec87 in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.454 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16f75dae-02da-4559-9be9-2b702ece41dd
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f495a1-7316-400b-8e39-dc4b6e7fab12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.468 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16f75dae-01 in ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.470 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16f75dae-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.470 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f286956-1951-4e87-97d1-e800adff2fc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.471 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6620ddb5-b474-4175-87b5-ff2fb4111f1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.487 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c66ec3e1-3c29-4d16-8256-c49d54e5efff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[105baab9-fec3-4951-bd68-a3168d9f5147]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.535 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2cab047e-f797-4f03-92ca-95fd5b21ab06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 NetworkManager[44960]: <info>  [1759407771.5501] manager: (tap16f75dae-00): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.551 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f23e9c20-9b05-4398-a026-ff75043d47d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 systemd-udevd[251384]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.591 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0ad40a-b254-4b29-9a89-7af48e94fff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:22:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:51 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:51.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.595 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5f6a5e-0773-4b9a-9a09-5a5f246f4ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 NetworkManager[44960]: <info>  [1759407771.6355] device (tap16f75dae-00): carrier: link connected
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.646 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bf522c-42b3-4ca4-ad25-e8d754fa4d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[16500158-b548-4858-8181-6b8e5b7f3ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251403, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.687 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[695ea8e3-fb0c-439d-aa3c-272335dc9dec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:24fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563519, 'tstamp': 563519}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251404, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8083c0be-f101-401b-9ec7-a0a2d284bff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251405, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.747 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2ef4ad-97c0-4e24-9503-d7eec77fcb51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.811 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[980c8bfd-eaa7-4c85-bd2b-226659680475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.812 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.812 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f75dae-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:51 compute-1 kernel: tap16f75dae-00: entered promiscuous mode
Oct 02 12:22:51 compute-1 NetworkManager[44960]: <info>  [1759407771.8176] manager: (tap16f75dae-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.819 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16f75dae-00, col_values=(('external_ids', {'iface-id': '9d789382-766d-4e5f-a412-599c2cbcba28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:51 compute-1 ovn_controller[129257]: 2025-10-02T12:22:51Z|00219|binding|INFO|Releasing lport 9d789382-766d-4e5f-a412-599c2cbcba28 from this chassis (sb_readonly=0)
Oct 02 12:22:51 compute-1 nova_compute[230518]: 2025-10-02 12:22:51.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.851 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.853 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfe67d9-d08e-490b-8091-7858d352b4aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.853 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-16f75dae-02da-4559-9be9-2b702ece41dd
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/16f75dae-02da-4559-9be9-2b702ece41dd.pid.haproxy
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 16f75dae-02da-4559-9be9-2b702ece41dd
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:51.854 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'env', 'PROCESS_TAG=haproxy-16f75dae-02da-4559-9be9-2b702ece41dd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16f75dae-02da-4559-9be9-2b702ece41dd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:22:52 compute-1 nova_compute[230518]: 2025-10-02 12:22:52.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:52 compute-1 podman[251438]: 2025-10-02 12:22:52.172948436 +0000 UTC m=+0.023195022 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:22:52 compute-1 ceph-mon[80926]: pgmap v1342: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 29 KiB/s wr, 69 op/s
Oct 02 12:22:52 compute-1 podman[251438]: 2025-10-02 12:22:52.575671185 +0000 UTC m=+0.425917721 container create 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:22:52 compute-1 systemd[1]: Started libpod-conmon-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope.
Oct 02 12:22:52 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:22:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b15385a9739d6fbf955e02628978d3ec9c263267cf4ee58ffc8102e691a426c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:22:53 compute-1 podman[251438]: 2025-10-02 12:22:53.037221618 +0000 UTC m=+0.887468174 container init 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:22:53 compute-1 podman[251438]: 2025-10-02 12:22:53.047895834 +0000 UTC m=+0.898142370 container start 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:22:53 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : New worker (251460) forked
Oct 02 12:22:53 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : Loading success.
Oct 02 12:22:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.229 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[921303fa-f63b-4652-8b62-5d688600e391]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.244 2 DEBUG nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.244 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG oslo_concurrency.lockutils [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.245 2 DEBUG nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No event matching network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 in dict_keys([('network-vif-plugged', '8d3881e4-99fe-4bc5-b5ab-5b3f06be6000')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.245 2 WARNING nova.compute.manager [req-5d0db901-eec1-4654-beb0-17d30aa69de5 req-64e43953-d879-4dce-be09-90a52e0ca143 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with vm_state building and task_state spawning.
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.258 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8479871a-f4b2-4763-ad50-a52882ea6071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.261 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d820c79a-7ff5-411e-952b-4375cbf8fe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.293 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e7266416-9502-45f5-8185-e38fcba8a4a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a806bb5e-2d6c-471f-b50c-713eea89383a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251474, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[38353cca-f950-4b98-a05a-b8bfb1e2ccb7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251475, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251475, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.324 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.327 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.328 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.329 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b84676b0-d376-4ced-99fb-08e677046d6f in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.331 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.343 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3fce06de-5ef0-488b-a50e-c0743736cfcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.367 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8707686e-db5c-41f0-a041-3587d21f4130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.370 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bde50039-f2e3-44c8-90b1-cc61dde18bec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.395 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd1cd59-aed6-435b-a194-6418f395bbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.418 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f8b50e-f3fe-4908-bec4-157a1588cef7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251481, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9bcefc5b-0b1b-4a0f-9d1f-f6ebb1a14871]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251482, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251482, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.439 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.442 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.443 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.443 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.444 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.445 138374 INFO neutron.agent.ovn.metadata.agent [-] Port c9731d13-4315-4bdc-9d24-a91ce1d8d427 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.447 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9aed857d-6573-41ca-b0a5-fcab18195955
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.464 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0271e82-f070-4485-b9eb-b9d5c87071ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.497 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6d8d1-ca58-468e-8811-fa60f6651a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.499 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d09c2342-a1be-4403-8ed9-e8bde131cd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.525 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1c6046-a05f-4240-b2db-7a592c16ecf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.542 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d9ccbd-0f4d-49b4-ab11-e5629b221fc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9aed857d-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:10:39'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563206, 'reachable_time': 19770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251488, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.558 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c95162d5-b7e1-49fa-a4fd-f6d43ab91979]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563217, 'tstamp': 563217}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251489, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap9aed857d-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563220, 'tstamp': 563220}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251489, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.560 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aed857d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.563 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9aed857d-60, col_values=(('external_ids', {'iface-id': 'ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.564 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.564 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef879b2-3519-40b6-8207-d24b0e1a39de in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.566 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16f75dae-02da-4559-9be9-2b702ece41dd
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[257ad0e8-72ef-4b93-becb-6c50fd51a9f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:22:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:53.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:22:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.610 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa46478-f44d-44e2-88b1-3e949348dab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.613 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff4ac0-7251-431f-acf0-da69218e5cd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.635 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8c1448-1f05-4855-acc9-428b89002c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.655 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d707c88-2f92-4486-9de2-83e09135b318]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16f75dae-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:24:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 306, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563519, 'reachable_time': 33127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251495, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9097f2aa-5563-4d59-abfe-266997914f7e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap16f75dae-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563533, 'tstamp': 563533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251496, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16f75dae-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 563536, 'tstamp': 563536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251496, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.670 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16f75dae-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16f75dae-00, col_values=(('external_ids', {'iface-id': '9d789382-766d-4e5f-a412-599c2cbcba28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:22:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:22:53.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.722 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.722 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.723 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.723 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.724 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Processing event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.724 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.725 2 DEBUG oslo_concurrency.lockutils [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.726 2 DEBUG nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.726 2 WARNING nova.compute.manager [req-a8ee38f9-1e05-4a68-bf6d-e563f72d9257 req-92d7a8bb-27af-4598-8b95-3b5f6b994528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with vm_state building and task_state spawning.
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.727 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.732 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407773.7325814, bebdf690-5f58-4227-95e0-add2eae14645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.733 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Resumed (Lifecycle Event)
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.735 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.740 2 INFO nova.virt.libvirt.driver [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance spawned successfully.
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.741 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.770 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.776 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.782 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.782 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.783 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.783 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.784 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.784 2 DEBUG nova.virt.libvirt.driver [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.824 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.870 2 INFO nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 39.52 seconds to spawn the instance on the hypervisor.
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.870 2 DEBUG nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.951 2 INFO nova.compute.manager [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 46.48 seconds to build instance.
Oct 02 12:22:53 compute-1 nova_compute[230518]: 2025-10-02 12:22:53.983 2 DEBUG oslo_concurrency.lockutils [None req-e0eb95af-c694-4c33-a10b-4d3ef62968f7 b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:22:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e199 e199: 3 total, 3 up, 3 in
Oct 02 12:22:54 compute-1 ceph-mon[80926]: pgmap v1343: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 107 op/s
Oct 02 12:22:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:22:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:55 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:55.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:55 compute-1 nova_compute[230518]: 2025-10-02 12:22:55.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:56 compute-1 ceph-mon[80926]: osdmap e199: 3 total, 3 up, 3 in
Oct 02 12:22:57 compute-1 sudo[251497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:22:57 compute-1 sudo[251497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:57 compute-1 sudo[251497]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:57 compute-1 sudo[251522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:22:57 compute-1 sudo[251522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:22:57 compute-1 sudo[251522]: pam_unix(sudo:session): session closed for user root
Oct 02 12:22:57 compute-1 nova_compute[230518]: 2025-10-02 12:22:57.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:22:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:57.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:22:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:22:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:57 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:57 compute-1 nova_compute[230518]: 2025-10-02 12:22:57.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:57 compute-1 NetworkManager[44960]: <info>  [1759407777.6461] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct 02 12:22:57 compute-1 NetworkManager[44960]: <info>  [1759407777.6474] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct 02 12:22:57 compute-1 ceph-mon[80926]: pgmap v1345: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 119 op/s
Oct 02 12:22:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:22:57 compute-1 nova_compute[230518]: 2025-10-02 12:22:57.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:57 compute-1 ovn_controller[129257]: 2025-10-02T12:22:57Z|00220|binding|INFO|Releasing lport c8e6c95f-23ce-48d2-baf5-eb1177e1a7ad from this chassis (sb_readonly=0)
Oct 02 12:22:57 compute-1 ovn_controller[129257]: 2025-10-02T12:22:57Z|00221|binding|INFO|Releasing lport ead9b5d4-ba41-48d9-8938-c4b0a2b5cd08 from this chassis (sb_readonly=0)
Oct 02 12:22:57 compute-1 ovn_controller[129257]: 2025-10-02T12:22:57Z|00222|binding|INFO|Releasing lport 9d789382-766d-4e5f-a412-599c2cbcba28 from this chassis (sb_readonly=0)
Oct 02 12:22:57 compute-1 nova_compute[230518]: 2025-10-02 12:22:57.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:22:58 compute-1 nova_compute[230518]: 2025-10-02 12:22:58.042 2 DEBUG nova.compute.manager [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:22:58 compute-1 nova_compute[230518]: 2025-10-02 12:22:58.042 2 DEBUG nova.compute.manager [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing instance network info cache due to event network-changed-6a13b8d9-269d-4176-b4c7-693a5e26e74b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:22:58 compute-1 nova_compute[230518]: 2025-10-02 12:22:58.043 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:22:58 compute-1 nova_compute[230518]: 2025-10-02 12:22:58.044 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:22:58 compute-1 nova_compute[230518]: 2025-10-02 12:22:58.044 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Refreshing network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:22:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:22:59 compute-1 ceph-mon[80926]: pgmap v1346: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:22:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:22:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:22:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:22:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:22:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:22:59 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:22:59.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:00 compute-1 nova_compute[230518]: 2025-10-02 12:23:00.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:01 compute-1 ceph-mon[80926]: pgmap v1347: 305 pgs: 305 active+clean; 134 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 19 KiB/s wr, 152 op/s
Oct 02 12:23:01 compute-1 nova_compute[230518]: 2025-10-02 12:23:01.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:01 compute-1 nova_compute[230518]: 2025-10-02 12:23:01.217 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updated VIF entry in instance network info cache for port 6a13b8d9-269d-4176-b4c7-693a5e26e74b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:23:01 compute-1 nova_compute[230518]: 2025-10-02 12:23:01.218 2 DEBUG nova.network.neutron [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:01 compute-1 nova_compute[230518]: 2025-10-02 12:23:01.249 2 DEBUG oslo_concurrency.lockutils [req-3fb53e43-b026-46aa-92bd-d3595dee78b7 req-cc4b1b18-9b7c-4b9c-88ec-faba4433e6d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-bebdf690-5f58-4227-95e0-add2eae14645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:01.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:01.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:02 compute-1 nova_compute[230518]: 2025-10-02 12:23:02.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:02 compute-1 podman[251548]: 2025-10-02 12:23:02.83550454 +0000 UTC m=+0.080575569 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:23:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e200 e200: 3 total, 3 up, 3 in
Oct 02 12:23:03 compute-1 ceph-mon[80926]: pgmap v1348: 305 pgs: 305 active+clean; 141 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 44 KiB/s wr, 152 op/s
Oct 02 12:23:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:03.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:03.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:04 compute-1 ceph-mon[80926]: pgmap v1349: 305 pgs: 305 active+clean; 192 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.9 MiB/s wr, 151 op/s
Oct 02 12:23:04 compute-1 ceph-mon[80926]: osdmap e200: 3 total, 3 up, 3 in
Oct 02 12:23:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e201 e201: 3 total, 3 up, 3 in
Oct 02 12:23:04 compute-1 podman[251570]: 2025-10-02 12:23:04.823868849 +0000 UTC m=+0.078801844 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:23:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:23:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:23:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:05 compute-1 ceph-mon[80926]: osdmap e201: 3 total, 3 up, 3 in
Oct 02 12:23:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:23:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1848899412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:23:05 compute-1 nova_compute[230518]: 2025-10-02 12:23:05.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:07 compute-1 nova_compute[230518]: 2025-10-02 12:23:07.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:07 compute-1 ceph-mon[80926]: pgmap v1352: 305 pgs: 305 active+clean; 202 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.0 MiB/s wr, 150 op/s
Oct 02 12:23:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:07.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:07 compute-1 nova_compute[230518]: 2025-10-02 12:23:07.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:09 compute-1 ceph-mon[80926]: pgmap v1353: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 193 op/s
Oct 02 12:23:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:10 compute-1 nova_compute[230518]: 2025-10-02 12:23:10.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:10 compute-1 ceph-mon[80926]: pgmap v1354: 305 pgs: 305 active+clean; 202 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.7 MiB/s wr, 140 op/s
Oct 02 12:23:11 compute-1 nova_compute[230518]: 2025-10-02 12:23:10.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 e202: 3 total, 3 up, 3 in
Oct 02 12:23:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:11 compute-1 podman[251597]: 2025-10-02 12:23:11.830017773 +0000 UTC m=+0.069910931 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:23:11 compute-1 podman[251596]: 2025-10-02 12:23:11.837378725 +0000 UTC m=+0.083113327 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:12 compute-1 nova_compute[230518]: 2025-10-02 12:23:12.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:12 compute-1 ceph-mon[80926]: pgmap v1355: 305 pgs: 305 active+clean; 206 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 476 KiB/s rd, 2.6 MiB/s wr, 108 op/s
Oct 02 12:23:12 compute-1 ceph-mon[80926]: osdmap e202: 3 total, 3 up, 3 in
Oct 02 12:23:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3526900912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:13 compute-1 nova_compute[230518]: 2025-10-02 12:23:13.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:15 compute-1 ceph-mon[80926]: pgmap v1357: 305 pgs: 305 active+clean; 218 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 2.8 MiB/s wr, 95 op/s
Oct 02 12:23:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:15.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:16 compute-1 nova_compute[230518]: 2025-10-02 12:23:16.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:16 compute-1 ceph-mon[80926]: pgmap v1358: 305 pgs: 305 active+clean; 222 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 398 KiB/s rd, 2.5 MiB/s wr, 86 op/s
Oct 02 12:23:17 compute-1 nova_compute[230518]: 2025-10-02 12:23:17.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:17 compute-1 nova_compute[230518]: 2025-10-02 12:23:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:17 compute-1 ovn_controller[129257]: 2025-10-02T12:23:17Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:e7:f1 10.1.1.217
Oct 02 12:23:17 compute-1 ovn_controller[129257]: 2025-10-02T12:23:17Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:e7:f1 10.1.1.217
Oct 02 12:23:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:17.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:64:8d 10.1.1.231
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:64:8d 10.1.1.231
Oct 02 12:23:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:18 compute-1 ceph-mon[80926]: pgmap v1359: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a3:ce:42 10.2.2.200
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:ce:42 10.2.2.200
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:b6:82 10.2.2.100
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:b6:82 10.2.2.100
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:de:0f 10.1.1.243
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:de:0f 10.1.1.243
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:a0:be 10.100.0.9
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:a0:be 10.100.0.9
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:6e:55 10.1.1.189
Oct 02 12:23:18 compute-1 ovn_controller[129257]: 2025-10-02T12:23:18Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:6e:55 10.1.1.189
Oct 02 12:23:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:19.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:21 compute-1 nova_compute[230518]: 2025-10-02 12:23:21.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:21 compute-1 ceph-mon[80926]: pgmap v1360: 305 pgs: 305 active+clean; 232 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.5 MiB/s wr, 89 op/s
Oct 02 12:23:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/291030701' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3021617743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:21.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:21 compute-1 nova_compute[230518]: 2025-10-02 12:23:21.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-1 nova_compute[230518]: 2025-10-02 12:23:22.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:22.105 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:22.106 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:23:22 compute-1 nova_compute[230518]: 2025-10-02 12:23:22.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:23.109 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:23 compute-1 ceph-mon[80926]: pgmap v1361: 305 pgs: 305 active+clean; 239 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 12:23:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:23.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 e203: 3 total, 3 up, 3 in
Oct 02 12:23:25 compute-1 ceph-mon[80926]: pgmap v1362: 305 pgs: 305 active+clean; 243 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 881 KiB/s wr, 90 op/s
Oct 02 12:23:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.921 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.922 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:26 compute-1 nova_compute[230518]: 2025-10-02 12:23:26.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:27 compute-1 ceph-mon[80926]: osdmap e203: 3 total, 3 up, 3 in
Oct 02 12:23:27 compute-1 ceph-mon[80926]: pgmap v1364: 305 pgs: 305 active+clean; 251 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Oct 02 12:23:27 compute-1 nova_compute[230518]: 2025-10-02 12:23:27.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:28 compute-1 nova_compute[230518]: 2025-10-02 12:23:28.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:28 compute-1 ceph-mon[80926]: pgmap v1365: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:29 compute-1 ovn_controller[129257]: 2025-10-02T12:23:29Z|00223|memory|INFO|peak resident set size grew 51% in last 1664.4 seconds, from 16384 kB to 24748 kB
Oct 02 12:23:29 compute-1 ovn_controller[129257]: 2025-10-02T12:23:29Z|00224|memory|INFO|idl-cells-OVN_Southbound:11061 idl-cells-Open_vSwitch:1326 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:2 lflow-cache-entries-cache-expr:393 lflow-cache-entries-cache-matches:299 lflow-cache-size-KB:1618 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:784 ofctrl_installed_flow_usage-KB:571 ofctrl_sb_flow_ref_usage-KB:290
Oct 02 12:23:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:29.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:30 compute-1 ceph-mon[80926]: pgmap v1366: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Oct 02 12:23:31 compute-1 nova_compute[230518]: 2025-10-02 12:23:31.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:31.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:32 compute-1 nova_compute[230518]: 2025-10-02 12:23:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:32 compute-1 ceph-mon[80926]: pgmap v1367: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Oct 02 12:23:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1782790133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:33.351 138528 DEBUG eventlet.wsgi.server [-] (138528) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:33.352 138528 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: Accept: */*
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: Connection: close
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: Content-Type: text/plain
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: Host: 169.254.169.254
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: User-Agent: curl/7.84.0
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: X-Forwarded-For: 10.100.0.9
Oct 02 12:23:33 compute-1 ovn_metadata_agent[138369]: X-Ovn-Network-Id: bce86765-c9ec-46bc-a7a3-317bd0b94198 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Oct 02 12:23:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:33.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:33.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:33 compute-1 podman[251637]: 2025-10-02 12:23:33.799143971 +0000 UTC m=+0.054081153 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:23:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e204 e204: 3 total, 3 up, 3 in
Oct 02 12:23:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1057627497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:34 compute-1 haproxy-metadata-proxy-bce86765-c9ec-46bc-a7a3-317bd0b94198[251368]: 10.100.0.9:38238 [02/Oct/2025:12:23:33.349] listener listener/metadata 0/0/0/1341/1341 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Oct 02 12:23:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:34.691 138528 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Oct 02 12:23:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:34.691 138528 INFO eventlet.wsgi.server [-] 10.100.0.9,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 1.3390536
Oct 02 12:23:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:35 compute-1 ceph-mon[80926]: pgmap v1368: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct 02 12:23:35 compute-1 ceph-mon[80926]: osdmap e204: 3 total, 3 up, 3 in
Oct 02 12:23:35 compute-1 podman[251656]: 2025-10-02 12:23:35.899480892 +0000 UTC m=+0.136337132 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.100 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.100 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.101 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.103 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Terminating instance
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.104 2 DEBUG nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:23:36 compute-1 kernel: tap6a13b8d9-26 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.1810] device (tap6a13b8d9-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00225|binding|INFO|Releasing lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00226|binding|INFO|Setting lport 6a13b8d9-269d-4176-b4c7-693a5e26e74b down in Southbound
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00227|binding|INFO|Removing iface tap6a13b8d9-26 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.208 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:a0:be 10.100.0.9'], port_security=['fa:16:3e:fb:a0:be 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09a116df-d45e-4936-a295-e45094ee631c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6a13b8d9-269d-4176-b4c7-693a5e26e74b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.209 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6a13b8d9-269d-4176-b4c7-693a5e26e74b in datapath bce86765-c9ec-46bc-a7a3-317bd0b94198 unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.212 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bce86765-c9ec-46bc-a7a3-317bd0b94198, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.214 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[367daa65-91fa-4f0a-b6c7-67fb11f2ed7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.214 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 namespace which is not needed anymore
Oct 02 12:23:36 compute-1 kernel: tap25721468-44 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.2258] device (tap25721468-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00228|binding|INFO|Releasing lport 25721468-4447-4fb7-97f7-e805e64f0267 from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00229|binding|INFO|Setting lport 25721468-4447-4fb7-97f7-e805e64f0267 down in Southbound
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00230|binding|INFO|Removing iface tap25721468-44 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.242 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:de:0f 10.1.1.243'], port_security=['fa:16:3e:c5:de:0f 10.1.1.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:cidrs': '10.1.1.243/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-1274683935', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=25721468-4447-4fb7-97f7-e805e64f0267) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 kernel: tap8d3881e4-99 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.2658] device (tap8d3881e4-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00231|binding|INFO|Releasing lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00232|binding|INFO|Setting lport 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 down in Southbound
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00233|binding|INFO|Removing iface tap8d3881e4-99 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.288 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:64:8d 10.1.1.231'], port_security=['fa:16:3e:1e:64:8d 10.1.1.231'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:cidrs': '10.1.1.231/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest-148297662', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': '517909f5-5ad1-4dd2-b684-855fd2b0ba7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 kernel: tapb84676b0-d3 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.3111] device (tapb84676b0-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00234|binding|INFO|Releasing lport b84676b0-d376-4ced-99fb-08e677046d6f from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00235|binding|INFO|Setting lport b84676b0-d376-4ced-99fb-08e677046d6f down in Southbound
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00236|binding|INFO|Removing iface tapb84676b0-d3 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 kernel: tapc9731d13-43 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.347 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:6e:55 10.1.1.189'], port_security=['fa:16:3e:7b:6e:55 10.1.1.189'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.189/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b84676b0-d376-4ced-99fb-08e677046d6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.3498] device (tapc9731d13-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [NOTICE]   (251366) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [WARNING]  (251366) : Exiting Master process...
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [ALERT]    (251366) : Current worker (251368) exited with code 143 (Terminated)
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198[251362]: [WARNING]  (251366) : All workers exited. Exiting... (0)
Oct 02 12:23:36 compute-1 systemd[1]: libpod-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope: Deactivated successfully.
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00237|binding|INFO|Releasing lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00238|binding|INFO|Setting lport c9731d13-4315-4bdc-9d24-a91ce1d8d427 down in Southbound
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00239|binding|INFO|Removing iface tapc9731d13-43 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 podman[251713]: 2025-10-02 12:23:36.370813452 +0000 UTC m=+0.057916923 container died def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.374 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:e7:f1 10.1.1.217'], port_security=['fa:16:3e:b6:e7:f1 10.1.1.217'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.217/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9aed857d-6573-41ca-b0a5-fcab18195955', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=081069ba-be0d-48ee-adb7-b26de4dcbd97, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=c9731d13-4315-4bdc-9d24-a91ce1d8d427) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 kernel: tap2ef879b2-35 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.3884] device (tap2ef879b2-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-f364814aae9bbb0a3b489cff6eafe4e2d92e7ede9b143c8dfd14625b67cb54df-merged.mount: Deactivated successfully.
Oct 02 12:23:36 compute-1 kernel: tapaf15c204-50 (unregistering): left promiscuous mode
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00240|binding|INFO|Releasing lport 2ef879b2-3519-40b6-8207-d24b0e1a39de from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00241|binding|INFO|Setting lport 2ef879b2-3519-40b6-8207-d24b0e1a39de down in Southbound
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.4250] device (tapaf15c204-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:23:36 compute-1 podman[251713]: 2025-10-02 12:23:36.424979956 +0000 UTC m=+0.112083427 container cleanup def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00242|binding|INFO|Removing iface tap2ef879b2-35 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.433 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:b6:82 10.2.2.100'], port_security=['fa:16:3e:28:b6:82 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=2ef879b2-3519-40b6-8207-d24b0e1a39de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 systemd[1]: libpod-conmon-def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806.scope: Deactivated successfully.
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00243|binding|INFO|Releasing lport af15c204-50a0-4b32-a3a7-46c9b925ec87 from this chassis (sb_readonly=0)
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00244|binding|INFO|Setting lport af15c204-50a0-4b32-a3a7-46c9b925ec87 down in Southbound
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_controller[129257]: 2025-10-02T12:23:36Z|00245|binding|INFO|Removing iface tapaf15c204-50 ovn-installed in OVS
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.474 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:ce:42 10.2.2.200'], port_security=['fa:16:3e:a3:ce:42 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': 'bebdf690-5f58-4227-95e0-add2eae14645', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16f75dae-02da-4559-9be9-2b702ece41dd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dcab4f3b7c604f47befdd0a52db26eea', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb6b2abd-5791-4e81-8257-d871691a47e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f833f0c-43e5-49bc-a978-920c53d86b7d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=af15c204-50a0-4b32-a3a7-46c9b925ec87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:36 compute-1 podman[251771]: 2025-10-02 12:23:36.488489454 +0000 UTC m=+0.041519897 container remove def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct 02 12:23:36 compute-1 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000030.scope: Consumed 18.110s CPU time.
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.500 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93b14681-bfeb-4b4a-8812-7b7bea59eca4]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 (def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806)\ndef4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806\nThu Oct  2 12:23:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 (def4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806)\ndef4e5663605774a15aeb10a37fb9d00bdd2fde12486c1c3f17f7fdaae128806\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 systemd-machined[188247]: Machine qemu-27-instance-00000030 terminated.
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.502 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb7485b-5fc9-436a-923a-f0e01276f6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbce86765-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 kernel: tapbce86765-c0: left promiscuous mode
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.5207] manager: (tap6a13b8d9-26): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.540 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ef39c3ab-377c-4b7f-8951-5302501e1bad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.5449] manager: (tap25721468-44): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3517e4a-9394-4663-9623-5b8b25715f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba10954a-3536-4fa0-b44a-8af75851c0a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.585 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f928997-6db0-4080-a8aa-ec405b75a966]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563347, 'reachable_time': 31014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251840, 'error': None, 'target': 'ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 systemd[1]: run-netns-ovnmeta\x2dbce86765\x2dc9ec\x2d46bc\x2da7a3\x2d317bd0b94198.mount: Deactivated successfully.
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.5935] manager: (tapc9731d13-43): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.592 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bce86765-c9ec-46bc-a7a3-317bd0b94198 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.592 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce2011b-5d78-44cb-b05d-823566e449c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.593 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 25721468-4447-4fb7-97f7-e805e64f0267 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.595 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6630a3a-dbdc-41c1-aaee-9046d8fe582b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 namespace which is not needed anymore
Oct 02 12:23:36 compute-1 NetworkManager[44960]: <info>  [1759407816.5998] manager: (tap2ef879b2-35): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.638 2 INFO nova.virt.libvirt.driver [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Instance destroyed successfully.
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.639 2 DEBUG nova.objects.instance [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lazy-loading 'resources' on Instance uuid bebdf690-5f58-4227-95e0-add2eae14645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.659 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.660 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.661 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.661 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a13b8d9-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.684 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:a0:be,bridge_name='br-int',has_traffic_filtering=True,id=6a13b8d9-269d-4176-b4c7-693a5e26e74b,network=Network(bce86765-c9ec-46bc-a7a3-317bd0b94198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a13b8d9-26')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.684 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.685 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.685 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.686 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25721468-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.703 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:de:0f,bridge_name='br-int',has_traffic_filtering=True,id=25721468-4447-4fb7-97f7-e805e64f0267,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap25721468-44')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.704 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.704 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.705 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.705 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d3881e4-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [NOTICE]   (251291) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : Exiting Master process...
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : Exiting Master process...
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [ALERT]    (251291) : Current worker (251293) exited with code 143 (Terminated)
Oct 02 12:23:36 compute-1 neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955[251264]: [WARNING]  (251291) : All workers exited. Exiting... (0)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.720 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:64:8d,bridge_name='br-int',has_traffic_filtering=True,id=8d3881e4-99fe-4bc5-b5ab-5b3f06be6000,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8d3881e4-99')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.721 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.721 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 systemd[1]: libpod-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope: Deactivated successfully.
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.722 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.722 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.724 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb84676b0-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 podman[251911]: 2025-10-02 12:23:36.730061746 +0000 UTC m=+0.044416129 container died f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.736 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:6e:55,bridge_name='br-int',has_traffic_filtering=True,id=b84676b0-d376-4ced-99fb-08e677046d6f,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb84676b0-d3')
Oct 02 12:23:36 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.737 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.737 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.738 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.738 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc9731d13-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.750 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:e7:f1,bridge_name='br-int',has_traffic_filtering=True,id=c9731d13-4315-4bdc-9d24-a91ce1d8d427,network=Network(9aed857d-6573-41ca-b0a5-fcab18195955),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc9731d13-43')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.751 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.752 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "address": "fa:16:3e:28:b6:82", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ef879b2-35", "ovs_interfaceid": "2ef879b2-3519-40b6-8207-d24b0e1a39de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.752 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.753 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.754 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ef879b2-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-ac5e0d3911095a7aec742e67ce07e80aafbfc58c9f078a790a45d6aff0e44e5e-merged.mount: Deactivated successfully.
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.761 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:b6:82,bridge_name='br-int',has_traffic_filtering=True,id=2ef879b2-3519-40b6-8207-d24b0e1a39de,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ef879b2-35')
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.762 2 DEBUG nova.virt.libvirt.vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:22:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1257127232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-device-tagging-server-1257127232',id=48,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOavUye4jeGnwHeNrekvEEPDHtHlJ96NUHN/iH+ibPkyCFT14/CiWYdQjCUktGUdlWSYNji28IXs/KKwYJF61doQn5gqoo9T/Db3A0IgyKgAQOf4eLmeNwAkRZZUCXV94Q==',key_name='tempest-keypair-254084304',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:22:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dcab4f3b7c604f47befdd0a52db26eea',ramdisk_id='',reservation_id='r-h12tyqet',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='us
b',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest-1955030099',owner_user_name='tempest-TaggedBootDevicesTest-1955030099-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:22:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b978e493dbdc419e864471708c90b0b4',uuid=bebdf690-5f58-4227-95e0-add2eae14645,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.763 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converting VIF {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.763 2 DEBUG nova.network.os_vif_util [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.764 2 DEBUG os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.766 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf15c204-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 podman[251911]: 2025-10-02 12:23:36.76705072 +0000 UTC m=+0.081405123 container cleanup f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.774 2 INFO os_vif [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:ce:42,bridge_name='br-int',has_traffic_filtering=True,id=af15c204-50a0-4b32-a3a7-46c9b925ec87,network=Network(16f75dae-02da-4559-9be9-2b702ece41dd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaf15c204-50')
Oct 02 12:23:36 compute-1 systemd[1]: libpod-conmon-f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5.scope: Deactivated successfully.
Oct 02 12:23:36 compute-1 podman[251963]: 2025-10-02 12:23:36.82901031 +0000 UTC m=+0.039431422 container remove f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7763710-4581-4f22-9ca7-cadc486be2cf]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 (f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5)\nf28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5\nThu Oct  2 12:23:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 (f28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5)\nf28f1929feb897a49292f391a7772ea61765feb2d47b0a86c02e01a45ab756c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[788c2925-63fa-4558-805f-1d22ca4ea717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.837 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aed857d-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:36 compute-1 kernel: tap9aed857d-60: left promiscuous mode
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.842 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83cbc486-d5a1-42c0-9346-998b1889a75b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 nova_compute[230518]: 2025-10-02 12:23:36.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.873 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b08b4816-b18a-4a5c-892a-637cc7315dae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.875 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[226aa36a-c1db-4e37-b29b-03dd8744b4f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.889 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6300970f-afe3-4edd-bd0d-b3944452dd29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563199, 'reachable_time': 15530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251994, 'error': None, 'target': 'ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9aed857d-6573-41ca-b0a5-fcab18195955 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a7536fd9-ce21-47c7-9ed7-baed2d7180c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.891 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.893 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8f3f75-510e-4443-8207-f892e88a4ee5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.894 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b84676b0-d376-4ced-99fb-08e677046d6f in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.895 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2f14f8-9ecf-424b-9da8-bf2135185678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.896 138374 INFO neutron.agent.ovn.metadata.agent [-] Port c9731d13-4315-4bdc-9d24-a91ce1d8d427 in datapath 9aed857d-6573-41ca-b0a5-fcab18195955 unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9aed857d-6573-41ca-b0a5-fcab18195955, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6c8c8b-06cf-40e1-89d4-9dbf5e81096a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.898 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 2ef879b2-3519-40b6-8207-d24b0e1a39de in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.900 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16f75dae-02da-4559-9be9-2b702ece41dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.900 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[123fdf91-247a-4703-aa30-f52b5e85f0f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:36.901 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd namespace which is not needed anymore
Oct 02 12:23:37 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : haproxy version is 2.8.14-c23fe91
Oct 02 12:23:37 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [NOTICE]   (251458) : path to executable is /usr/sbin/haproxy
Oct 02 12:23:37 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [WARNING]  (251458) : Exiting Master process...
Oct 02 12:23:37 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [ALERT]    (251458) : Current worker (251460) exited with code 143 (Terminated)
Oct 02 12:23:37 compute-1 neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd[251454]: [WARNING]  (251458) : All workers exited. Exiting... (0)
Oct 02 12:23:37 compute-1 systemd[1]: libpod-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope: Deactivated successfully.
Oct 02 12:23:37 compute-1 podman[252013]: 2025-10-02 12:23:37.014549418 +0000 UTC m=+0.040933929 container died 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:23:37 compute-1 podman[252013]: 2025-10-02 12:23:37.046368499 +0000 UTC m=+0.072753010 container cleanup 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:23:37 compute-1 systemd[1]: libpod-conmon-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422.scope: Deactivated successfully.
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.072 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG oslo_concurrency.lockutils [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.073 2 DEBUG nova.compute.manager [req-43b26edd-4c22-44b1-81e1-1ef9d9e84d93 req-0d20bbbd-1a6b-427a-bc5d-40767b321cca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:37 compute-1 podman[252044]: 2025-10-02 12:23:37.099680116 +0000 UTC m=+0.035569060 container remove 7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[956f3f3d-a901-43fd-84e0-36cb7b41b071]: (4, ('Thu Oct  2 12:23:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd (7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422)\n7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422\nThu Oct  2 12:23:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd (7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422)\n7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d86a73fd-d575-43d1-bd7e-564fe89d1b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.111 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16f75dae-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:37 compute-1 kernel: tap16f75dae-00: left promiscuous mode
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0507c919-7f88-443e-bb62-33e2f2940356]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.136 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88192782-79aa-4e19-8765-d9fa9fcedba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.137 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd05c9d1-c800-40dd-877e-3f9db24a136b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.158 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cdb00a4-79e1-4d05-9432-cd95ec9d7c97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 563508, 'reachable_time': 18093, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252059, 'error': None, 'target': 'ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.160 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16f75dae-02da-4559-9be9-2b702ece41dd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.160 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fd11c1eb-7de8-4145-bddf-de8f83b9f13a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.161 138374 INFO neutron.agent.ovn.metadata.agent [-] Port af15c204-50a0-4b32-a3a7-46c9b925ec87 in datapath 16f75dae-02da-4559-9be9-2b702ece41dd unbound from our chassis
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.163 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16f75dae-02da-4559-9be9-2b702ece41dd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:23:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:37.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff275c0-198f-4d03-920f-8bd89e3e3134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.278 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG oslo_concurrency.lockutils [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.279 2 DEBUG nova.compute.manager [req-9542265a-29d0-4027-9900-6f98b2727580 req-4ce50bbd-59e0-48ff-b088-9edb4af234ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-b15385a9739d6fbf955e02628978d3ec9c263267cf4ee58ffc8102e691a426c5-merged.mount: Deactivated successfully.
Oct 02 12:23:37 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f14478999350e8a4059bd184b28f20048f6370b3b478789587375fede144422-userdata-shm.mount: Deactivated successfully.
Oct 02 12:23:37 compute-1 systemd[1]: run-netns-ovnmeta\x2d16f75dae\x2d02da\x2d4559\x2d9be9\x2d2b702ece41dd.mount: Deactivated successfully.
Oct 02 12:23:37 compute-1 systemd[1]: run-netns-ovnmeta\x2d9aed857d\x2d6573\x2d41ca\x2db0a5\x2dfcab18195955.mount: Deactivated successfully.
Oct 02 12:23:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:37.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:37 compute-1 ceph-mon[80926]: pgmap v1370: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 1.6 MiB/s wr, 89 op/s
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.856 2 INFO nova.virt.libvirt.driver [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deleting instance files /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645_del
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.857 2 INFO nova.virt.libvirt.driver [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deletion of /var/lib/nova/instances/bebdf690-5f58-4227-95e0-add2eae14645_del complete
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.990 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 1.89 seconds to destroy the instance on the hypervisor.
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.991 2 DEBUG oslo.service.loopingcall [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.991 2 DEBUG nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:23:37 compute-1 nova_compute[230518]: 2025-10-02 12:23:37.992 2 DEBUG nova.network.neutron [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:23:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.226 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.227 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.228 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-6a13b8d9-269d-4176-b4c7-693a5e26e74b for instance with vm_state active and task_state deleting.
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.228 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.229 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-b84676b0-d376-4ced-99fb-08e677046d6f for instance with vm_state active and task_state deleting.
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.230 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.231 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG oslo_concurrency.lockutils [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.232 2 DEBUG nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.232 2 WARNING nova.compute.manager [req-36f1d73c-8b8e-4c73-a675-3e7ffb2b44ac req-9e8f88b2-0434-4883-9445-50964d777997 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-c9731d13-4315-4bdc-9d24-a91ce1d8d427 for instance with vm_state active and task_state deleting.
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.445 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.445 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.446 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.446 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.447 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.447 2 WARNING nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-25721468-4447-4fb7-97f7-e805e64f0267 for instance with vm_state active and task_state deleting.
Oct 02 12:23:39 compute-1 ceph-mon[80926]: pgmap v1371: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.448 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.448 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.449 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.450 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.450 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.451 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.451 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.452 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.452 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.452 2 WARNING nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-8d3881e4-99fe-4bc5-b5ab-5b3f06be6000 for instance with vm_state active and task_state deleting.
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.453 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.453 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG oslo_concurrency.lockutils [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.454 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:39 compute-1 nova_compute[230518]: 2025-10-02 12:23:39.455 2 DEBUG nova.compute.manager [req-cc44d0f8-3803-48e9-a9ae-29a4a17485a1 req-d8916180-ca46-4709-91ad-066ecf478159 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:39.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:39.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:40 compute-1 nova_compute[230518]: 2025-10-02 12:23:40.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:40 compute-1 nova_compute[230518]: 2025-10-02 12:23:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.265 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.266 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:41 compute-1 ceph-mon[80926]: pgmap v1372: 305 pgs: 305 active+clean; 267 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 22 KiB/s wr, 58 op/s
Oct 02 12:23:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:41.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/550051481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.721 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.877 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4685MB free_disk=20.942630767822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:41 compute-1 nova_compute[230518]: 2025-10-02 12:23:41.878 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.036 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance bebdf690-5f58-4227-95e0-add2eae14645 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.037 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.037 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.211 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.212 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.213 2 WARNING nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-2ef879b2-3519-40b6-8207-d24b0e1a39de for instance with vm_state active and task_state deleting.
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.213 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-unplugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "bebdf690-5f58-4227-95e0-add2eae14645-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.214 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.215 2 DEBUG oslo_concurrency.lockutils [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.215 2 DEBUG nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] No waiting events found dispatching network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.215 2 WARNING nova.compute.manager [req-529deb27-00c8-4db0-beb0-748279104ecd req-0bb99d4b-fc76-4a15-a509-ac15d22e51fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received unexpected event network-vif-plugged-af15c204-50a0-4b32-a3a7-46c9b925ec87 for instance with vm_state active and task_state deleting.
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 e205: 3 total, 3 up, 3 in
Oct 02 12:23:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3492679320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.515 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.521 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.592 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.727 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:23:42 compute-1 nova_compute[230518]: 2025-10-02 12:23:42.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:42 compute-1 podman[252105]: 2025-10-02 12:23:42.837680168 +0000 UTC m=+0.077809409 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:23:42 compute-1 podman[252106]: 2025-10-02 12:23:42.838194174 +0000 UTC m=+0.079236504 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:23:43 compute-1 ceph-mon[80926]: pgmap v1373: 305 pgs: 305 active+clean; 276 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 568 KiB/s wr, 65 op/s
Oct 02 12:23:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/85177071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/550051481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1889210619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3492679320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:43 compute-1 ceph-mon[80926]: osdmap e205: 3 total, 3 up, 3 in
Oct 02 12:23:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.339 2 DEBUG nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-2ef879b2-3519-40b6-8207-d24b0e1a39de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.340 2 INFO nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface 2ef879b2-3519-40b6-8207-d24b0e1a39de; detaching it from the instance and deleting it from the info cache
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.340 2 DEBUG nova.network.neutron [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "address": "fa:16:3e:fb:a0:be", "network": {"id": "bce86765-c9ec-46bc-a7a3-317bd0b94198", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest-349866377-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a13b8d9-26", "ovs_interfaceid": "6a13b8d9-269d-4176-b4c7-693a5e26e74b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.395 2 DEBUG nova.compute.manager [req-3dde3b70-988a-4ca1-a3a6-396c815c9a10 req-2a792338-cabc-4b32-a1db-ade24eaeeb27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=2ef879b2-3519-40b6-8207-d24b0e1a39de, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.729 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.757 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.758 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:44 compute-1 nova_compute[230518]: 2025-10-02 12:23:44.759 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:23:44 compute-1 ceph-mon[80926]: pgmap v1375: 305 pgs: 305 active+clean; 263 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 90 op/s
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.109 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.110 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.149 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.270 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.271 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.278 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.278 2 INFO nova.compute.claims [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.475 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:45.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3521193034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.927 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.932 2 DEBUG nova.compute.provider_tree [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.959 2 DEBUG nova.scheduler.client.report [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.987 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:45 compute-1 nova_compute[230518]: 2025-10-02 12:23:45.987 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.064 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.064 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.102 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.137 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.253 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.254 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.255 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating image(s)
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.289 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:46 compute-1 ceph-mon[80926]: pgmap v1376: 305 pgs: 305 active+clean; 234 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:23:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/941580738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1127619204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3521193034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.325 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.357 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.361 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.442 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.443 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.443 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.444 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.473 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.477 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c1597192-3527-4620-a21f-0e71c9c1c09d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.509 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-6a13b8d9-269d-4176-b4c7-693a5e26e74b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.509 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface 6a13b8d9-269d-4176-b4c7-693a5e26e74b; detaching it from the instance and deleting it from the info cache
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.509 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "address": "fa:16:3e:b6:e7:f1", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.217", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc9731d13-43", "ovs_interfaceid": "c9731d13-4315-4bdc-9d24-a91ce1d8d427", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.571 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=6a13b8d9-269d-4176-b4c7-693a5e26e74b, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.571 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-c9731d13-4315-4bdc-9d24-a91ce1d8d427 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.571 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface c9731d13-4315-4bdc-9d24-a91ce1d8d427; detaching it from the instance and deleting it from the info cache
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.572 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "b84676b0-d376-4ced-99fb-08e677046d6f", "address": "fa:16:3e:7b:6e:55", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.189", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb84676b0-d3", "ovs_interfaceid": "b84676b0-d376-4ced-99fb-08e677046d6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.595 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=c9731d13-4315-4bdc-9d24-a91ce1d8d427, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.595 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-b84676b0-d376-4ced-99fb-08e677046d6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.596 2 INFO nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Neutron deleted interface b84676b0-d376-4ced-99fb-08e677046d6f; detaching it from the instance and deleting it from the info cache
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.596 2 DEBUG nova.network.neutron [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [{"id": "25721468-4447-4fb7-97f7-e805e64f0267", "address": "fa:16:3e:c5:de:0f", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25721468-44", "ovs_interfaceid": "25721468-4447-4fb7-97f7-e805e64f0267", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "address": "fa:16:3e:1e:64:8d", "network": {"id": "9aed857d-6573-41ca-b0a5-fcab18195955", "bridge": "br-int", "label": "tempest-device-tagging-net1-1256445519", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.231", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d3881e4-99", "ovs_interfaceid": "8d3881e4-99fe-4bc5-b5ab-5b3f06be6000", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "address": "fa:16:3e:a3:ce:42", "network": {"id": "16f75dae-02da-4559-9be9-2b702ece41dd", "bridge": "br-int", "label": "tempest-device-tagging-net2-1462086554", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dcab4f3b7c604f47befdd0a52db26eea", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf15c204-50", "ovs_interfaceid": "af15c204-50a0-4b32-a3a7-46c9b925ec87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.632 2 DEBUG nova.compute.manager [req-5ceaadc3-b0ce-4cbc-a032-df5c74dfe7a2 req-558ceefc-5503-4845-8ac5-0dead30657dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Detach interface failed, port_id=b84676b0-d376-4ced-99fb-08e677046d6f, reason: Instance bebdf690-5f58-4227-95e0-add2eae14645 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.751 2 DEBUG nova.policy [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'afacfeac9efc4e6fbb83ebe4fe9a8f38', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:23:46 compute-1 nova_compute[230518]: 2025-10-02 12:23:46.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.170 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.171 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.171 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.592 2 DEBUG nova.network.neutron [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.647 2 INFO nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 9.65 seconds to deallocate network for instance.
Oct 02 12:23:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:47.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:47 compute-1 nova_compute[230518]: 2025-10-02 12:23:47.970 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Successfully created port: a27d28bd-0aac-45b1-9b85-fa648038cccc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:23:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3942404161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2051029789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3580906529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.589 2 INFO nova.compute.manager [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Took 0.94 seconds to detach 3 volumes for instance.
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.609 2 DEBUG nova.compute.manager [req-a7073526-4c17-42ca-8995-d797c7f6a4b7 req-872e9e0e-1d08-46ad-a612-94b08c748be2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Received event network-vif-deleted-af15c204-50a0-4b32-a3a7-46c9b925ec87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.650 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.651 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.687 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c1597192-3527-4620-a21f-0e71c9c1c09d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.800 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] resizing rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.855 2 DEBUG oslo_concurrency.processutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.967 2 DEBUG nova.objects.instance [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'migration_context' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.990 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.991 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Ensure instance console log exists: /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.991 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.992 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:48 compute-1 nova_compute[230518]: 2025-10-02 12:23:48.993 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.294 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Successfully updated port: a27d28bd-0aac-45b1-9b85-fa648038cccc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:23:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:23:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/334952972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquired lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.314 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.317 2 DEBUG oslo_concurrency.processutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.322 2 DEBUG nova.compute.provider_tree [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.343 2 DEBUG nova.scheduler.client.report [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.369 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.403 2 INFO nova.scheduler.client.report [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Deleted allocations for instance bebdf690-5f58-4227-95e0-add2eae14645
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.460 2 DEBUG oslo_concurrency.lockutils [None req-6eb9c9cc-f83e-40bd-b499-995d9fcac2eb b978e493dbdc419e864471708c90b0b4 dcab4f3b7c604f47befdd0a52db26eea - - default default] Lock "bebdf690-5f58-4227-95e0-add2eae14645" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:49 compute-1 nova_compute[230518]: 2025-10-02 12:23:49.512 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:23:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:49.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:23:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:23:49 compute-1 ceph-mon[80926]: pgmap v1377: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/334952972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.691 2 DEBUG nova.network.neutron [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.769 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-changed-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.769 2 DEBUG nova.compute.manager [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Refreshing instance network info cache due to event network-changed-a27d28bd-0aac-45b1-9b85-fa648038cccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.770 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.802 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Releasing lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.803 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance network_info: |[{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.804 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.805 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Refreshing network info cache for port a27d28bd-0aac-45b1-9b85-fa648038cccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.810 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start _get_guest_xml network_info=[{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.815 2 WARNING nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.837 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.838 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.841 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.841 2 DEBUG nova.virt.libvirt.host [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.843 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.843 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.844 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.845 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.846 2 DEBUG nova.virt.hardware [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:23:50 compute-1 nova_compute[230518]: 2025-10-02 12:23:50.851 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1853136177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:51 compute-1 ceph-mon[80926]: pgmap v1378: 305 pgs: 305 active+clean; 206 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 2.8 MiB/s wr, 88 op/s
Oct 02 12:23:51 compute-1 nova_compute[230518]: 2025-10-02 12:23:51.637 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407816.6360064, bebdf690-5f58-4227-95e0-add2eae14645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:51 compute-1 nova_compute[230518]: 2025-10-02 12:23:51.638 2 INFO nova.compute.manager [-] [instance: bebdf690-5f58-4227-95e0-add2eae14645] VM Stopped (Lifecycle Event)
Oct 02 12:23:51 compute-1 nova_compute[230518]: 2025-10-02 12:23:51.661 2 DEBUG nova.compute.manager [None req-b177e91f-18f6-43b9-bf0d-e74fe8557074 - - - - - -] [instance: bebdf690-5f58-4227-95e0-add2eae14645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:51 compute-1 nova_compute[230518]: 2025-10-02 12:23:51.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.113 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.147 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.152 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:23:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3028286666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.673 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.675 2 DEBUG nova.virt.libvirt.vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:46Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.675 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.676 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.677 2 DEBUG nova.objects.instance [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'pci_devices' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.696 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <uuid>c1597192-3527-4620-a21f-0e71c9c1c09d</uuid>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <name>instance-00000036</name>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:name>tempest-ImagesTestJSON-server-984879556</nova:name>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:23:50</nova:creationTime>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:user uuid="afacfeac9efc4e6fbb83ebe4fe9a8f38">tempest-ImagesTestJSON-1681256609-project-member</nova:user>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:project uuid="d0ebb2827cb241e499606ce3a3c67d24">tempest-ImagesTestJSON-1681256609</nova:project>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <nova:port uuid="a27d28bd-0aac-45b1-9b85-fa648038cccc">
Oct 02 12:23:52 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <system>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="serial">c1597192-3527-4620-a21f-0e71c9c1c09d</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="uuid">c1597192-3527-4620-a21f-0e71c9c1c09d</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </system>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <os>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </os>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <features>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </features>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk">
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config">
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:23:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:bb:14:6c"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <target dev="tapa27d28bd-0a"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/console.log" append="off"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <video>
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </video>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:23:52 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:23:52 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:23:52 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:23:52 compute-1 nova_compute[230518]: </domain>
Oct 02 12:23:52 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Preparing to wait for external event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.698 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.699 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.699 2 DEBUG nova.virt.libvirt.vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:23:46Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.700 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.700 2 DEBUG nova.network.os_vif_util [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.701 2 DEBUG os_vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa27d28bd-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa27d28bd-0a, col_values=(('external_ids', {'iface-id': 'a27d28bd-0aac-45b1-9b85-fa648038cccc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:14:6c', 'vm-uuid': 'c1597192-3527-4620-a21f-0e71c9c1c09d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 NetworkManager[44960]: <info>  [1759407832.7095] manager: (tapa27d28bd-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.714 2 INFO os_vif [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a')
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.886 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.886 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.887 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No VIF found with MAC fa:16:3e:bb:14:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.887 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Using config drive
Oct 02 12:23:52 compute-1 nova_compute[230518]: 2025-10-02 12:23:52.911 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:53 compute-1 ceph-mon[80926]: pgmap v1379: 305 pgs: 305 active+clean; 250 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1853136177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3570288810' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.341 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Creating config drive at /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.350 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl2zi_8t_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.504 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl2zi_8t_" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.548 2 DEBUG nova.storage.rbd_utils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] rbd image c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.554 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.589 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updated VIF entry in instance network info cache for port a27d28bd-0aac-45b1-9b85-fa648038cccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.591 2 DEBUG nova.network.neutron [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [{"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:23:53 compute-1 nova_compute[230518]: 2025-10-02 12:23:53.622 2 DEBUG oslo_concurrency.lockutils [req-79b3c5c9-9141-45c2-8c8b-ad682650e869 req-10c265db-99bc-4cb4-959b-a1e681853e1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c1597192-3527-4620-a21f-0e71c9c1c09d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:23:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:53.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:23:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:53.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:23:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2916462947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3028286666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:23:54 compute-1 ceph-mon[80926]: pgmap v1380: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 4.1 MiB/s wr, 107 op/s
Oct 02 12:23:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:56 compute-1 nova_compute[230518]: 2025-10-02 12:23:56.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:56 compute-1 ceph-mon[80926]: pgmap v1381: 305 pgs: 305 active+clean; 280 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 12:23:56 compute-1 nova_compute[230518]: 2025-10-02 12:23:56.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:56 compute-1 nova_compute[230518]: 2025-10-02 12:23:56.990 2 DEBUG oslo_concurrency.processutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config c1597192-3527-4620-a21f-0e71c9c1c09d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:23:56 compute-1 nova_compute[230518]: 2025-10-02 12:23:56.991 2 INFO nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deleting local config drive /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d/disk.config because it was imported into RBD.
Oct 02 12:23:57 compute-1 kernel: tapa27d28bd-0a: entered promiscuous mode
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.0505] manager: (tapa27d28bd-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 ovn_controller[129257]: 2025-10-02T12:23:57Z|00246|binding|INFO|Claiming lport a27d28bd-0aac-45b1-9b85-fa648038cccc for this chassis.
Oct 02 12:23:57 compute-1 ovn_controller[129257]: 2025-10-02T12:23:57Z|00247|binding|INFO|a27d28bd-0aac-45b1-9b85-fa648038cccc: Claiming fa:16:3e:bb:14:6c 10.100.0.3
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.063 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:14:6c 10.100.0.3'], port_security=['fa:16:3e:bb:14:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c1597192-3527-4620-a21f-0e71c9c1c09d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '2', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a27d28bd-0aac-45b1-9b85-fa648038cccc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.065 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a27d28bd-0aac-45b1-9b85-fa648038cccc in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 bound to our chassis
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.067 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.078 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0d16d9-c3ea-4be4-b57a-ff7d4f3eaa59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.079 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd68ff9e0-a1 in ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.081 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd68ff9e0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.081 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8bebe24b-22f8-4f6a-b8c4-3fc725984841]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c32f6b4e-b039-41c8-87bd-55e303e652d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 systemd-udevd[252493]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.093 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1747ef6f-8d89-46d6-acfe-972345d7e263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 systemd-machined[188247]: New machine qemu-28-instance-00000036.
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.0982] device (tapa27d28bd-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.0995] device (tapa27d28bd-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:23:57 compute-1 systemd[1]: Started Virtual Machine qemu-28-instance-00000036.
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.117 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[77915704-3108-4d50-a1e0-ac51498f3f1f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_controller[129257]: 2025-10-02T12:23:57Z|00248|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc ovn-installed in OVS
Oct 02 12:23:57 compute-1 ovn_controller[129257]: 2025-10-02T12:23:57Z|00249|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc up in Southbound
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.142 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3866b2-8ff9-4ed0-98cf-e5b39af3645c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.161 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2a45c88a-692d-4d0f-b5cd-94f55ca37527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.1622] manager: (tapd68ff9e0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.190 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[05cb5af5-21b7-4e51-a1bf-01f3840648c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.193 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f91c1c-0110-4862-9f41-326344915df9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.2199] device (tapd68ff9e0-a0): carrier: link connected
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.225 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c338eaa-193e-4722-b0d3-bf6523353ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.240 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[596d6cdd-55c2-4266-a88d-ee3c39b8fea2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570077, 'reachable_time': 26339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252526, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01c2d87a-2a35-4be9-95a2-31ecd9eeeb6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:d99e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570077, 'tstamp': 570077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252527, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.273 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8818ad9-451b-47b7-860f-9eaa1932db07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd68ff9e0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:d9:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570077, 'reachable_time': 26339, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252528, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae80e06-fc08-4aa3-8955-1d7bc18e63a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab6f3abf-c65c-437b-af8e-0453604b95b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.361 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd68ff9e0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 kernel: tapd68ff9e0-a0: entered promiscuous mode
Oct 02 12:23:57 compute-1 NetworkManager[44960]: <info>  [1759407837.3643] manager: (tapd68ff9e0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 sudo[252532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:57 compute-1 sudo[252532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd68ff9e0-a0, col_values=(('external_ids', {'iface-id': 'c0382cb4-7e26-44bc-8951-80e73f21067a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 ovn_controller[129257]: 2025-10-02T12:23:57Z|00250|binding|INFO|Releasing lport c0382cb4-7e26-44bc-8951-80e73f21067a from this chassis (sb_readonly=0)
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 sudo[252532]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.385 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f27569d0-dd12-41e4-a54e-e08e510fdf19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.387 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/d68ff9e0-aff2-4eda-8590-74da7cfc5671.pid.haproxy
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID d68ff9e0-aff2-4eda-8590-74da7cfc5671
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:23:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:23:57.387 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'env', 'PROCESS_TAG=haproxy-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d68ff9e0-aff2-4eda-8590-74da7cfc5671.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:23:57 compute-1 sudo[252560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:23:57 compute-1 sudo[252560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-1 sudo[252560]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-1 sudo[252588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:23:57 compute-1 sudo[252588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-1 sudo[252588]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:57 compute-1 sudo[252613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:23:57 compute-1 sudo[252613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:23:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:57.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:57 compute-1 nova_compute[230518]: 2025-10-02 12:23:57.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:23:57 compute-1 podman[252660]: 2025-10-02 12:23:57.776047747 +0000 UTC m=+0.080225255 container create 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:23:57 compute-1 systemd[1]: Started libpod-conmon-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope.
Oct 02 12:23:57 compute-1 podman[252660]: 2025-10-02 12:23:57.736750381 +0000 UTC m=+0.040927879 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:23:57 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:23:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db8253beedaf47a6624a8aab6d6b150b50161b15aea48a77330b21dba701fe6f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:23:57 compute-1 podman[252660]: 2025-10-02 12:23:57.876035013 +0000 UTC m=+0.180212521 container init 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:23:57 compute-1 podman[252660]: 2025-10-02 12:23:57.88355317 +0000 UTC m=+0.187730658 container start 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:23:57 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : New worker (252697) forked
Oct 02 12:23:57 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : Loading success.
Oct 02 12:23:58 compute-1 sudo[252613]: pam_unix(sudo:session): session closed for user root
Oct 02 12:23:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:23:58 compute-1 ceph-mon[80926]: pgmap v1382: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 160 op/s
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.150 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.1495578, c1597192-3527-4620-a21f-0e71c9c1c09d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.151 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Started (Lifecycle Event)
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.201 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.210 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.150226, c1597192-3527-4620-a21f-0e71c9c1c09d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Paused (Lifecycle Event)
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.354 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.359 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.426 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.580 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.581 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.581 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Processing event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.582 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.583 2 DEBUG oslo_concurrency.lockutils [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.584 2 DEBUG nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.584 2 WARNING nova.compute.manager [req-3f23de67-0d72-48f6-a713-cd3ba874deb3 req-9c279403-8d12-4d2d-acbc-33c97d5ededc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received unexpected event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with vm_state building and task_state spawning.
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.585 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.589 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407839.5880425, c1597192-3527-4620-a21f-0e71c9c1c09d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.589 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Resumed (Lifecycle Event)
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.591 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.595 2 INFO nova.virt.libvirt.driver [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance spawned successfully.
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.596 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.625 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.629 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.630 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.631 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.631 2 DEBUG nova.virt.libvirt.driver [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.635 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.663 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:23:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:23:59.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:23:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:23:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:23:59.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.702 2 INFO nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 13.45 seconds to spawn the instance on the hypervisor.
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.703 2 DEBUG nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.788 2 INFO nova.compute.manager [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 14.54 seconds to build instance.
Oct 02 12:23:59 compute-1 nova_compute[230518]: 2025-10-02 12:23:59.826 2 DEBUG oslo_concurrency.lockutils [None req-5ca81411-9eb4-458f-83ef-cc5bce8dfe23 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3523132386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:01 compute-1 anacron[1073]: Job `cron.monthly' started
Oct 02 12:24:01 compute-1 anacron[1073]: Job `cron.monthly' terminated
Oct 02 12:24:01 compute-1 anacron[1073]: Normal exit (3 jobs run)
Oct 02 12:24:01 compute-1 ceph-mon[80926]: pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 143 op/s
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1269643199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:24:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:01.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:01 compute-1 nova_compute[230518]: 2025-10-02 12:24:01.844 2 DEBUG nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:01 compute-1 nova_compute[230518]: 2025-10-02 12:24:01.924 2 INFO nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] instance snapshotting
Oct 02 12:24:02 compute-1 nova_compute[230518]: 2025-10-02 12:24:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:02 compute-1 nova_compute[230518]: 2025-10-02 12:24:02.275 2 INFO nova.virt.libvirt.driver [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Beginning live snapshot process
Oct 02 12:24:02 compute-1 nova_compute[230518]: 2025-10-02 12:24:02.495 2 DEBUG nova.virt.libvirt.imagebackend [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:24:02 compute-1 ceph-mon[80926]: pgmap v1384: 305 pgs: 305 active+clean; 281 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 157 op/s
Oct 02 12:24:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1954818865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:02 compute-1 nova_compute[230518]: 2025-10-02 12:24:02.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:02 compute-1 nova_compute[230518]: 2025-10-02 12:24:02.911 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(de7f12910f6e4a0f8d9cc3e61ebe2d70) on rbd image(c1597192-3527-4620-a21f-0e71c9c1c09d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:24:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:03.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 e206: 3 total, 3 up, 3 in
Oct 02 12:24:04 compute-1 podman[252816]: 2025-10-02 12:24:04.804118272 +0000 UTC m=+0.056583281 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:24:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:06 compute-1 ceph-mon[80926]: pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.5 MiB/s wr, 207 op/s
Oct 02 12:24:06 compute-1 podman[252835]: 2025-10-02 12:24:06.824891838 +0000 UTC m=+0.078463029 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct 02 12:24:07 compute-1 nova_compute[230518]: 2025-10-02 12:24:07.046 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] cloning vms/c1597192-3527-4620-a21f-0e71c9c1c09d_disk@de7f12910f6e4a0f8d9cc3e61ebe2d70 to images/cf745836-410d-40ce-8229-51a950b72ba1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:24:07 compute-1 nova_compute[230518]: 2025-10-02 12:24:07.171 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] flattening images/cf745836-410d-40ce-8229-51a950b72ba1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:24:07 compute-1 nova_compute[230518]: 2025-10-02 12:24:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:24:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:24:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:07.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:07 compute-1 nova_compute[230518]: 2025-10-02 12:24:07.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:08 compute-1 ceph-mon[80926]: osdmap e206: 3 total, 3 up, 3 in
Oct 02 12:24:08 compute-1 ceph-mon[80926]: pgmap v1387: 305 pgs: 305 active+clean; 263 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 852 KiB/s wr, 284 op/s
Oct 02 12:24:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2727269195' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:09.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.952148) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849952197, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2807, "num_deletes": 518, "total_data_size": 5958896, "memory_usage": 6044920, "flush_reason": "Manual Compaction"}
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849968543, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 3917090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30835, "largest_seqno": 33637, "table_properties": {"data_size": 3905675, "index_size": 7013, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26911, "raw_average_key_size": 20, "raw_value_size": 3880750, "raw_average_value_size": 2917, "num_data_blocks": 302, "num_entries": 1330, "num_filter_entries": 1330, "num_deletions": 518, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407647, "oldest_key_time": 1759407647, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 16438 microseconds, and 7546 cpu microseconds.
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.968583) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 3917090 bytes OK
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.968603) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970363) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970375) EVENT_LOG_v1 {"time_micros": 1759407849970371, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.970389) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 5945329, prev total WAL file size 5945329, number of live WAL files 2.
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.971501) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3825KB)], [60(8427KB)]
Oct 02 12:24:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407849971526, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12547249, "oldest_snapshot_seqno": -1}
Oct 02 12:24:09 compute-1 nova_compute[230518]: 2025-10-02 12:24:09.989 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(de7f12910f6e4a0f8d9cc3e61ebe2d70) on rbd image(c1597192-3527-4620-a21f-0e71c9c1c09d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 5687 keys, 10471001 bytes, temperature: kUnknown
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850033023, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 10471001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10430571, "index_size": 25088, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 146270, "raw_average_key_size": 25, "raw_value_size": 10325917, "raw_average_value_size": 1815, "num_data_blocks": 1008, "num_entries": 5687, "num_filter_entries": 5687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759407849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.033434) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10471001 bytes
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.035602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 169.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.2 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6739, records dropped: 1052 output_compression: NoCompression
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.035658) EVENT_LOG_v1 {"time_micros": 1759407850035637, "job": 36, "event": "compaction_finished", "compaction_time_micros": 61613, "compaction_time_cpu_micros": 21429, "output_level": 6, "num_output_files": 1, "total_output_size": 10471001, "num_input_records": 6739, "num_output_records": 5687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850037116, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759407850039019, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:09.971439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:24:10.039078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:24:10 compute-1 ceph-mon[80926]: pgmap v1388: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3803416719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:24:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:11.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:24:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:12 compute-1 nova_compute[230518]: 2025-10-02 12:24:12.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:12 compute-1 ceph-mon[80926]: pgmap v1389: 305 pgs: 305 active+clean; 222 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 256 op/s
Oct 02 12:24:12 compute-1 nova_compute[230518]: 2025-10-02 12:24:12.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e207 e207: 3 total, 3 up, 3 in
Oct 02 12:24:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:13 compute-1 podman[252934]: 2025-10-02 12:24:13.811049214 +0000 UTC m=+0.056263240 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:24:13 compute-1 podman[252935]: 2025-10-02 12:24:13.816999491 +0000 UTC m=+0.059595346 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:24:13 compute-1 nova_compute[230518]: 2025-10-02 12:24:13.960 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] creating snapshot(snap) on rbd image(cf745836-410d-40ce-8229-51a950b72ba1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:24:14 compute-1 ceph-mon[80926]: pgmap v1390: 305 pgs: 305 active+clean; 242 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.1 MiB/s wr, 293 op/s
Oct 02 12:24:14 compute-1 ceph-mon[80926]: osdmap e207: 3 total, 3 up, 3 in
Oct 02 12:24:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e208 e208: 3 total, 3 up, 3 in
Oct 02 12:24:15 compute-1 ceph-mon[80926]: pgmap v1391: 305 pgs: 305 active+clean; 276 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.6 MiB/s wr, 231 op/s
Oct 02 12:24:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:15.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID cf745836-410d-40ce-8229-51a950b72ba1
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.
Oct 02 12:24:16 compute-1 nova_compute[230518]: 2025-10-02 12:24:16.397 2 ERROR nova.virt.libvirt.driver 
Oct 02 12:24:17 compute-1 ceph-mon[80926]: pgmap v1393: 305 pgs: 305 active+clean; 298 MiB data, 663 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.2 MiB/s wr, 180 op/s
Oct 02 12:24:17 compute-1 ceph-mon[80926]: osdmap e208: 3 total, 3 up, 3 in
Oct 02 12:24:17 compute-1 nova_compute[230518]: 2025-10-02 12:24:17.200 2 DEBUG nova.storage.rbd_utils [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] removing snapshot(snap) on rbd image(cf745836-410d-40ce-8229-51a950b72ba1) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:24:17 compute-1 nova_compute[230518]: 2025-10-02 12:24:17.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:17 compute-1 nova_compute[230518]: 2025-10-02 12:24:17.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:18 compute-1 ovn_controller[129257]: 2025-10-02T12:24:18Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:14:6c 10.100.0.3
Oct 02 12:24:18 compute-1 ovn_controller[129257]: 2025-10-02T12:24:18Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:14:6c 10.100.0.3
Oct 02 12:24:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e209 e209: 3 total, 3 up, 3 in
Oct 02 12:24:19 compute-1 ceph-mon[80926]: pgmap v1395: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.8 MiB/s wr, 210 op/s
Oct 02 12:24:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:20 compute-1 ceph-mon[80926]: osdmap e209: 3 total, 3 up, 3 in
Oct 02 12:24:20 compute-1 ceph-mon[80926]: pgmap v1397: 305 pgs: 305 active+clean; 323 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 162 KiB/s rd, 8.0 MiB/s wr, 132 op/s
Oct 02 12:24:21 compute-1 nova_compute[230518]: 2025-10-02 12:24:21.261 2 WARNING nova.compute.manager [None req-7ca7ecb4-a7ab-4a81-abdd-5c001008cace afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Image not found during snapshot: nova.exception.ImageNotFound: Image cf745836-410d-40ce-8229-51a950b72ba1 could not be found.
Oct 02 12:24:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:21.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:21.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 ceph-mon[80926]: pgmap v1398: 305 pgs: 305 active+clean; 323 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 569 KiB/s rd, 6.6 MiB/s wr, 169 op/s
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.512 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.513 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.513 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.514 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.514 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.515 2 INFO nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Terminating instance
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.516 2 DEBUG nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:24:22 compute-1 kernel: tapa27d28bd-0a (unregistering): left promiscuous mode
Oct 02 12:24:22 compute-1 NetworkManager[44960]: <info>  [1759407862.6556] device (tapa27d28bd-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:24:22 compute-1 ovn_controller[129257]: 2025-10-02T12:24:22Z|00251|binding|INFO|Releasing lport a27d28bd-0aac-45b1-9b85-fa648038cccc from this chassis (sb_readonly=0)
Oct 02 12:24:22 compute-1 ovn_controller[129257]: 2025-10-02T12:24:22Z|00252|binding|INFO|Setting lport a27d28bd-0aac-45b1-9b85-fa648038cccc down in Southbound
Oct 02 12:24:22 compute-1 ovn_controller[129257]: 2025-10-02T12:24:22Z|00253|binding|INFO|Removing iface tapa27d28bd-0a ovn-installed in OVS
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.671 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:14:6c 10.100.0.3'], port_security=['fa:16:3e:bb:14:6c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c1597192-3527-4620-a21f-0e71c9c1c09d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0ebb2827cb241e499606ce3a3c67d24', 'neutron:revision_number': '4', 'neutron:security_group_ids': '82a35752-e404-444a-8896-2599ead4c932', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ee76fd-a5ee-4609-94ea-48618b0cf0da, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a27d28bd-0aac-45b1-9b85-fa648038cccc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.674 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a27d28bd-0aac-45b1-9b85-fa648038cccc in datapath d68ff9e0-aff2-4eda-8590-74da7cfc5671 unbound from our chassis
Oct 02 12:24:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e210 e210: 3 total, 3 up, 3 in
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.678 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d68ff9e0-aff2-4eda-8590-74da7cfc5671, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.680 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c5457b-9b03-40ae-a478-358021bb7394]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.684 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 namespace which is not needed anymore
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct 02 12:24:22 compute-1 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000036.scope: Consumed 14.045s CPU time.
Oct 02 12:24:22 compute-1 systemd-machined[188247]: Machine qemu-28-instance-00000036 terminated.
Oct 02 12:24:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : haproxy version is 2.8.14-c23fe91
Oct 02 12:24:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [NOTICE]   (252695) : path to executable is /usr/sbin/haproxy
Oct 02 12:24:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [WARNING]  (252695) : Exiting Master process...
Oct 02 12:24:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [ALERT]    (252695) : Current worker (252697) exited with code 143 (Terminated)
Oct 02 12:24:22 compute-1 neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671[252688]: [WARNING]  (252695) : All workers exited. Exiting... (0)
Oct 02 12:24:22 compute-1 systemd[1]: libpod-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope: Deactivated successfully.
Oct 02 12:24:22 compute-1 podman[253051]: 2025-10-02 12:24:22.821172067 +0000 UTC m=+0.051967467 container died 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:24:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-db8253beedaf47a6624a8aab6d6b150b50161b15aea48a77330b21dba701fe6f-merged.mount: Deactivated successfully.
Oct 02 12:24:22 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102-userdata-shm.mount: Deactivated successfully.
Oct 02 12:24:22 compute-1 podman[253051]: 2025-10-02 12:24:22.85590837 +0000 UTC m=+0.086703760 container cleanup 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:24:22 compute-1 systemd[1]: libpod-conmon-3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102.scope: Deactivated successfully.
Oct 02 12:24:22 compute-1 podman[253081]: 2025-10-02 12:24:22.929389362 +0000 UTC m=+0.048217529 container remove 3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.938 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be2436b1-6bcd-4d43-a9b7-564dcbe373e7]: (4, ('Thu Oct  2 12:24:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102)\n3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102\nThu Oct  2 12:24:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 (3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102)\n3ffb3e0ae00feb0cd5a7f378d4689b7c8c1ca9c6973d326a4af4a82cd1aa9102\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.939 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[812224ca-711a-4c30-b1a9-34597da8ceb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd68ff9e0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.956 2 INFO nova.virt.libvirt.driver [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Instance destroyed successfully.
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.957 2 DEBUG nova.objects.instance [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lazy-loading 'resources' on Instance uuid c1597192-3527-4620-a21f-0e71c9c1c09d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:22 compute-1 kernel: tapd68ff9e0-a0: left promiscuous mode
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4398c09-d69a-4bec-84d5-8c7d493356de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.975 2 DEBUG nova.virt.libvirt.vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-984879556',display_name='tempest-ImagesTestJSON-server-984879556',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagestestjson-server-984879556',id=54,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:23:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0ebb2827cb241e499606ce3a3c67d24',ramdisk_id='',reservation_id='r-0xmwc9tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1681256609',owner_user_name='tempest-ImagesTestJSON-1681256609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:21Z,user_data=None,user_id='afacfeac9efc4e6fbb83ebe4fe9a8f38',uuid=c1597192-3527-4620-a21f-0e71c9c1c09d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.976 2 DEBUG nova.network.os_vif_util [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converting VIF {"id": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "address": "fa:16:3e:bb:14:6c", "network": {"id": "d68ff9e0-aff2-4eda-8590-74da7cfc5671", "bridge": "br-int", "label": "tempest-ImagesTestJSON-418762254-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0ebb2827cb241e499606ce3a3c67d24", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa27d28bd-0a", "ovs_interfaceid": "a27d28bd-0aac-45b1-9b85-fa648038cccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.980 2 DEBUG nova.network.os_vif_util [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.980 2 DEBUG os_vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa27d28bd-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:22 compute-1 nova_compute[230518]: 2025-10-02 12:24:22.988 2 INFO os_vif [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:14:6c,bridge_name='br-int',has_traffic_filtering=True,id=a27d28bd-0aac-45b1-9b85-fa648038cccc,network=Network(d68ff9e0-aff2-4eda-8590-74da7cfc5671),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa27d28bd-0a')
Oct 02 12:24:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:22.998 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27aafe2c-cc2a-46a3-b677-303522690856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.000 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbf7c9e-384e-487e-aeb5-9207ef952d83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.015 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0757a60-c605-41a0-8285-839a331f741e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570069, 'reachable_time': 37526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253126, 'error': None, 'target': 'ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.017 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d68ff9e0-aff2-4eda-8590-74da7cfc5671 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:24:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:23.017 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c5735c3-4ec4-4491-95a6-d8d9f15ff335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:23 compute-1 systemd[1]: run-netns-ovnmeta\x2dd68ff9e0\x2daff2\x2d4eda\x2d8590\x2d74da7cfc5671.mount: Deactivated successfully.
Oct 02 12:24:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.225 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.226 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.227 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.227 2 DEBUG oslo_concurrency.lockutils [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.228 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:23 compute-1 nova_compute[230518]: 2025-10-02 12:24:23.228 2 DEBUG nova.compute.manager [req-e797cef3-f784-4e1c-ad09-b817ba23ae59 req-0a379a7b-a422-4933-8334-4e042718e13e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-unplugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:24:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:23.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:24 compute-1 ceph-mon[80926]: osdmap e210: 3 total, 3 up, 3 in
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.327 2 DEBUG nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.328 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.328 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.329 2 DEBUG oslo_concurrency.lockutils [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.329 2 DEBUG nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] No waiting events found dispatching network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:25 compute-1 nova_compute[230518]: 2025-10-02 12:24:25.329 2 WARNING nova.compute.manager [req-f370c914-f12a-4fad-a27f-598b483b2041 req-90a86a4d-1f08-4441-b64d-227eaf6d02a9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received unexpected event network-vif-plugged-a27d28bd-0aac-45b1-9b85-fa648038cccc for instance with vm_state active and task_state deleting.
Oct 02 12:24:25 compute-1 ceph-mon[80926]: pgmap v1400: 305 pgs: 305 active+clean; 302 MiB data, 678 MiB used, 20 GiB / 21 GiB avail; 826 KiB/s rd, 3.6 MiB/s wr, 158 op/s
Oct 02 12:24:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.922 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:26 compute-1 sudo[253131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:24:26 compute-1 sudo[253131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:26 compute-1 sudo[253131]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:26 compute-1 sudo[253156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:24:26 compute-1 sudo[253156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:24:26 compute-1 sudo[253156]: pam_unix(sudo:session): session closed for user root
Oct 02 12:24:26 compute-1 ceph-mon[80926]: pgmap v1401: 305 pgs: 305 active+clean; 270 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 926 KiB/s rd, 508 KiB/s wr, 165 op/s
Oct 02 12:24:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.351 2 INFO nova.virt.libvirt.driver [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deleting instance files /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d_del
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.351 2 INFO nova.virt.libvirt.driver [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deletion of /var/lib/nova/instances/c1597192-3527-4620-a21f-0e71c9c1c09d_del complete
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.401 2 INFO nova.compute.manager [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 4.88 seconds to destroy the instance on the hypervisor.
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG oslo.service.loopingcall [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.402 2 DEBUG nova.network.neutron [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:24:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:27.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 e211: 3 total, 3 up, 3 in
Oct 02 12:24:27 compute-1 nova_compute[230518]: 2025-10-02 12:24:27.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.130 2 DEBUG nova.network.neutron [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:28.146 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:28.147 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:24:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.186 2 INFO nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Took 0.78 seconds to deallocate network for instance.
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.231 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.231 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.280 2 DEBUG nova.compute.manager [req-2a4b04c7-85b4-4294-904e-5f9875767de1 req-aec6e49c-efb7-4abb-96c4-174c93c2e510 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Received event network-vif-deleted-a27d28bd-0aac-45b1-9b85-fa648038cccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.293 2 DEBUG oslo_concurrency.processutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2032694493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.746 2 DEBUG oslo_concurrency.processutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.755 2 DEBUG nova.compute.provider_tree [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.784 2 DEBUG nova.scheduler.client.report [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.815 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:28 compute-1 nova_compute[230518]: 2025-10-02 12:24:28.905 2 INFO nova.scheduler.client.report [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Deleted allocations for instance c1597192-3527-4620-a21f-0e71c9c1c09d
Oct 02 12:24:29 compute-1 nova_compute[230518]: 2025-10-02 12:24:29.009 2 DEBUG oslo_concurrency.lockutils [None req-9558ae1f-01c9-4c5e-a7bb-4bd9bdf225d7 afacfeac9efc4e6fbb83ebe4fe9a8f38 d0ebb2827cb241e499606ce3a3c67d24 - - default default] Lock "c1597192-3527-4620-a21f-0e71c9c1c09d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:29 compute-1 ceph-mon[80926]: pgmap v1402: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 928 KiB/s rd, 523 KiB/s wr, 184 op/s
Oct 02 12:24:29 compute-1 ceph-mon[80926]: osdmap e211: 3 total, 3 up, 3 in
Oct 02 12:24:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2032694493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:31 compute-1 ceph-mon[80926]: pgmap v1404: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 324 KiB/s wr, 131 op/s
Oct 02 12:24:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:32 compute-1 nova_compute[230518]: 2025-10-02 12:24:32.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:32 compute-1 nova_compute[230518]: 2025-10-02 12:24:32.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:33 compute-1 ceph-mon[80926]: pgmap v1405: 305 pgs: 305 active+clean; 221 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 198 KiB/s wr, 79 op/s
Oct 02 12:24:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3742383102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:33.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:33.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:34 compute-1 ceph-mon[80926]: pgmap v1406: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 179 KiB/s wr, 68 op/s
Oct 02 12:24:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:35.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:35.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:35 compute-1 podman[253203]: 2025-10-02 12:24:35.809047603 +0000 UTC m=+0.064316004 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:24:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:36.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:37 compute-1 ceph-mon[80926]: pgmap v1407: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 48 KiB/s wr, 28 op/s
Oct 02 12:24:37 compute-1 nova_compute[230518]: 2025-10-02 12:24:37.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:37.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:37.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:37 compute-1 podman[253222]: 2025-10-02 12:24:37.841583989 +0000 UTC m=+0.098461629 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 12:24:37 compute-1 nova_compute[230518]: 2025-10-02 12:24:37.954 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407862.9531674, c1597192-3527-4620-a21f-0e71c9c1c09d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:37 compute-1 nova_compute[230518]: 2025-10-02 12:24:37.955 2 INFO nova.compute.manager [-] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] VM Stopped (Lifecycle Event)
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.000 2 DEBUG nova.compute.manager [None req-90c98255-3046-41e1-b843-5d4983b69ad8 - - - - - -] [instance: c1597192-3527-4620-a21f-0e71c9c1c09d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:38 compute-1 ceph-mon[80926]: pgmap v1408: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 10 op/s
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.620 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.620 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.666 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.773 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.774 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.785 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:24:38 compute-1 nova_compute[230518]: 2025-10-02 12:24:38.785 2 INFO nova.compute.claims [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.065 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2860898462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.536 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.543 2 DEBUG nova.compute.provider_tree [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.569 2 DEBUG nova.scheduler.client.report [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.613 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.614 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:24:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2860898462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.679 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.680 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.710 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:24:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:39.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.767 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:24:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:39.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.908 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.909 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.910 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating image(s)
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.937 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:39 compute-1 nova_compute[230518]: 2025-10-02 12:24:39.972 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.091 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.097 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.140 2 DEBUG nova.policy [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51b45ef40bdc499a8409fd2bf3e6a339', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.187 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.188 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.189 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.190 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.224 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:40 compute-1 nova_compute[230518]: 2025-10-02 12:24:40.229 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12ae9024-48e3-4894-ac32-41af4e31c223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.106 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.108 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.109 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:41 compute-1 ceph-mon[80926]: pgmap v1409: 305 pgs: 305 active+clean; 221 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 9 op/s
Oct 02 12:24:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1773491671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.699 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 12ae9024-48e3-4894-ac32-41af4e31c223_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:41.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.791 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:41 compute-1 nova_compute[230518]: 2025-10-02 12:24:41.871 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] resizing rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.008 2 DEBUG nova.objects.instance [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'migration_context' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.054 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.054 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Ensure instance console log exists: /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.055 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.150 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4621MB free_disk=20.897125244140625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.151 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.152 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 12ae9024-48e3-4894-ac32-41af4e31c223 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.249 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.250 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.294 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.449 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Successfully created port: ac685902-7a16-4ff8-ac8b-85430ba9f8cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:24:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:24:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3235092226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.743 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:42 compute-1 nova_compute[230518]: 2025-10-02 12:24:42.750 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:24:42 compute-1 ceph-mon[80926]: pgmap v1410: 305 pgs: 305 active+clean; 186 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 5.8 KiB/s rd, 16 KiB/s wr, 10 op/s
Oct 02 12:24:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1773491671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3235092226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:43 compute-1 nova_compute[230518]: 2025-10-02 12:24:43.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:43 compute-1 nova_compute[230518]: 2025-10-02 12:24:43.712 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:24:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:43.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:43.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:43 compute-1 nova_compute[230518]: 2025-10-02 12:24:43.780 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:24:43 compute-1 nova_compute[230518]: 2025-10-02 12:24:43.781 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:44 compute-1 ceph-mon[80926]: pgmap v1411: 305 pgs: 305 active+clean; 164 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 MiB/s wr, 23 op/s
Oct 02 12:24:44 compute-1 nova_compute[230518]: 2025-10-02 12:24:44.777 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:44 compute-1 nova_compute[230518]: 2025-10-02 12:24:44.778 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:44 compute-1 nova_compute[230518]: 2025-10-02 12:24:44.778 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:44 compute-1 podman[253484]: 2025-10-02 12:24:44.834641972 +0000 UTC m=+0.076763826 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 02 12:24:44 compute-1 podman[253485]: 2025-10-02 12:24:44.84057802 +0000 UTC m=+0.080116283 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.152 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Successfully updated port: ac685902-7a16-4ff8-ac8b-85430ba9f8cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.184 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.566 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:24:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:45.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG nova.compute.manager [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG nova.compute.manager [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing instance network info cache due to event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:24:45 compute-1 nova_compute[230518]: 2025-10-02 12:24:45.785 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:24:46 compute-1 ceph-mon[80926]: pgmap v1412: 305 pgs: 305 active+clean; 183 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.7 MiB/s wr, 55 op/s
Oct 02 12:24:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/318574402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.214 2 DEBUG nova.network.neutron [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.250 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance network_info: |[{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.251 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.255 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start _get_guest_xml network_info=[{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.264 2 WARNING nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.280 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.281 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.287 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.288 2 DEBUG nova.virt.libvirt.host [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.290 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.291 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.292 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.293 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.294 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.294 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.295 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.296 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.296 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.297 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.297 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.298 2 DEBUG nova.virt.hardware [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.304 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/405708132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4126952542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:24:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1523422355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:47.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.765 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.792 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:47 compute-1 nova_compute[230518]: 2025-10-02 12:24:47.797 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:24:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3165033264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.244 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.247 2 DEBUG nova.virt.libvirt.vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.247 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.248 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.250 2 DEBUG nova.objects.instance [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.463 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <uuid>12ae9024-48e3-4894-ac32-41af4e31c223</uuid>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <name>instance-00000037</name>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-857840689</nova:name>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:24:47</nova:creationTime>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:user uuid="51b45ef40bdc499a8409fd2bf3e6a339">tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member</nova:user>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:project uuid="12dfeaa31a6e4a2481a5332ce3094262">tempest-UpdateMultiattachVolumeNegativeTest-158673309</nova:project>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <nova:port uuid="ac685902-7a16-4ff8-ac8b-85430ba9f8cd">
Oct 02 12:24:48 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <system>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="serial">12ae9024-48e3-4894-ac32-41af4e31c223</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="uuid">12ae9024-48e3-4894-ac32-41af4e31c223</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </system>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <os>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </os>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <features>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </features>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/12ae9024-48e3-4894-ac32-41af4e31c223_disk">
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </source>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/12ae9024-48e3-4894-ac32-41af4e31c223_disk.config">
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </source>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:24:48 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:2e:09:31"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <target dev="tapac685902-7a"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/console.log" append="off"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <video>
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </video>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:24:48 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:24:48 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:24:48 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:24:48 compute-1 nova_compute[230518]: </domain>
Oct 02 12:24:48 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.465 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Preparing to wait for external event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.466 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.467 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.467 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.468 2 DEBUG nova.virt.libvirt.vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:24:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.469 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.470 2 DEBUG nova.network.os_vif_util [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.470 2 DEBUG os_vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.483 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac685902-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.484 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac685902-7a, col_values=(('external_ids', {'iface-id': 'ac685902-7a16-4ff8-ac8b-85430ba9f8cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:09:31', 'vm-uuid': '12ae9024-48e3-4894-ac32-41af4e31c223'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:48 compute-1 NetworkManager[44960]: <info>  [1759407888.4872] manager: (tapac685902-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.495 2 INFO os_vif [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a')
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.563 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.564 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.564 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:2e:09:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.564 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Using config drive
Oct 02 12:24:48 compute-1 nova_compute[230518]: 2025-10-02 12:24:48.594 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:48 compute-1 ceph-mon[80926]: pgmap v1413: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct 02 12:24:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/64841178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1523422355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3165033264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:24:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:49.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.813 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Creating config drive at /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.822 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hx5xwp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/782473490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.956 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hx5xwp" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.987 2 DEBUG nova.storage.rbd_utils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] rbd image 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:24:49 compute-1 nova_compute[230518]: 2025-10-02 12:24:49.991 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:24:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e212 e212: 3 total, 3 up, 3 in
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.158 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated VIF entry in instance network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.159 2 DEBUG nova.network.neutron [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.427 2 DEBUG oslo_concurrency.lockutils [req-47882e26-004f-4298-88fc-be008f606786 req-841da5f9-9625-48c0-b8d3-5599a4fa0215 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.894 2 DEBUG oslo_concurrency.processutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config 12ae9024-48e3-4894-ac32-41af4e31c223_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.895 2 INFO nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deleting local config drive /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223/disk.config because it was imported into RBD.
Oct 02 12:24:50 compute-1 kernel: tapac685902-7a: entered promiscuous mode
Oct 02 12:24:50 compute-1 ovn_controller[129257]: 2025-10-02T12:24:50Z|00254|binding|INFO|Claiming lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd for this chassis.
Oct 02 12:24:50 compute-1 ovn_controller[129257]: 2025-10-02T12:24:50Z|00255|binding|INFO|ac685902-7a16-4ff8-ac8b-85430ba9f8cd: Claiming fa:16:3e:2e:09:31 10.100.0.10
Oct 02 12:24:50 compute-1 NetworkManager[44960]: <info>  [1759407890.9594] manager: (tapac685902-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-1 nova_compute[230518]: 2025-10-02 12:24:50.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:50 compute-1 NetworkManager[44960]: <info>  [1759407890.9814] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Oct 02 12:24:50 compute-1 NetworkManager[44960]: <info>  [1759407890.9828] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct 02 12:24:50 compute-1 systemd-udevd[253656]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:24:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.994 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:09:31 10.100.0.10'], port_security=['fa:16:3e:2e:09:31 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '12ae9024-48e3-4894-ac32-41af4e31c223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ac685902-7a16-4ff8-ac8b-85430ba9f8cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:24:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.995 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ac685902-7a16-4ff8-ac8b-85430ba9f8cd in datapath 34ecce08-278a-4a16-9f99-cfef8148769d bound to our chassis
Oct 02 12:24:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:50.997 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34ecce08-278a-4a16-9f99-cfef8148769d
Oct 02 12:24:51 compute-1 NetworkManager[44960]: <info>  [1759407891.0027] device (tapac685902-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:24:51 compute-1 NetworkManager[44960]: <info>  [1759407891.0041] device (tapac685902-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.009 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0aaf38ab-f828-4e95-ab27-eee6ac9a3891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.010 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34ecce08-21 in ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.012 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34ecce08-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.012 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee86d684-34ba-4fd5-a3e9-af04492136c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.013 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45a777b8-93f1-48cb-aa0f-17c4a31fd1a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 systemd-machined[188247]: New machine qemu-29-instance-00000037.
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.026 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b022c88c-73ec-4152-a164-95c4b415ee2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 systemd[1]: Started Virtual Machine qemu-29-instance-00000037.
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.055 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ca7b34-a583-43c0-b708-b9e34d8fcc43]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.090 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[445611b7-7870-4e06-91e4-eda3300f0a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 NetworkManager[44960]: <info>  [1759407891.1018] manager: (tap34ecce08-20): new Veth device (/org/freedesktop/NetworkManager/Devices/117)
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.100 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01503bf2-b08c-478b-9e07-98363f9496ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ceph-mon[80926]: pgmap v1414: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:24:51 compute-1 ceph-mon[80926]: osdmap e212: 3 total, 3 up, 3 in
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.142 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6e53b694-7038-41ab-9adf-72771922f068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.144 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a4f607-f0b5-424f-9b11-811de65401dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 NetworkManager[44960]: <info>  [1759407891.1676] device (tap34ecce08-20): carrier: link connected
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.173 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[110042b2-a806-475d-91c4-a34776757bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.191 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7854ca5-d506-4f3c-8d84-7fa112c44248]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575472, 'reachable_time': 21849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253692, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.209 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[042e4d88-881d-4692-abaa-d9e8bf20d412]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:c771'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 575472, 'tstamp': 575472}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253693, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f31d32ae-ac2b-45a7-ac78-135a1c5e0509]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34ecce08-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c7:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575472, 'reachable_time': 21849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253694, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.261 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ccb309-57e0-42cd-bbb5-940d16ddd88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 ovn_controller[129257]: 2025-10-02T12:24:51Z|00256|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd ovn-installed in OVS
Oct 02 12:24:51 compute-1 ovn_controller[129257]: 2025-10-02T12:24:51Z|00257|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd up in Southbound
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4f246cce-1ce6-4822-bbc3-164a6ff10dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.340 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34ecce08-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:51 compute-1 kernel: tap34ecce08-20: entered promiscuous mode
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 NetworkManager[44960]: <info>  [1759407891.3450] manager: (tap34ecce08-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34ecce08-20, col_values=(('external_ids', {'iface-id': '8d2c214b-08f8-42fc-8049-39454e430512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 ovn_controller[129257]: 2025-10-02T12:24:51Z|00258|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=1)
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.349 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0da5d130-ebe1-4a63-8fc2-a54bb224e426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.351 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-34ecce08-278a-4a16-9f99-cfef8148769d
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/34ecce08-278a-4a16-9f99-cfef8148769d.pid.haproxy
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 34ecce08-278a-4a16-9f99-cfef8148769d
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:24:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:24:51.351 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'env', 'PROCESS_TAG=haproxy-34ecce08-278a-4a16-9f99-cfef8148769d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34ecce08-278a-4a16-9f99-cfef8148769d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:24:51 compute-1 nova_compute[230518]: 2025-10-02 12:24:51.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:51 compute-1 podman[253726]: 2025-10-02 12:24:51.748708661 +0000 UTC m=+0.062484027 container create 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:24:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:51.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:51 compute-1 systemd[1]: Started libpod-conmon-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope.
Oct 02 12:24:51 compute-1 podman[253726]: 2025-10-02 12:24:51.707766212 +0000 UTC m=+0.021541598 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:24:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:24:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1a1fb007e30a62109ba083d1b17ba4b02cd8951cd7c632e42029e9dfea28755/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:24:51 compute-1 podman[253726]: 2025-10-02 12:24:51.85070967 +0000 UTC m=+0.164485076 container init 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:24:51 compute-1 podman[253726]: 2025-10-02 12:24:51.857091251 +0000 UTC m=+0.170866657 container start 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:24:51 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : New worker (253747) forked
Oct 02 12:24:51 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : Loading success.
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:52 compute-1 ceph-mon[80926]: pgmap v1416: 305 pgs: 305 active+clean; 187 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.639 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.6389408, 12ae9024-48e3-4894-ac32-41af4e31c223 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.640 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Started (Lifecycle Event)
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.672 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.677 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.6391075, 12ae9024-48e3-4894-ac32-41af4e31c223 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.677 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Paused (Lifecycle Event)
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.755 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.760 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.764 2 DEBUG nova.compute.manager [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.765 2 DEBUG oslo_concurrency.lockutils [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.766 2 DEBUG nova.compute.manager [req-712cdde5-5217-402e-8b09-d846629f8ae9 req-d64643f4-1925-4844-a2bc-4779148167cf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Processing event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.767 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.771 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.774 2 INFO nova.virt.libvirt.driver [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance spawned successfully.
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.775 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.808 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.809 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759407892.7700608, 12ae9024-48e3-4894-ac32-41af4e31c223 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.809 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Resumed (Lifecycle Event)
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.842 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.843 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.843 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.844 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.844 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.845 2 DEBUG nova.virt.libvirt.driver [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.880 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.884 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.956 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.978 2 INFO nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 13.07 seconds to spawn the instance on the hypervisor.
Oct 02 12:24:52 compute-1 nova_compute[230518]: 2025-10-02 12:24:52.978 2 DEBUG nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:24:53 compute-1 nova_compute[230518]: 2025-10-02 12:24:53.093 2 INFO nova.compute.manager [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 14.35 seconds to build instance.
Oct 02 12:24:53 compute-1 nova_compute[230518]: 2025-10-02 12:24:53.118 2 DEBUG oslo_concurrency.lockutils [None req-b9d315d9-e85b-490c-943d-b77b46067a64 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:53 compute-1 nova_compute[230518]: 2025-10-02 12:24:53.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:24:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:24:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:53.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:55 compute-1 ceph-mon[80926]: pgmap v1417: 305 pgs: 305 active+clean; 171 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 868 KiB/s wr, 60 op/s
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.052 2 DEBUG nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.053 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.053 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.054 2 DEBUG oslo_concurrency.lockutils [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.054 2 DEBUG nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:24:55 compute-1 nova_compute[230518]: 2025-10-02 12:24:55.055 2 WARNING nova.compute.manager [req-381d48c8-b98c-4900-bda7-414137664971 req-734342e9-6092-4e12-adad-20acc7a478fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received unexpected event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with vm_state active and task_state None.
Oct 02 12:24:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:56 compute-1 ceph-mon[80926]: pgmap v1418: 305 pgs: 305 active+clean; 167 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 886 KiB/s rd, 70 KiB/s wr, 65 op/s
Oct 02 12:24:57 compute-1 nova_compute[230518]: 2025-10-02 12:24:57.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 e213: 3 total, 3 up, 3 in
Oct 02 12:24:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:24:58 compute-1 nova_compute[230518]: 2025-10-02 12:24:58.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:24:58 compute-1 ceph-mon[80926]: pgmap v1419: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 128 op/s
Oct 02 12:24:58 compute-1 ceph-mon[80926]: osdmap e213: 3 total, 3 up, 3 in
Oct 02 12:24:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:24:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3937864801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:24:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:24:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:24:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:24:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:24:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:24:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:00 compute-1 ceph-mon[80926]: pgmap v1421: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 24 KiB/s wr, 135 op/s
Oct 02 12:25:01 compute-1 nova_compute[230518]: 2025-10-02 12:25:01.208 2 DEBUG nova.compute.manager [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:01 compute-1 nova_compute[230518]: 2025-10-02 12:25:01.208 2 DEBUG nova.compute.manager [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing instance network info cache due to event network-changed-ac685902-7a16-4ff8-ac8b-85430ba9f8cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:25:01 compute-1 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:01 compute-1 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:01 compute-1 nova_compute[230518]: 2025-10-02 12:25:01.209 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Refreshing network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:25:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:01.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:02 compute-1 nova_compute[230518]: 2025-10-02 12:25:02.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:02 compute-1 ceph-mon[80926]: pgmap v1422: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 121 op/s
Oct 02 12:25:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:03 compute-1 nova_compute[230518]: 2025-10-02 12:25:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:04 compute-1 nova_compute[230518]: 2025-10-02 12:25:04.664 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated VIF entry in instance network info cache for port ac685902-7a16-4ff8-ac8b-85430ba9f8cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:25:04 compute-1 nova_compute[230518]: 2025-10-02 12:25:04.665 2 DEBUG nova.network.neutron [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [{"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:04 compute-1 nova_compute[230518]: 2025-10-02 12:25:04.695 2 DEBUG oslo_concurrency.lockutils [req-76701482-409c-42c5-aa6b-c3a71dda2b3a req-6bb58b89-a0f3-446f-a6da-d12f58752e96 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:05 compute-1 ceph-mon[80926]: pgmap v1423: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 20 KiB/s wr, 121 op/s
Oct 02 12:25:05 compute-1 ovn_controller[129257]: 2025-10-02T12:25:05Z|00259|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct 02 12:25:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:05.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:05 compute-1 nova_compute[230518]: 2025-10-02 12:25:05.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:06 compute-1 ceph-mon[80926]: pgmap v1424: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.3 KiB/s wr, 79 op/s
Oct 02 12:25:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:25:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3312100151' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:25:06 compute-1 podman[253799]: 2025-10-02 12:25:06.863801736 +0000 UTC m=+0.105829881 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 12:25:07 compute-1 nova_compute[230518]: 2025-10-02 12:25:07.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:07 compute-1 ovn_controller[129257]: 2025-10-02T12:25:07Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:09:31 10.100.0.10
Oct 02 12:25:07 compute-1 ovn_controller[129257]: 2025-10-02T12:25:07Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:09:31 10.100.0.10
Oct 02 12:25:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:07.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:08 compute-1 nova_compute[230518]: 2025-10-02 12:25:08.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:08 compute-1 ceph-mon[80926]: pgmap v1425: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 34 op/s
Oct 02 12:25:08 compute-1 podman[253816]: 2025-10-02 12:25:08.848210337 +0000 UTC m=+0.101471704 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:25:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:09.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:09.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:10 compute-1 ceph-mon[80926]: pgmap v1426: 305 pgs: 305 active+clean; 184 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct 02 12:25:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:11.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:11.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:12 compute-1 nova_compute[230518]: 2025-10-02 12:25:12.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:12 compute-1 ceph-mon[80926]: pgmap v1427: 305 pgs: 305 active+clean; 190 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:25:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:13 compute-1 nova_compute[230518]: 2025-10-02 12:25:13.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:25:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:13.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:25:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:13.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:15 compute-1 ceph-mon[80926]: pgmap v1428: 305 pgs: 305 active+clean; 200 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 82 op/s
Oct 02 12:25:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:15.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:15 compute-1 podman[253844]: 2025-10-02 12:25:15.811904828 +0000 UTC m=+0.055087885 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:25:15 compute-1 podman[253843]: 2025-10-02 12:25:15.812400733 +0000 UTC m=+0.061187956 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:25:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:25:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:15.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:25:16 compute-1 ceph-mon[80926]: pgmap v1429: 305 pgs: 305 active+clean; 201 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct 02 12:25:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3819116064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:17 compute-1 nova_compute[230518]: 2025-10-02 12:25:17.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:17 compute-1 ovn_controller[129257]: 2025-10-02T12:25:17Z|00260|binding|INFO|Releasing lport 8d2c214b-08f8-42fc-8049-39454e430512 from this chassis (sb_readonly=0)
Oct 02 12:25:17 compute-1 nova_compute[230518]: 2025-10-02 12:25:17.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:17.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:17.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:18 compute-1 nova_compute[230518]: 2025-10-02 12:25:18.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:18 compute-1 ceph-mon[80926]: pgmap v1430: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:25:18 compute-1 nova_compute[230518]: 2025-10-02 12:25:18.997 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:18 compute-1 nova_compute[230518]: 2025-10-02 12:25:18.998 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:19 compute-1 nova_compute[230518]: 2025-10-02 12:25:19.312 2 DEBUG nova.objects.instance [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:19 compute-1 nova_compute[230518]: 2025-10-02 12:25:19.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:19.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:19 compute-1 nova_compute[230518]: 2025-10-02 12:25:19.831 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:19.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:20 compute-1 nova_compute[230518]: 2025-10-02 12:25:20.871 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:20 compute-1 nova_compute[230518]: 2025-10-02 12:25:20.872 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:20 compute-1 nova_compute[230518]: 2025-10-02 12:25:20.872 2 INFO nova.compute.manager [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0 to /dev/vdb
Oct 02 12:25:21 compute-1 ceph-mon[80926]: pgmap v1431: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.304 2 DEBUG os_brick.utils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.306 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.324 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.325 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3e100502-0bf3-41d3-9e34-a8df172827d5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.326 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.340 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.340 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[39d68e2b-f7a8-46c8-a5b2-293a97f69079]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.343 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.356 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.357 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[049edf27-8f24-4a51-8d6c-f13c1c005210]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.359 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[95288274-67dd-4c6d-b7d1-d849a0040acc]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.360 2 DEBUG oslo_concurrency.processutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.405 2 DEBUG oslo_concurrency.processutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "nvme version" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.408 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.409 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.410 2 DEBUG os_brick.initiator.connectors.lightos [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.410 2 DEBUG os_brick.utils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:25:21 compute-1 nova_compute[230518]: 2025-10-02 12:25:21.411 2 DEBUG nova.virt.block_device [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating existing volume attachment record: 220f75d1-7597-411c-a311-ebd7f4b718f8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:25:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:22 compute-1 nova_compute[230518]: 2025-10-02 12:25:22.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:22 compute-1 ceph-mon[80926]: pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 2.3 MiB/s wr, 77 op/s
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.004 2 DEBUG nova.objects.instance [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.086 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to attach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.089 2 DEBUG nova.virt.libvirt.guest [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct 02 12:25:23 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   </source>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:25:23 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct 02 12:25:23 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 12:25:23 compute-1 nova_compute[230518]: </disk>
Oct 02 12:25:23 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:25:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.462 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.463 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.463 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.464 2 DEBUG nova.virt.libvirt.driver [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] No VIF found with MAC fa:16:3e:2e:09:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:25:23 compute-1 nova_compute[230518]: 2025-10-02 12:25:23.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2486575946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:25:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:23.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:24 compute-1 nova_compute[230518]: 2025-10-02 12:25:24.590 2 DEBUG oslo_concurrency.lockutils [None req-705d17d8-6872-4ba4-9754-f27c0133ed98 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:25 compute-1 ceph-mon[80926]: pgmap v1433: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 147 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 12:25:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:26 compute-1 sudo[253910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:26 compute-1 sudo[253910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-1 sudo[253910]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-1 sudo[253935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:25:26 compute-1 sudo[253935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-1 sudo[253935]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-1 ceph-mon[80926]: pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Oct 02 12:25:26 compute-1 sudo[253960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:26 compute-1 sudo[253960]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:26 compute-1 sudo[253960]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:26 compute-1 sudo[253985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:25:26 compute-1 sudo[253985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:27 compute-1 nova_compute[230518]: 2025-10-02 12:25:27.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:27 compute-1 sudo[253985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:27.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:27.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:25:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:25:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:28 compute-1 nova_compute[230518]: 2025-10-02 12:25:28.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:28 compute-1 ceph-mon[80926]: pgmap v1435: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Oct 02 12:25:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:30 compute-1 ceph-mon[80926]: pgmap v1436: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:31.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:32 compute-1 nova_compute[230518]: 2025-10-02 12:25:32.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:32 compute-1 ceph-mon[80926]: pgmap v1437: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct 02 12:25:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:33 compute-1 nova_compute[230518]: 2025-10-02 12:25:33.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:33.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:34 compute-1 nova_compute[230518]: 2025-10-02 12:25:34.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:34 compute-1 ceph-mon[80926]: pgmap v1438: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 12:25:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:36 compute-1 ceph-mon[80926]: pgmap v1439: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 9.7 KiB/s wr, 3 op/s
Oct 02 12:25:37 compute-1 nova_compute[230518]: 2025-10-02 12:25:37.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:37 compute-1 nova_compute[230518]: 2025-10-02 12:25:37.755 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:37 compute-1 nova_compute[230518]: 2025-10-02 12:25:37.755 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:37 compute-1 nova_compute[230518]: 2025-10-02 12:25:37.784 2 INFO nova.compute.manager [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Detaching volume 5957fb80-298a-4379-a0ba-fde86e2113d0
Oct 02 12:25:37 compute-1 podman[254041]: 2025-10-02 12:25:37.815426545 +0000 UTC m=+0.064717456 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:25:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:37.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:37.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.037 2 INFO nova.virt.block_device [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Attempting to driver detach volume 5957fb80-298a-4379-a0ba-fde86e2113d0 from mountpoint /dev/vdb
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.047 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Attempting to detach device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.048 2 DEBUG nova.virt.libvirt.guest [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   </source>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]: </disk>
Oct 02 12:25:38 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.055 2 INFO nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the persistent domain config.
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.056 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.056 2 DEBUG nova.virt.libvirt.guest [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-5957fb80-298a-4379-a0ba-fde86e2113d0">
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   </source>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <serial>5957fb80-298a-4379-a0ba-fde86e2113d0</serial>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 12:25:38 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:25:38 compute-1 nova_compute[230518]: </disk>
Oct 02 12:25:38 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.165 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759407938.1650586, 12ae9024-48e3-4894-ac32-41af4e31c223 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.168 2 DEBUG nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 12ae9024-48e3-4894-ac32-41af4e31c223 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.171 2 INFO nova.virt.libvirt.driver [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully detached device vdb from instance 12ae9024-48e3-4894-ac32-41af4e31c223 from the live domain config.
Oct 02 12:25:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:38 compute-1 ceph-mon[80926]: pgmap v1440: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 1023 B/s wr, 2 op/s
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.684 2 DEBUG nova.objects.instance [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'flavor' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:38 compute-1 nova_compute[230518]: 2025-10-02 12:25:38.759 2 DEBUG oslo_concurrency.lockutils [None req-69862a48-48cb-4715-9652-e37b72aee897 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:39 compute-1 sudo[254062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:25:39 compute-1 sudo[254062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:39 compute-1 sudo[254062]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:39 compute-1 sudo[254092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:25:39 compute-1 sudo[254092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:25:39 compute-1 sudo[254092]: pam_unix(sudo:session): session closed for user root
Oct 02 12:25:39 compute-1 podman[254086]: 2025-10-02 12:25:39.212822825 +0000 UTC m=+0.146992686 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:25:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:25:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 82K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 21K writes, 7216 syncs, 2.97 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9633 writes, 36K keys, 9633 commit groups, 1.0 writes per commit group, ingest: 33.79 MB, 0.06 MB/s
                                           Interval WAL: 9633 writes, 3951 syncs, 2.44 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:25:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:25:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:39.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:39.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:40 compute-1 ceph-mon[80926]: pgmap v1441: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.191 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.192 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.193 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.194 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3840614406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.698 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:25:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:41.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.868 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:25:41 compute-1 nova_compute[230518]: 2025-10-02 12:25:41.868 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:25:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3840614406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.028 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.029 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4471MB free_disk=20.89710235595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.030 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.030 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.223 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 12ae9024-48e3-4894-ac32-41af4e31c223 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.224 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.224 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.399 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.400 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:42.401 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.610 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.715 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.716 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.753 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.801 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:25:42 compute-1 nova_compute[230518]: 2025-10-02 12:25:42.851 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:42 compute-1 ceph-mon[80926]: pgmap v1442: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Oct 02 12:25:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1383723782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1350195972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:43 compute-1 nova_compute[230518]: 2025-10-02 12:25:43.293 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:43 compute-1 nova_compute[230518]: 2025-10-02 12:25:43.298 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:43 compute-1 nova_compute[230518]: 2025-10-02 12:25:43.540 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:43 compute-1 nova_compute[230518]: 2025-10-02 12:25:43.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1350195972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1633199061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.840 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.841 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.842 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.843 2 INFO nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Terminating instance
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.845 2 DEBUG nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:25:44 compute-1 kernel: tapac685902-7a (unregistering): left promiscuous mode
Oct 02 12:25:44 compute-1 NetworkManager[44960]: <info>  [1759407944.9080] device (tapac685902-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:44 compute-1 ovn_controller[129257]: 2025-10-02T12:25:44Z|00261|binding|INFO|Releasing lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd from this chassis (sb_readonly=0)
Oct 02 12:25:44 compute-1 ovn_controller[129257]: 2025-10-02T12:25:44Z|00262|binding|INFO|Setting lport ac685902-7a16-4ff8-ac8b-85430ba9f8cd down in Southbound
Oct 02 12:25:44 compute-1 ovn_controller[129257]: 2025-10-02T12:25:44Z|00263|binding|INFO|Removing iface tapac685902-7a ovn-installed in OVS
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:44 compute-1 nova_compute[230518]: 2025-10-02 12:25:44.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:44 compute-1 ceph-mon[80926]: pgmap v1443: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:44 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000037.scope: Deactivated successfully.
Oct 02 12:25:44 compute-1 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000037.scope: Consumed 16.459s CPU time.
Oct 02 12:25:44 compute-1 systemd-machined[188247]: Machine qemu-29-instance-00000037 terminated.
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.034 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.034 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.079 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:09:31 10.100.0.10'], port_security=['fa:16:3e:2e:09:31 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '12ae9024-48e3-4894-ac32-41af4e31c223', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34ecce08-278a-4a16-9f99-cfef8148769d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12dfeaa31a6e4a2481a5332ce3094262', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7b0b284f-6afe-4611-b8db-1ab4d5466651', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7fc2557b-b462-4493-9e4f-7b4266aaba5c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ac685902-7a16-4ff8-ac8b-85430ba9f8cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.081 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ac685902-7a16-4ff8-ac8b-85430ba9f8cd in datapath 34ecce08-278a-4a16-9f99-cfef8148769d unbound from our chassis
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.085 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34ecce08-278a-4a16-9f99-cfef8148769d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.088 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47e381a8-ab00-47ff-91c6-5a1a7b2e150d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.087 2 INFO nova.virt.libvirt.driver [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance destroyed successfully.
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.088 2 DEBUG nova.objects.instance [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lazy-loading 'resources' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.089 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d namespace which is not needed anymore
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.140 2 DEBUG nova.virt.libvirt.vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:24:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-857840689',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-857840689',id=55,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEu8Mg1mF4AKAnETiPc/1EN0o0yW0N8RDXfeYfe2EYf2XEQAi1u5vSoxbgTCJBIGOu3aJCScfGKzHsqNwZ9VVPhf8HNvzQILfXuoUBQZfVHYTHLisifzGPoXHjQ6TltjQ==',key_name='tempest-keypair-2123530339',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:24:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='12dfeaa31a6e4a2481a5332ce3094262',ramdisk_id='',reservation_id='r-92a9mnnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-158673309-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:24:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='51b45ef40bdc499a8409fd2bf3e6a339',uuid=12ae9024-48e3-4894-ac32-41af4e31c223,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.141 2 DEBUG nova.network.os_vif_util [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converting VIF {"id": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "address": "fa:16:3e:2e:09:31", "network": {"id": "34ecce08-278a-4a16-9f99-cfef8148769d", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-640265691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "12dfeaa31a6e4a2481a5332ce3094262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac685902-7a", "ovs_interfaceid": "ac685902-7a16-4ff8-ac8b-85430ba9f8cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.142 2 DEBUG nova.network.os_vif_util [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.142 2 DEBUG os_vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.145 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac685902-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.149 2 INFO os_vif [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:09:31,bridge_name='br-int',has_traffic_filtering=True,id=ac685902-7a16-4ff8-ac8b-85430ba9f8cd,network=Network(34ecce08-278a-4a16-9f99-cfef8148769d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac685902-7a')
Oct 02 12:25:45 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : haproxy version is 2.8.14-c23fe91
Oct 02 12:25:45 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [NOTICE]   (253745) : path to executable is /usr/sbin/haproxy
Oct 02 12:25:45 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [WARNING]  (253745) : Exiting Master process...
Oct 02 12:25:45 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [ALERT]    (253745) : Current worker (253747) exited with code 143 (Terminated)
Oct 02 12:25:45 compute-1 neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d[253741]: [WARNING]  (253745) : All workers exited. Exiting... (0)
Oct 02 12:25:45 compute-1 systemd[1]: libpod-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope: Deactivated successfully.
Oct 02 12:25:45 compute-1 podman[254234]: 2025-10-02 12:25:45.255168271 +0000 UTC m=+0.053503205 container died 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:25:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea-userdata-shm.mount: Deactivated successfully.
Oct 02 12:25:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-b1a1fb007e30a62109ba083d1b17ba4b02cd8951cd7c632e42029e9dfea28755-merged.mount: Deactivated successfully.
Oct 02 12:25:45 compute-1 podman[254234]: 2025-10-02 12:25:45.29740012 +0000 UTC m=+0.095735044 container cleanup 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:25:45 compute-1 systemd[1]: libpod-conmon-597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea.scope: Deactivated successfully.
Oct 02 12:25:45 compute-1 podman[254267]: 2025-10-02 12:25:45.36002565 +0000 UTC m=+0.041202317 container remove 597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.365 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79c31022-9a68-40dc-9394-3327f6757929]: (4, ('Thu Oct  2 12:25:45 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea)\n597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea\nThu Oct  2 12:25:45 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d (597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea)\n597f2e1664669f4d29bcfa450fa533a9d3b1c4ba41417fd9d91b5b8ebdbec8ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.367 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b808c221-f729-4c76-beac-f16be1cb1da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.368 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34ecce08-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:25:45 compute-1 kernel: tap34ecce08-20: left promiscuous mode
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.375 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f139053f-e3e3-4339-97c0-f60d3c0b48ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5a02cb80-b21f-4334-8943-fae62d06b0e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.409 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fec14c-119a-4c1b-b873-a00b3bb6557d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.428 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[867e46b4-0103-4b0b-8b57-25fc6448ea4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 575464, 'reachable_time': 35895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254284, 'error': None, 'target': 'ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 systemd[1]: run-netns-ovnmeta\x2d34ecce08\x2d278a\x2d4a16\x2d9f99\x2dcfef8148769d.mount: Deactivated successfully.
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.432 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34ecce08-278a-4a16-9f99-cfef8148769d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:25:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:25:45.433 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b4fe82c7-2fea-43ed-aa52-481bb515ffe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.777 2 INFO nova.virt.libvirt.driver [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deleting instance files /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223_del
Oct 02 12:25:45 compute-1 nova_compute[230518]: 2025-10-02 12:25:45.778 2 INFO nova.virt.libvirt.driver [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deletion of /var/lib/nova/instances/12ae9024-48e3-4894-ac32-41af4e31c223_del complete
Oct 02 12:25:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:45.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.227 2 INFO nova.compute.manager [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 1.38 seconds to destroy the instance on the hypervisor.
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.228 2 DEBUG oslo.service.loopingcall [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.228 2 DEBUG nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.229 2 DEBUG nova.network.neutron [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.716 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.716 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.717 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.718 2 DEBUG oslo_concurrency.lockutils [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.718 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:46 compute-1 nova_compute[230518]: 2025-10-02 12:25:46.719 2 DEBUG nova.compute.manager [req-4cd8cbd4-ea2b-49ac-807b-242a987e16d6 req-b7c3ece5-5086-4615-ac12-5c01cf8ff342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-unplugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:25:46 compute-1 podman[254286]: 2025-10-02 12:25:46.814026543 +0000 UTC m=+0.064472890 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:25:46 compute-1 podman[254287]: 2025-10-02 12:25:46.823076387 +0000 UTC m=+0.070428707 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:25:47 compute-1 ceph-mon[80926]: pgmap v1444: 305 pgs: 305 active+clean; 246 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct 02 12:25:47 compute-1 nova_compute[230518]: 2025-10-02 12:25:47.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:47.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.102 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:48 compute-1 ceph-mon[80926]: pgmap v1445: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/509332985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.525 2 DEBUG nova.network.neutron [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.560 2 INFO nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Took 2.33 seconds to deallocate network for instance.
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.670 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.671 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.749 2 DEBUG oslo_concurrency.processutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:25:48 compute-1 nova_compute[230518]: 2025-10-02 12:25:48.979 2 DEBUG nova.compute.manager [req-a9963c2f-1ad0-49bc-b39d-78d3291b4216 req-aed2599f-c31c-44f3-93f0-c2ab91420ae4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-deleted-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:25:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:25:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2837588613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.227 2 DEBUG oslo_concurrency.processutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.234 2 DEBUG nova.compute.provider_tree [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.259 2 DEBUG nova.scheduler.client.report [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.279 2 DEBUG nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.280 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.280 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.281 2 DEBUG oslo_concurrency.lockutils [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.281 2 DEBUG nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] No waiting events found dispatching network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.282 2 WARNING nova.compute.manager [req-42b6eda8-5789-4a61-8c09-524f3257f5ed req-cb4d005a-79b4-4be1-8414-574a6faf8460 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Received unexpected event network-vif-plugged-ac685902-7a16-4ff8-ac8b-85430ba9f8cd for instance with vm_state deleted and task_state None.
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.338 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.385 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.386 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12ae9024-48e3-4894-ac32-41af4e31c223 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.402 2 INFO nova.scheduler.client.report [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Deleted allocations for instance 12ae9024-48e3-4894-ac32-41af4e31c223
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.708 2 DEBUG oslo_concurrency.lockutils [None req-faf3e979-4093-4ac3-a05e-13160467ef5a 51b45ef40bdc499a8409fd2bf3e6a339 12dfeaa31a6e4a2481a5332ce3094262 - - default default] Lock "12ae9024-48e3-4894-ac32-41af4e31c223" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:25:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:49.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:49 compute-1 nova_compute[230518]: 2025-10-02 12:25:49.958 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:25:50 compute-1 nova_compute[230518]: 2025-10-02 12:25:50.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:50 compute-1 ceph-mon[80926]: pgmap v1446: 305 pgs: 305 active+clean; 192 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 3.3 KiB/s rd, 3.2 KiB/s wr, 7 op/s
Oct 02 12:25:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2837588613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3254338683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:50 compute-1 nova_compute[230518]: 2025-10-02 12:25:50.614 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:25:50 compute-1 nova_compute[230518]: 2025-10-02 12:25:50.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-12ae9024-48e3-4894-ac32-41af4e31c223" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:25:50 compute-1 nova_compute[230518]: 2025-10-02 12:25:50.665 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:25:50 compute-1 nova_compute[230518]: 2025-10-02 12:25:50.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-1 nova_compute[230518]: 2025-10-02 12:25:51.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:25:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1471354611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:25:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:51.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:51.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:52 compute-1 nova_compute[230518]: 2025-10-02 12:25:52.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:52 compute-1 ceph-mon[80926]: pgmap v1447: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct 02 12:25:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:53.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:54 compute-1 ceph-mon[80926]: pgmap v1448: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 3.3 KiB/s wr, 34 op/s
Oct 02 12:25:55 compute-1 nova_compute[230518]: 2025-10-02 12:25:55.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:25:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:55.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:25:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:56 compute-1 ceph-mon[80926]: pgmap v1449: 305 pgs: 305 active+clean; 135 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 811 KiB/s wr, 40 op/s
Oct 02 12:25:57 compute-1 nova_compute[230518]: 2025-10-02 12:25:57.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:57 compute-1 nova_compute[230518]: 2025-10-02 12:25:57.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:25:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:25:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:57.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:25:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:25:59 compute-1 ceph-mon[80926]: pgmap v1450: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:25:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:25:59.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:25:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:25:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:25:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:25:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:00 compute-1 nova_compute[230518]: 2025-10-02 12:26:00.084 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759407945.0832973, 12ae9024-48e3-4894-ac32-41af4e31c223 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:26:00 compute-1 nova_compute[230518]: 2025-10-02 12:26:00.085 2 INFO nova.compute.manager [-] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] VM Stopped (Lifecycle Event)
Oct 02 12:26:00 compute-1 nova_compute[230518]: 2025-10-02 12:26:00.115 2 DEBUG nova.compute.manager [None req-eed75ee4-2817-4320-b171-2f7f3a72f4f9 - - - - - -] [instance: 12ae9024-48e3-4894-ac32-41af4e31c223] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:26:00 compute-1 nova_compute[230518]: 2025-10-02 12:26:00.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:00 compute-1 ceph-mon[80926]: pgmap v1451: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:26:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2524818878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1061664957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1934729901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:01.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:26:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:26:02 compute-1 nova_compute[230518]: 2025-10-02 12:26:02.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:03 compute-1 ceph-mon[80926]: pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct 02 12:26:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:03.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:03.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:04 compute-1 ceph-mon[80926]: pgmap v1453: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 12:26:05 compute-1 nova_compute[230518]: 2025-10-02 12:26:05.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/425071959' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:05.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:05.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:06 compute-1 nova_compute[230518]: 2025-10-02 12:26:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:06 compute-1 nova_compute[230518]: 2025-10-02 12:26:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:26:06 compute-1 nova_compute[230518]: 2025-10-02 12:26:06.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:26:06 compute-1 ceph-mon[80926]: pgmap v1454: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 111 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct 02 12:26:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2398778346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:07 compute-1 nova_compute[230518]: 2025-10-02 12:26:07.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3384030533' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:07.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:08 compute-1 podman[254351]: 2025-10-02 12:26:08.843533213 +0000 UTC m=+0.081536397 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:26:09 compute-1 ceph-mon[80926]: pgmap v1455: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1021 KiB/s wr, 99 op/s
Oct 02 12:26:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:26:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1364927460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:09 compute-1 podman[254370]: 2025-10-02 12:26:09.821452815 +0000 UTC m=+0.074647200 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:26:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:09.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:09.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1364927460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:10 compute-1 nova_compute[230518]: 2025-10-02 12:26:10.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:26:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6719 writes, 34K keys, 6719 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 6719 writes, 6719 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1644 writes, 8305 keys, 1644 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s
                                           Interval WAL: 1644 writes, 1644 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     90.8      0.46              0.11        18    0.025       0      0       0.0       0.0
                                             L6      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6    149.9    123.9      1.21              0.40        17    0.071     86K   9957       0.0       0.0
                                            Sum      1/0    9.99 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.6    108.8    114.8      1.66              0.51        35    0.048     86K   9957       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    145.4    149.4      0.34              0.14         8    0.043     24K   3114       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    149.9    123.9      1.21              0.40        17    0.071     86K   9957       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     91.2      0.45              0.11        17    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.040, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.08 MB/s write, 0.18 GB read, 0.08 MB/s read, 1.7 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 19.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000166 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1141,18.96 MB,6.23563%) FilterBlock(35,248.05 KB,0.079682%) IndexBlock(35,451.27 KB,0.144964%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:26:11 compute-1 ceph-mon[80926]: pgmap v1456: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 63 op/s
Oct 02 12:26:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:26:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2035805136' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:26:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:11.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:11.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:12 compute-1 ceph-mon[80926]: pgmap v1457: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 90 op/s
Oct 02 12:26:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1061387115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:12 compute-1 nova_compute[230518]: 2025-10-02 12:26:12.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/695025001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:13.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:13.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:14 compute-1 ceph-mon[80926]: pgmap v1458: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 99 op/s
Oct 02 12:26:15 compute-1 nova_compute[230518]: 2025-10-02 12:26:15.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:15 compute-1 nova_compute[230518]: 2025-10-02 12:26:15.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:15 compute-1 nova_compute[230518]: 2025-10-02 12:26:15.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:15.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:15.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:16 compute-1 ceph-mon[80926]: pgmap v1459: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 95 op/s
Oct 02 12:26:17 compute-1 nova_compute[230518]: 2025-10-02 12:26:17.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:17 compute-1 podman[254397]: 2025-10-02 12:26:17.795304721 +0000 UTC m=+0.053726021 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:26:17 compute-1 podman[254398]: 2025-10-02 12:26:17.818029206 +0000 UTC m=+0.063157219 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Oct 02 12:26:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:17.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:17.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:18 compute-1 ceph-mon[80926]: pgmap v1460: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 117 op/s
Oct 02 12:26:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:19.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:19.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:20 compute-1 nova_compute[230518]: 2025-10-02 12:26:20.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:20 compute-1 ceph-mon[80926]: pgmap v1461: 305 pgs: 305 active+clean; 149 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 963 KiB/s rd, 1.4 MiB/s wr, 75 op/s
Oct 02 12:26:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:21.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:21.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:22 compute-1 nova_compute[230518]: 2025-10-02 12:26:22.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:22 compute-1 ceph-mon[80926]: pgmap v1462: 305 pgs: 305 active+clean; 162 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct 02 12:26:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:23.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:23.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:24 compute-1 ceph-mon[80926]: pgmap v1463: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:26:25 compute-1 nova_compute[230518]: 2025-10-02 12:26:25.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:25.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.923 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:25.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:26 compute-1 ceph-mon[80926]: pgmap v1464: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 12:26:27 compute-1 nova_compute[230518]: 2025-10-02 12:26:27.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:27.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:27.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:28 compute-1 ceph-mon[80926]: pgmap v1465: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 12:26:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:29.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:29.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:30 compute-1 nova_compute[230518]: 2025-10-02 12:26:30.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:30 compute-1 ceph-mon[80926]: pgmap v1466: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:31.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:32 compute-1 ceph-mon[80926]: pgmap v1467: 305 pgs: 305 active+clean; 167 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 729 KiB/s wr, 24 op/s
Oct 02 12:26:32 compute-1 nova_compute[230518]: 2025-10-02 12:26:32.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:33.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:33.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:34 compute-1 ceph-mon[80926]: pgmap v1468: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 88 KiB/s wr, 11 op/s
Oct 02 12:26:35 compute-1 nova_compute[230518]: 2025-10-02 12:26:35.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:35.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:35.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:36 compute-1 ceph-mon[80926]: pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Oct 02 12:26:37 compute-1 nova_compute[230518]: 2025-10-02 12:26:37.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:37.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:38 compute-1 ceph-mon[80926]: pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Oct 02 12:26:39 compute-1 sudo[254436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-1 sudo[254436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-1 sudo[254436]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-1 podman[254460]: 2025-10-02 12:26:39.436647456 +0000 UTC m=+0.084617193 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:26:39 compute-1 sudo[254478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:39 compute-1 sudo[254478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-1 sudo[254478]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-1 sudo[254505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:39 compute-1 sudo[254505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-1 sudo[254505]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-1 sudo[254530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:26:39 compute-1 sudo[254530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:39 compute-1 sudo[254530]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:40 compute-1 nova_compute[230518]: 2025-10-02 12:26:40.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:40 compute-1 sudo[254576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:40 compute-1 sudo[254576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-1 sudo[254576]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-1 sudo[254607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:26:40 compute-1 sudo[254607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-1 sudo[254607]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-1 podman[254600]: 2025-10-02 12:26:40.405650257 +0000 UTC m=+0.082147406 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 12:26:40 compute-1 sudo[254649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:40 compute-1 sudo[254649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-1 sudo[254649]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:40 compute-1 sudo[254677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:26:40 compute-1 sudo[254677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:40 compute-1 sudo[254677]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:41 compute-1 ceph-mon[80926]: pgmap v1471: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.110 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.111 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4063761561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.575 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.724 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4683MB free_disk=20.942699432373047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:26:41 compute-1 nova_compute[230518]: 2025-10-02 12:26:41.725 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:26:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:41.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:26:42 compute-1 ceph-mon[80926]: pgmap v1472: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:26:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4063761561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:42 compute-1 nova_compute[230518]: 2025-10-02 12:26:42.286 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:26:42 compute-1 nova_compute[230518]: 2025-10-02 12:26:42.287 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:26:42 compute-1 nova_compute[230518]: 2025-10-02 12:26:42.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:42 compute-1 nova_compute[230518]: 2025-10-02 12:26:42.605 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:26:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:26:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2116819383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-1 nova_compute[230518]: 2025-10-02 12:26:43.025 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:26:43 compute-1 nova_compute[230518]: 2025-10-02 12:26:43.032 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:26:43 compute-1 nova_compute[230518]: 2025-10-02 12:26:43.052 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:26:43 compute-1 nova_compute[230518]: 2025-10-02 12:26:43.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:26:43 compute-1 nova_compute[230518]: 2025-10-02 12:26:43.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:26:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1937974989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2116819383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1347811142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:44 compute-1 ceph-mon[80926]: pgmap v1473: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct 02 12:26:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:44.901 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:26:44 compute-1 nova_compute[230518]: 2025-10-02 12:26:44.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:44.903 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:26:45 compute-1 nova_compute[230518]: 2025-10-02 12:26:45.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:45 compute-1 nova_compute[230518]: 2025-10-02 12:26:45.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:45 compute-1 nova_compute[230518]: 2025-10-02 12:26:45.063 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:45 compute-1 nova_compute[230518]: 2025-10-02 12:26:45.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:45.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:46 compute-1 nova_compute[230518]: 2025-10-02 12:26:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:46 compute-1 ceph-mon[80926]: pgmap v1474: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:47 compute-1 nova_compute[230518]: 2025-10-02 12:26:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:47 compute-1 nova_compute[230518]: 2025-10-02 12:26:47.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:26:47 compute-1 nova_compute[230518]: 2025-10-02 12:26:47.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:47.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3653337162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:48 compute-1 podman[254779]: 2025-10-02 12:26:48.803789852 +0000 UTC m=+0.048205938 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:26:48 compute-1 podman[254778]: 2025-10-02 12:26:48.803611886 +0000 UTC m=+0.049860620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 02 12:26:49 compute-1 ceph-mon[80926]: pgmap v1475: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2080025941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:26:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3462639232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:49.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:50 compute-1 nova_compute[230518]: 2025-10-02 12:26:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:50 compute-1 nova_compute[230518]: 2025-10-02 12:26:50.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:50 compute-1 ceph-mon[80926]: pgmap v1476: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail
Oct 02 12:26:50 compute-1 sudo[254817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:26:50 compute-1 sudo[254817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:50 compute-1 sudo[254817]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:50 compute-1 sudo[254842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:26:50 compute-1 sudo[254842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:26:50 compute-1 sudo[254842]: pam_unix(sudo:session): session closed for user root
Oct 02 12:26:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:26:50.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:26:51 compute-1 nova_compute[230518]: 2025-10-02 12:26:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:51 compute-1 nova_compute[230518]: 2025-10-02 12:26:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:26:51 compute-1 nova_compute[230518]: 2025-10-02 12:26:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:26:51 compute-1 nova_compute[230518]: 2025-10-02 12:26:51.089 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:26:51 compute-1 nova_compute[230518]: 2025-10-02 12:26:51.090 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:26:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:26:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:51.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:51.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:52 compute-1 nova_compute[230518]: 2025-10-02 12:26:52.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:52 compute-1 ceph-mon[80926]: pgmap v1477: 305 pgs: 305 active+clean; 167 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Oct 02 12:26:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:53.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:54 compute-1 ceph-mon[80926]: pgmap v1478: 305 pgs: 305 active+clean; 118 MiB data, 565 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 18 KiB/s wr, 25 op/s
Oct 02 12:26:55 compute-1 nova_compute[230518]: 2025-10-02 12:26:55.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:55.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:56 compute-1 ceph-mon[80926]: pgmap v1479: 305 pgs: 305 active+clean; 88 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 40 op/s
Oct 02 12:26:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/455478377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:26:57 compute-1 nova_compute[230518]: 2025-10-02 12:26:57.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:26:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:26:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:57.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.006805) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018006912, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2003, "num_deletes": 255, "total_data_size": 4758920, "memory_usage": 4822664, "flush_reason": "Manual Compaction"}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018018051, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1899287, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33642, "largest_seqno": 35640, "table_properties": {"data_size": 1892988, "index_size": 3245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16669, "raw_average_key_size": 21, "raw_value_size": 1879034, "raw_average_value_size": 2402, "num_data_blocks": 145, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759407850, "oldest_key_time": 1759407850, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 11288 microseconds, and 5843 cpu microseconds.
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018110) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1899287 bytes OK
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.018140) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020771) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020785) EVENT_LOG_v1 {"time_micros": 1759408018020780, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.020810) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 4749804, prev total WAL file size 4749804, number of live WAL files 2.
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1854KB)], [63(10225KB)]
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018022598, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 12370288, "oldest_snapshot_seqno": -1}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6020 keys, 9706784 bytes, temperature: kUnknown
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018105175, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 9706784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9666387, "index_size": 24223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153506, "raw_average_key_size": 25, "raw_value_size": 9558199, "raw_average_value_size": 1587, "num_data_blocks": 978, "num_entries": 6020, "num_filter_entries": 6020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.105476) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9706784 bytes
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.106928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.6 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.0 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(11.6) write-amplify(5.1) OK, records in: 6469, records dropped: 449 output_compression: NoCompression
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.106958) EVENT_LOG_v1 {"time_micros": 1759408018106945, "job": 38, "event": "compaction_finished", "compaction_time_micros": 82682, "compaction_time_cpu_micros": 46039, "output_level": 6, "num_output_files": 1, "total_output_size": 9706784, "num_input_records": 6469, "num_output_records": 6020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018107586, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408018109995, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.022397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:26:58.110148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:26:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:26:59 compute-1 ceph-mon[80926]: pgmap v1480: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:26:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:26:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:26:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:26:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:26:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:26:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/458153804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:00 compute-1 nova_compute[230518]: 2025-10-02 12:27:00.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:01 compute-1 ceph-mon[80926]: pgmap v1481: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 19 KiB/s wr, 43 op/s
Oct 02 12:27:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:01.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:02 compute-1 nova_compute[230518]: 2025-10-02 12:27:02.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:02 compute-1 ceph-mon[80926]: pgmap v1482: 305 pgs: 305 active+clean; 88 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 19 KiB/s wr, 45 op/s
Oct 02 12:27:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:04.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:04 compute-1 ceph-mon[80926]: pgmap v1483: 305 pgs: 305 active+clean; 117 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 988 KiB/s wr, 121 op/s
Oct 02 12:27:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:27:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:27:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-1 nova_compute[230518]: 2025-10-02 12:27:05.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3685327218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:27:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:05.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:06.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:06 compute-1 ceph-mon[80926]: pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 02 12:27:07 compute-1 nova_compute[230518]: 2025-10-02 12:27:07.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2134444350' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3571085260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:08.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:08 compute-1 ceph-mon[80926]: pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct 02 12:27:09 compute-1 podman[254868]: 2025-10-02 12:27:09.84147917 +0000 UTC m=+0.084912893 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:27:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:09.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:10.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:10 compute-1 nova_compute[230518]: 2025-10-02 12:27:10.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:10 compute-1 podman[254887]: 2025-10-02 12:27:10.878445529 +0000 UTC m=+0.118697975 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 02 12:27:11 compute-1 ceph-mon[80926]: pgmap v1486: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:27:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:12.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:12 compute-1 nova_compute[230518]: 2025-10-02 12:27:12.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:12 compute-1 ceph-mon[80926]: pgmap v1487: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 02 12:27:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:14.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:27:14 compute-1 ceph-mon[80926]: pgmap v1488: 305 pgs: 305 active+clean; 142 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct 02 12:27:14 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-1 nova_compute[230518]: 2025-10-02 12:27:15.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 12:27:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:16.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:17 compute-1 ceph-mon[80926]: pgmap v1489: 305 pgs: 305 active+clean; 147 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Oct 02 12:27:17 compute-1 nova_compute[230518]: 2025-10-02 12:27:17.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:17.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:18.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:18 compute-1 ceph-mon[80926]: pgmap v1490: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.403 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.404 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.421 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.524 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.525 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.533 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.534 2 INFO nova.compute.claims [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:27:18 compute-1 nova_compute[230518]: 2025-10-02 12:27:18.630 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3727851687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.094 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.103 2 DEBUG nova.compute.provider_tree [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.122 2 DEBUG nova.scheduler.client.report [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.144 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.145 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.193 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.194 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.219 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.237 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.325 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.328 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.329 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating image(s)
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.371 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.414 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.461 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.466 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.507 2 DEBUG nova.policy [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b6687fbfb1f484ba99ef93bbb4ffa7e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c20ce9073494490588607984318667f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.547 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.549 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.551 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.552 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.607 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.611 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 78888fa8-1051-485d-9e51-feaec2648c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3727851687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:19 compute-1 podman[255030]: 2025-10-02 12:27:19.818882118 +0000 UTC m=+0.067650290 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:27:19 compute-1 podman[255029]: 2025-10-02 12:27:19.823151653 +0000 UTC m=+0.065481092 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.895 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 78888fa8-1051-485d-9e51-feaec2648c8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:19 compute-1 nova_compute[230518]: 2025-10-02 12:27:19.966 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] resizing rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:27:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:19.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:20.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.069 2 DEBUG nova.objects.instance [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.082 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.083 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Ensure instance console log exists: /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.083 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.084 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.084 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:20 compute-1 nova_compute[230518]: 2025-10-02 12:27:20.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:20 compute-1 ceph-mon[80926]: pgmap v1491: 305 pgs: 305 active+clean; 165 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Oct 02 12:27:21 compute-1 nova_compute[230518]: 2025-10-02 12:27:21.500 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Successfully created port: 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:27:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:22 compute-1 nova_compute[230518]: 2025-10-02 12:27:22.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:22 compute-1 ceph-mon[80926]: pgmap v1492: 305 pgs: 305 active+clean; 167 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 214 op/s
Oct 02 12:27:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:23.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:24.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.323 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Successfully updated port: 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.341 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG nova.compute.manager [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG nova.compute.manager [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.465 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.510 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.677 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.677 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.697 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.777 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.777 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.784 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.784 2 INFO nova.compute.claims [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:27:24 compute-1 nova_compute[230518]: 2025-10-02 12:27:24.920 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:25 compute-1 ceph-mon[80926]: pgmap v1493: 305 pgs: 305 active+clean; 184 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 292 op/s
Oct 02 12:27:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1124206307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.344 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.349 2 DEBUG nova.compute.provider_tree [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.369 2 DEBUG nova.scheduler.client.report [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.394 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.394 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.434 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.434 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.457 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:27:25 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.473 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.669 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.670 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.670 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating image(s)
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.694 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.727 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.755 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.759 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.785 2 DEBUG nova.policy [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eff0431e92464c78b780c8365e6e920c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.821 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.822 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.823 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.823 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.851 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:25 compute-1 nova_compute[230518]: 2025-10-02 12:27:25.854 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 be855518-90af-4fa9-b969-4a1579934010_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:25.924 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:27:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:25.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:27:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:26.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:26 compute-1 ceph-mon[80926]: pgmap v1494: 305 pgs: 305 active+clean; 225 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 283 op/s
Oct 02 12:27:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1124206307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.410 2 DEBUG nova.network.neutron [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.431 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.431 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance network_info: |[{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.432 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.432 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.435 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start _get_guest_xml network_info=[{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.439 2 WARNING nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.444 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.444 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.452 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.452 2 DEBUG nova.virt.libvirt.host [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.454 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.454 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.455 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.456 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.457 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.457 2 DEBUG nova.virt.hardware [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.460 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.854 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 be855518-90af-4fa9-b969-4a1579934010_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.000s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2610359259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:26 compute-1 nova_compute[230518]: 2025-10-02 12:27:26.929 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] resizing rbd image be855518-90af-4fa9-b969-4a1579934010_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.158 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.698s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.182 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.185 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.310 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Successfully created port: 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3572455864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.603 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.605 2 DEBUG nova.virt.libvirt.vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.606 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.607 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.609 2 DEBUG nova.objects.instance [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.642 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <uuid>78888fa8-1051-485d-9e51-feaec2648c8f</uuid>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <name>instance-0000003b</name>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-232208417</nova:name>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:27:26</nova:creationTime>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:user uuid="2b6687fbfb1f484ba99ef93bbb4ffa7e">tempest-FloatingIPsAssociationTestJSON-875651151-project-member</nova:user>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:project uuid="c20ce9073494490588607984318667f5">tempest-FloatingIPsAssociationTestJSON-875651151</nova:project>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <nova:port uuid="4b915a8b-b3f4-47fe-bb5a-1e712c3d182e">
Oct 02 12:27:27 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <system>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="serial">78888fa8-1051-485d-9e51-feaec2648c8f</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="uuid">78888fa8-1051-485d-9e51-feaec2648c8f</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </system>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <os>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </os>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <features>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </features>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/78888fa8-1051-485d-9e51-feaec2648c8f_disk">
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </source>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/78888fa8-1051-485d-9e51-feaec2648c8f_disk.config">
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </source>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:27:27 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:db:de:41"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <target dev="tap4b915a8b-b3"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/console.log" append="off"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <video>
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </video>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:27:27 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:27:27 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:27:27 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:27:27 compute-1 nova_compute[230518]: </domain>
Oct 02 12:27:27 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Preparing to wait for external event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.644 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.645 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.645 2 DEBUG nova.virt.libvirt.vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:19Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG nova.network.os_vif_util [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.646 2 DEBUG os_vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b915a8b-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.651 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b915a8b-b3, col_values=(('external_ids', {'iface-id': '4b915a8b-b3f4-47fe-bb5a-1e712c3d182e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:de:41', 'vm-uuid': '78888fa8-1051-485d-9e51-feaec2648c8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-1 NetworkManager[44960]: <info>  [1759408047.6530] manager: (tap4b915a8b-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.658 2 INFO os_vif [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3')
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.704 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.705 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.705 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] No VIF found with MAC fa:16:3e:db:de:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.705 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Using config drive
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.730 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2610359259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.865 2 DEBUG nova.objects.instance [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'migration_context' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Ensure instance console log exists: /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.878 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:27 compute-1 nova_compute[230518]: 2025-10-02 12:27:27.879 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:28.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:28.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.629 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Creating config drive at /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.633 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9z6yd2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.668 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.669 2 DEBUG nova.network.neutron [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.694 2 DEBUG oslo_concurrency.lockutils [req-3ded94a7-72f1-442c-a17f-2776aab1fc24 req-c184aaab-6a82-4874-931b-c5f495569673 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.697 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Successfully updated port: 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquired lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.737 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.764 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9z6yd2h" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.793 2 DEBUG nova.storage.rbd_utils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] rbd image 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.797 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:28 compute-1 ceph-mon[80926]: pgmap v1495: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.4 MiB/s wr, 287 op/s
Oct 02 12:27:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3572455864' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.978 2 DEBUG oslo_concurrency.processutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config 78888fa8-1051-485d-9e51-feaec2648c8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:28 compute-1 nova_compute[230518]: 2025-10-02 12:27:28.979 2 INFO nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deleting local config drive /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f/disk.config because it was imported into RBD.
Oct 02 12:27:29 compute-1 kernel: tap4b915a8b-b3: entered promiscuous mode
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.0242] manager: (tap4b915a8b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 ovn_controller[129257]: 2025-10-02T12:27:29Z|00264|binding|INFO|Claiming lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for this chassis.
Oct 02 12:27:29 compute-1 ovn_controller[129257]: 2025-10-02T12:27:29Z|00265|binding|INFO|4b915a8b-b3f4-47fe-bb5a-1e712c3d182e: Claiming fa:16:3e:db:de:41 10.100.0.8
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 systemd-udevd[255462]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:27:29 compute-1 systemd-machined[188247]: New machine qemu-30-instance-0000003b.
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.0620] device (tap4b915a8b-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.0635] device (tap4b915a8b-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:27:29 compute-1 systemd[1]: Started Virtual Machine qemu-30-instance-0000003b.
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 ovn_controller[129257]: 2025-10-02T12:27:29Z|00266|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e ovn-installed in OVS
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.308 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:27:29 compute-1 ovn_controller[129257]: 2025-10-02T12:27:29Z|00267|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e up in Southbound
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.571 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:de:41 10.100.0.8'], port_security=['fa:16:3e:db:de:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '78888fa8-1051-485d-9e51-feaec2648c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.572 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef bound to our chassis
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.573 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ef9d29-71eb-4fd4-b4c4-223a02e418af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.587 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa920a635-81 in ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.588 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa920a635-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48beee8f-0d9a-4693-b3fa-795d492f1807]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e26e011-2597-4fd0-8c0f-07db050d0f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.603 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd0ab09-8911-4f40-8ddc-e81c5b78f550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.627 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[296e1898-7091-4cc8-991d-c06a7e1a95b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.658 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d5fd8374-aec9-4bfb-b74f-abd0f98cbfb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.6652] manager: (tapa920a635-80): new Veth device (/org/freedesktop/NetworkManager/Devices/121)
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[401acaf9-15f2-4bce-8af2-0eb5788fafea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.695 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f133e3-eeea-47ef-8c6d-67a22ca74138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.699 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7adf9756-62e5-4033-9934-a95819dc73fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.7225] device (tapa920a635-80): carrier: link connected
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.730 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[49305452-bdbd-4d83-bf84-3ac00ad08c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[000ec547-9859-468a-8608-724c1d879a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591328, 'reachable_time': 35958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255540, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.757 2 DEBUG nova.compute.manager [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-changed-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.758 2 DEBUG nova.compute.manager [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Refreshing instance network info cache due to event network-changed-5404e3f9-be33-4f3a-b227-2fa3944c6bb7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.758 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d1ae8f-f033-4cc9-8095-a8dbe641aab1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5e:4d8c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591328, 'tstamp': 591328}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255541, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c604429b-1eb2-4af6-b42f-de7996af5a4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa920a635-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5e:4d:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591328, 'reachable_time': 35958, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255542, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2bae1b18-bba2-4dc1-8720-7f898dbf6739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.866 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408049.8659763, 78888fa8-1051-485d-9e51-feaec2648c8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.867 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Started (Lifecycle Event)
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fe59cd6f-c522-43e3-b708-c441874cbcd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.893 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.893 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.894 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa920a635-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 kernel: tapa920a635-80: entered promiscuous mode
Oct 02 12:27:29 compute-1 NetworkManager[44960]: <info>  [1759408049.8975] manager: (tapa920a635-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.901 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa920a635-80, col_values=(('external_ids', {'iface-id': '7f4ed3f3-7aae-4781-9951-b5f99eb58474'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 ovn_controller[129257]: 2025-10-02T12:27:29Z|00268|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.906 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.907 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a65f5f93-6845-480a-a6bb-807aafb57e4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.908 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/a920a635-8cc8-4478-b2dc-4d6329a5abef.pid.haproxy
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID a920a635-8cc8-4478-b2dc-4d6329a5abef
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:27:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:29.908 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'env', 'PROCESS_TAG=haproxy-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a920a635-8cc8-4478-b2dc-4d6329a5abef.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.945 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.949 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408049.8662252, 78888fa8-1051-485d-9e51-feaec2648c8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:29 compute-1 nova_compute[230518]: 2025-10-02 12:27:29.949 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Paused (Lifecycle Event)
Oct 02 12:27:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:30.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.039 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:30.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.043 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.182 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:30 compute-1 podman[255574]: 2025-10-02 12:27:30.298698377 +0000 UTC m=+0.062595941 container create 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 02 12:27:30 compute-1 systemd[1]: Started libpod-conmon-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope.
Oct 02 12:27:30 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:27:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8348c266641d7f0169a9362cbaa2a00eb443af6e6b20ca0bc7f72d3ff7a284ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:30 compute-1 podman[255574]: 2025-10-02 12:27:30.27305544 +0000 UTC m=+0.036953024 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:27:30 compute-1 podman[255574]: 2025-10-02 12:27:30.366191591 +0000 UTC m=+0.130089155 container init 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:27:30 compute-1 podman[255574]: 2025-10-02 12:27:30.372162849 +0000 UTC m=+0.136060413 container start 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:27:30 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : New worker (255595) forked
Oct 02 12:27:30 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : Loading success.
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG nova.compute.manager [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.608 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG oslo_concurrency.lockutils [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG nova.compute.manager [req-534bc5e5-a7d8-48b8-ad9e-29e34175f62a req-dff0f943-e537-4239-a1e7-af3f998cd3a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Processing event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.609 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.615 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408050.6156862, 78888fa8-1051-485d-9e51-feaec2648c8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.617 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Resumed (Lifecycle Event)
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.619 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.623 2 INFO nova.virt.libvirt.driver [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance spawned successfully.
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.624 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.876 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.882 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.886 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.887 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.887 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.888 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.888 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.889 2 DEBUG nova.virt.libvirt.driver [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.908 2 DEBUG nova.network.neutron [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:30 compute-1 nova_compute[230518]: 2025-10-02 12:27:30.939 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.061 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Releasing lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.061 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance network_info: |[{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.062 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.062 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Refreshing network info cache for port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.064 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start _get_guest_xml network_info=[{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.068 2 WARNING nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.075 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.075 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.078 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.079 2 DEBUG nova.virt.libvirt.host [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.079 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.080 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.081 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.082 2 DEBUG nova.virt.hardware [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.084 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.114 2 INFO nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 11.79 seconds to spawn the instance on the hypervisor.
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.115 2 DEBUG nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:31 compute-1 ceph-mon[80926]: pgmap v1496: 305 pgs: 305 active+clean; 264 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 4.6 MiB/s wr, 203 op/s
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.176 2 INFO nova.compute.manager [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 12.70 seconds to build instance.
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.201 2 DEBUG oslo_concurrency.lockutils [None req-c05ccd6e-46e9-4eb3-ab41-a58252dacf7a 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1910363574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.540 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.567 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:31 compute-1 nova_compute[230518]: 2025-10-02 12:27:31.571 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:27:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3686929686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.003 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:32.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.007 2 DEBUG nova.virt.libvirt.vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_na
me='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:25Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.009 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.010 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.013 2 DEBUG nova.objects.instance [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.042 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <uuid>be855518-90af-4fa9-b969-4a1579934010</uuid>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <name>instance-0000003c</name>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1210201217</nova:name>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:27:31</nova:creationTime>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:user uuid="eff0431e92464c78b780c8365e6e920c">tempest-ImagesOneServerNegativeTestJSON-883313902-project-member</nova:user>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:project uuid="bfd7bec5bd4b4366a96cc55cfe95fcc9">tempest-ImagesOneServerNegativeTestJSON-883313902</nova:project>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <nova:port uuid="5404e3f9-be33-4f3a-b227-2fa3944c6bb7">
Oct 02 12:27:32 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <system>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="serial">be855518-90af-4fa9-b969-4a1579934010</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="uuid">be855518-90af-4fa9-b969-4a1579934010</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </system>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <os>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </os>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <features>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </features>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/be855518-90af-4fa9-b969-4a1579934010_disk">
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </source>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/be855518-90af-4fa9-b969-4a1579934010_disk.config">
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </source>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:27:32 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:d4:5c:32"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <target dev="tap5404e3f9-be"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/console.log" append="off"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <video>
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </video>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:32.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:27:32 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:27:32 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:27:32 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:27:32 compute-1 nova_compute[230518]: </domain>
Oct 02 12:27:32 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Preparing to wait for external event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.054 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.055 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.056 2 DEBUG nova.virt.libvirt.vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',own
er_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:27:25Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.056 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.057 2 DEBUG nova.network.os_vif_util [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.058 2 DEBUG os_vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.061 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.062 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.066 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5404e3f9-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5404e3f9-be, col_values=(('external_ids', {'iface-id': '5404e3f9-be33-4f3a-b227-2fa3944c6bb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:5c:32', 'vm-uuid': 'be855518-90af-4fa9-b969-4a1579934010'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:32 compute-1 NetworkManager[44960]: <info>  [1759408052.0718] manager: (tap5404e3f9-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.079 2 INFO os_vif [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be')
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.126 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.127 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.127 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No VIF found with MAC fa:16:3e:d4:5c:32, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.127 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Using config drive
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.148 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.460 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Creating config drive at /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.467 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphkcla7hf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:32 compute-1 ceph-mon[80926]: pgmap v1497: 305 pgs: 305 active+clean; 293 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 5.7 MiB/s wr, 218 op/s
Oct 02 12:27:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1910363574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3686929686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.602 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphkcla7hf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.640 2 DEBUG nova.storage.rbd_utils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image be855518-90af-4fa9-b969-4a1579934010_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.644 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config be855518-90af-4fa9-b969-4a1579934010_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.741 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updated VIF entry in instance network info cache for port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.742 2 DEBUG nova.network.neutron [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [{"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.761 2 DEBUG nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.761 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.762 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.763 2 DEBUG oslo_concurrency.lockutils [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.763 2 DEBUG nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.763 2 WARNING nova.compute.manager [req-c08600d2-db33-413a-91c1-9440f099bf8f req-10001f27-437a-4478-911d-31c362ebc43c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received unexpected event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with vm_state active and task_state None.
Oct 02 12:27:32 compute-1 nova_compute[230518]: 2025-10-02 12:27:32.779 2 DEBUG oslo_concurrency.lockutils [req-00f61e75-3b93-4e63-8c3b-088d8a71f05d req-0bcf5065-cf39-4bfb-8224-63a12684175b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-be855518-90af-4fa9-b969-4a1579934010" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3350442704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.008 2 DEBUG oslo_concurrency.processutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config be855518-90af-4fa9-b969-4a1579934010_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.009 2 INFO nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deleting local config drive /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010/disk.config because it was imported into RBD.
Oct 02 12:27:35 compute-1 kernel: tap5404e3f9-be: entered promiscuous mode
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.0604] manager: (tap5404e3f9-be): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_controller[129257]: 2025-10-02T12:27:35Z|00269|binding|INFO|Claiming lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for this chassis.
Oct 02 12:27:35 compute-1 ovn_controller[129257]: 2025-10-02T12:27:35Z|00270|binding|INFO|5404e3f9-be33-4f3a-b227-2fa3944c6bb7: Claiming fa:16:3e:d4:5c:32 10.100.0.14
Oct 02 12:27:35 compute-1 systemd-udevd[255739]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:27:35 compute-1 systemd-machined[188247]: New machine qemu-31-instance-0000003c.
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.1094] device (tap5404e3f9-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.1114] device (tap5404e3f9-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:27:35 compute-1 ovn_controller[129257]: 2025-10-02T12:27:35Z|00271|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 ovn-installed in OVS
Oct 02 12:27:35 compute-1 ovn_controller[129257]: 2025-10-02T12:27:35Z|00272|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 up in Southbound
Oct 02 12:27:35 compute-1 systemd[1]: Started Virtual Machine qemu-31-instance-0000003c.
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.169 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:5c:32 10.100.0.14'], port_security=['fa:16:3e:d4:5c:32 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be855518-90af-4fa9-b969-4a1579934010', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5404e3f9-be33-4f3a-b227-2fa3944c6bb7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.170 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 bound to our chassis
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.177 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.187 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1e6116-c36d-4527-97ee-49ffefa60704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.189 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeefd67eb-b1 in ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.192 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeefd67eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb5126a-502f-4827-b064-ea658a35561a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.193 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5831871-d311-4128-864b-8508485975a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.206 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3116ac65-7499-4296-961e-b0a4608663dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8c74516a-078e-4d62-a2d9-144ce28c468a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.258 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8e254b00-e91c-45b9-b4bd-01ffecf49a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.266 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[82e0bc9c-f298-4b9d-8153-68919186247c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.2672] manager: (tapeefd67eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.301 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2c76b17f-184e-4bc5-954a-4f9682a3c2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.304 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3831ecc8-c925-48db-aa97-fc5bdbbbe348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.3303] device (tapeefd67eb-b0): carrier: link connected
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.340 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c0672a7b-2e2e-4888-b1ef-83d9f4de3f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.358 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26e6ebb7-6197-4262-a01a-0057e67faf36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591888, 'reachable_time': 28229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255775, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ceph-mon[80926]: pgmap v1498: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.376 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba922f8-465f-4729-b306-d8a77a9a0aed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:db93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 591888, 'tstamp': 591888}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255776, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.393 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1df4dc27-a18a-447d-8f20-a1b8fb131a32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591888, 'reachable_time': 28229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255777, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.428 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e257a9c-d388-41c8-9692-d570ac315d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.490 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bd6c7d-599d-4267-a890-fa144cd82573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.491 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.492 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeefd67eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.4953] manager: (tapeefd67eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Oct 02 12:27:35 compute-1 kernel: tapeefd67eb-b0: entered promiscuous mode
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeefd67eb-b0, col_values=(('external_ids', {'iface-id': '4a1c64ee-2e43-4924-ad64-0ba8b656d152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_controller[129257]: 2025-10-02T12:27:35Z|00273|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.501 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.512 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c73e274a-abce-4830-a000-268984d884f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.514 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:27:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:35.516 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'env', 'PROCESS_TAG=haproxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:27:35 compute-1 podman[255843]: 2025-10-02 12:27:35.891452478 +0000 UTC m=+0.055643412 container create 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.9152] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Oct 02 12:27:35 compute-1 nova_compute[230518]: 2025-10-02 12:27:35.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:35 compute-1 NetworkManager[44960]: <info>  [1759408055.9160] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Oct 02 12:27:35 compute-1 podman[255843]: 2025-10-02 12:27:35.868389153 +0000 UTC m=+0.032580107 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:27:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:27:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:36.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:27:36 compute-1 systemd[1]: Started libpod-conmon-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope.
Oct 02 12:27:36 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:27:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/464fc771c237ea3199774af778ab94d5f0cccedca549bc0b6bbf75459dd17206/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:36 compute-1 ovn_controller[129257]: 2025-10-02T12:27:36Z|00274|binding|INFO|Releasing lport 7f4ed3f3-7aae-4781-9951-b5f99eb58474 from this chassis (sb_readonly=0)
Oct 02 12:27:36 compute-1 ovn_controller[129257]: 2025-10-02T12:27:36Z|00275|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct 02 12:27:36 compute-1 podman[255843]: 2025-10-02 12:27:36.091815323 +0000 UTC m=+0.256006307 container init 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:27:36 compute-1 podman[255843]: 2025-10-02 12:27:36.098594846 +0000 UTC m=+0.262785800 container start 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:36 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : New worker (255870) forked
Oct 02 12:27:36 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : Loading success.
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.226 2 DEBUG nova.compute.manager [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.226 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG oslo_concurrency.lockutils [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.227 2 DEBUG nova.compute.manager [req-9248dbfa-c6c3-4bc5-87e1-e5616db70b57 req-6ea502ba-1a1e-486c-90ee-818a0aa01a15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Processing event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.555 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.5545697, be855518-90af-4fa9-b969-4a1579934010 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.555 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Started (Lifecycle Event)
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.557 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.561 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.564 2 INFO nova.virt.libvirt.driver [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance spawned successfully.
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.565 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.587 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.591 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.623 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.624 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.624 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.625 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.625 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.626 2 DEBUG nova.virt.libvirt.driver [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.635 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.635 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.554747, be855518-90af-4fa9-b969-4a1579934010 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.635 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Paused (Lifecycle Event)
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.715 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.720 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408056.5611575, be855518-90af-4fa9-b969-4a1579934010 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.720 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Resumed (Lifecycle Event)
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.790 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.793 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.806 2 INFO nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 11.14 seconds to spawn the instance on the hypervisor.
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.806 2 DEBUG nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.831 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:27:36 compute-1 nova_compute[230518]: 2025-10-02 12:27:36.946 2 INFO nova.compute.manager [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 12.20 seconds to build instance.
Oct 02 12:27:37 compute-1 nova_compute[230518]: 2025-10-02 12:27:37.015 2 DEBUG oslo_concurrency.lockutils [None req-a163ff3f-679f-4c8d-8f8b-666992e33d9c eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:37 compute-1 ceph-mon[80926]: pgmap v1499: 305 pgs: 305 active+clean; 293 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.8 MiB/s wr, 168 op/s
Oct 02 12:27:37 compute-1 nova_compute[230518]: 2025-10-02 12:27:37.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:37 compute-1 nova_compute[230518]: 2025-10-02 12:27:37.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.388 2 DEBUG nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.388 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG oslo_concurrency.lockutils [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.389 2 DEBUG nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:38 compute-1 nova_compute[230518]: 2025-10-02 12:27:38.389 2 WARNING nova.compute.manager [req-cc6228ce-0558-47e7-96f9-f538db226164 req-ad1eb511-2bc2-4ba9-b32a-64d9c0d0f7fe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received unexpected event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with vm_state active and task_state None.
Oct 02 12:27:38 compute-1 ceph-mon[80926]: pgmap v1500: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 149 op/s
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.642632) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059642694, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 645, "num_deletes": 251, "total_data_size": 1016623, "memory_usage": 1028952, "flush_reason": "Manual Compaction"}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059676786, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 670427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35645, "largest_seqno": 36285, "table_properties": {"data_size": 667281, "index_size": 1054, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7438, "raw_average_key_size": 19, "raw_value_size": 661007, "raw_average_value_size": 1699, "num_data_blocks": 47, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408019, "oldest_key_time": 1759408019, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 34209 microseconds, and 2562 cpu microseconds.
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.676850) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 670427 bytes OK
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.676867) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683638) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683692) EVENT_LOG_v1 {"time_micros": 1759408059683682, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.683717) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1013057, prev total WAL file size 1028564, number of live WAL files 2.
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.684440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(654KB)], [66(9479KB)]
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059684485, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10377211, "oldest_snapshot_seqno": -1}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 5898 keys, 8522087 bytes, temperature: kUnknown
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059835571, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 8522087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8483569, "index_size": 22664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 151666, "raw_average_key_size": 25, "raw_value_size": 8378468, "raw_average_value_size": 1420, "num_data_blocks": 906, "num_entries": 5898, "num_filter_entries": 5898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.835934) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8522087 bytes
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.945438) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.6 rd, 56.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.3 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(28.2) write-amplify(12.7) OK, records in: 6409, records dropped: 511 output_compression: NoCompression
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.945488) EVENT_LOG_v1 {"time_micros": 1759408059945462, "job": 40, "event": "compaction_finished", "compaction_time_micros": 151187, "compaction_time_cpu_micros": 19680, "output_level": 6, "num_output_files": 1, "total_output_size": 8522087, "num_input_records": 6409, "num_output_records": 5898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059945985, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408059947612, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.684342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:39 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:27:39.947716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:27:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:27:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:27:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:40 compute-1 podman[255880]: 2025-10-02 12:27:40.826719751 +0000 UTC m=+0.075935300 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:27:40 compute-1 ceph-mon[80926]: pgmap v1501: 305 pgs: 305 active+clean; 309 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Oct 02 12:27:41 compute-1 podman[255898]: 2025-10-02 12:27:41.82618228 +0000 UTC m=+0.083804018 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:27:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.273 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3804469328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.712 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.931 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.931 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.936 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:42 compute-1 nova_compute[230518]: 2025-10-02 12:27:42.936 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.173 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.175 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4255MB free_disk=20.880069732666016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.175 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.176 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:43 compute-1 ceph-mon[80926]: pgmap v1502: 305 pgs: 305 active+clean; 336 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Oct 02 12:27:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3804469328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.366 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 78888fa8-1051-485d-9e51-feaec2648c8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.367 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance be855518-90af-4fa9-b969-4a1579934010 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.368 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.369 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.439 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:27:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:27:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2161945545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.911 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:27:43 compute-1 nova_compute[230518]: 2025-10-02 12:27:43.917 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:27:44 compute-1 nova_compute[230518]: 2025-10-02 12:27:44.006 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:27:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:44.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:44 compute-1 nova_compute[230518]: 2025-10-02 12:27:44.239 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:27:44 compute-1 nova_compute[230518]: 2025-10-02 12:27:44.239 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:44 compute-1 ceph-mon[80926]: pgmap v1503: 305 pgs: 305 active+clean; 339 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.234 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.235 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.236 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.821 2 DEBUG nova.compute.manager [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.823 2 DEBUG nova.compute.manager [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.823 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.824 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:45 compute-1 nova_compute[230518]: 2025-10-02 12:27:45.825 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2182550942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2161945545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:46.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-1 ceph-mon[80926]: pgmap v1504: 305 pgs: 305 active+clean; 347 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 152 op/s
Oct 02 12:27:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/36665426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1623385835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:47.628 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:47.629 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.819 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:47 compute-1 nova_compute[230518]: 2025-10-02 12:27:47.820 2 DEBUG nova.network.neutron [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:47 compute-1 ovn_controller[129257]: 2025-10-02T12:27:47Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:de:41 10.100.0.8
Oct 02 12:27:47 compute-1 ovn_controller[129257]: 2025-10-02T12:27:47Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:de:41 10.100.0.8
Oct 02 12:27:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:48.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:48 compute-1 nova_compute[230518]: 2025-10-02 12:27:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:48 compute-1 nova_compute[230518]: 2025-10-02 12:27:48.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:27:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:48 compute-1 ceph-mon[80926]: pgmap v1505: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 139 op/s
Oct 02 12:27:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3220107927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/982489555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:49 compute-1 nova_compute[230518]: 2025-10-02 12:27:49.086 2 DEBUG oslo_concurrency.lockutils [req-e52a75ca-c182-4ee5-9e59-0c99b422076b req-ab6c38ea-e90c-45f4-948b-050469a157c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:50.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:50 compute-1 nova_compute[230518]: 2025-10-02 12:27:50.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:50.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:50 compute-1 nova_compute[230518]: 2025-10-02 12:27:50.475 2 DEBUG nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:27:50 compute-1 nova_compute[230518]: 2025-10-02 12:27:50.583 2 INFO nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] instance snapshotting
Oct 02 12:27:50 compute-1 ceph-mon[80926]: pgmap v1506: 305 pgs: 305 active+clean; 357 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 124 op/s
Oct 02 12:27:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/561522923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:27:50 compute-1 podman[255971]: 2025-10-02 12:27:50.866206143 +0000 UTC m=+0.107817284 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:27:50 compute-1 podman[255970]: 2025-10-02 12:27:50.872718298 +0000 UTC m=+0.118837231 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:27:50 compute-1 sudo[256002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:50 compute-1 sudo[256002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:50 compute-1 sudo[256002]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-1 sudo[256035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:27:51 compute-1 sudo[256035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:51 compute-1 sudo[256035]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-1 nova_compute[230518]: 2025-10-02 12:27:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:51 compute-1 sudo[256060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:27:51 compute-1 sudo[256060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:51 compute-1 sudo[256060]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:51 compute-1 sudo[256085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:27:51 compute-1 sudo[256085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:27:51 compute-1 nova_compute[230518]: 2025-10-02 12:27:51.382 2 WARNING nova.compute.manager [None req-f41481db-a81c-4e3a-b383-0b066ac56f0b eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Image not found during snapshot: nova.exception.ImageNotFound: Image a1e5194e-ee54-41db-883e-1f37efee5068 could not be found.
Oct 02 12:27:51 compute-1 sudo[256085]: pam_unix(sudo:session): session closed for user root
Oct 02 12:27:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:52.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:27:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.588 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.588 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.589 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:27:52 compute-1 nova_compute[230518]: 2025-10-02 12:27:52.589 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:53 compute-1 ceph-mon[80926]: pgmap v1507: 305 pgs: 305 active+clean; 363 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 158 op/s
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:27:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:27:53 compute-1 ovn_controller[129257]: 2025-10-02T12:27:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:5c:32 10.100.0.14
Oct 02 12:27:53 compute-1 ovn_controller[129257]: 2025-10-02T12:27:53Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:5c:32 10.100.0.14
Oct 02 12:27:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:54.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.765 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.765 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.766 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.767 2 INFO nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Terminating instance
Oct 02 12:27:54 compute-1 nova_compute[230518]: 2025-10-02 12:27:54.768 2 DEBUG nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:27:55 compute-1 ceph-mon[80926]: pgmap v1508: 305 pgs: 305 active+clean; 387 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 141 op/s
Oct 02 12:27:55 compute-1 kernel: tap5404e3f9-be (unregistering): left promiscuous mode
Oct 02 12:27:55 compute-1 NetworkManager[44960]: <info>  [1759408075.3162] device (tap5404e3f9-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:27:55 compute-1 ovn_controller[129257]: 2025-10-02T12:27:55Z|00276|binding|INFO|Releasing lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 from this chassis (sb_readonly=0)
Oct 02 12:27:55 compute-1 ovn_controller[129257]: 2025-10-02T12:27:55Z|00277|binding|INFO|Setting lport 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 down in Southbound
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 ovn_controller[129257]: 2025-10-02T12:27:55Z|00278|binding|INFO|Removing iface tap5404e3f9-be ovn-installed in OVS
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct 02 12:27:55 compute-1 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000003c.scope: Consumed 15.027s CPU time.
Oct 02 12:27:55 compute-1 systemd-machined[188247]: Machine qemu-31-instance-0000003c terminated.
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.426 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:5c:32 10.100.0.14'], port_security=['fa:16:3e:d4:5c:32 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be855518-90af-4fa9-b969-4a1579934010', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5404e3f9-be33-4f3a-b227-2fa3944c6bb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.427 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5404e3f9-be33-4f3a-b227-2fa3944c6bb7 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 unbound from our chassis
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.429 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.430 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64a29388-2582-403e-9208-b064603b7620]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.431 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace which is not needed anymore
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.613 2 INFO nova.virt.libvirt.driver [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Instance destroyed successfully.
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.613 2 DEBUG nova.objects.instance [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'resources' on Instance uuid be855518-90af-4fa9-b969-4a1579934010 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:55 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : haproxy version is 2.8.14-c23fe91
Oct 02 12:27:55 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [NOTICE]   (255867) : path to executable is /usr/sbin/haproxy
Oct 02 12:27:55 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [WARNING]  (255867) : Exiting Master process...
Oct 02 12:27:55 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [ALERT]    (255867) : Current worker (255870) exited with code 143 (Terminated)
Oct 02 12:27:55 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[255859]: [WARNING]  (255867) : All workers exited. Exiting... (0)
Oct 02 12:27:55 compute-1 systemd[1]: libpod-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope: Deactivated successfully.
Oct 02 12:27:55 compute-1 podman[256164]: 2025-10-02 12:27:55.643141294 +0000 UTC m=+0.074488765 container died 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:27:55 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd-userdata-shm.mount: Deactivated successfully.
Oct 02 12:27:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-464fc771c237ea3199774af778ab94d5f0cccedca549bc0b6bbf75459dd17206-merged.mount: Deactivated successfully.
Oct 02 12:27:55 compute-1 podman[256164]: 2025-10-02 12:27:55.685527938 +0000 UTC m=+0.116875419 container cleanup 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.685 2 DEBUG nova.compute.manager [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.685 2 DEBUG nova.compute.manager [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing instance network info cache due to event network-changed-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.686 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:27:55 compute-1 systemd[1]: libpod-conmon-3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd.scope: Deactivated successfully.
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.757 2 DEBUG nova.virt.libvirt.vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1210201217',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1210201217',id=60,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-yyo20tgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:51Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=be855518-90af-4fa9-b969-4a1579934010,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.757 2 DEBUG nova.network.os_vif_util [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "address": "fa:16:3e:d4:5c:32", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5404e3f9-be", "ovs_interfaceid": "5404e3f9-be33-4f3a-b227-2fa3944c6bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.758 2 DEBUG nova.network.os_vif_util [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.758 2 DEBUG os_vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5404e3f9-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.768 2 INFO os_vif [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:5c:32,bridge_name='br-int',has_traffic_filtering=True,id=5404e3f9-be33-4f3a-b227-2fa3944c6bb7,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5404e3f9-be')
Oct 02 12:27:55 compute-1 podman[256205]: 2025-10-02 12:27:55.769648895 +0000 UTC m=+0.055551709 container remove 3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.781 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64bcc710-d2b5-4eb9-a3ee-773e54d84c61]: (4, ('Thu Oct  2 12:27:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd)\n3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd\nThu Oct  2 12:27:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd)\n3236060feae6b034e02fdd32b0325b9b80dcb8161d23c9f8595868b5ccc47edd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.783 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[498325bd-c73a-4c7f-ba05-5e048006c11a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.784 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:55 compute-1 kernel: tapeefd67eb-b0: left promiscuous mode
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 nova_compute[230518]: 2025-10-02 12:27:55.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.819 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[854409f4-85e6-480d-b282-c255b0ad9982]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.860 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb630631-6837-4b50-9960-c5808482fad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27dec628-9002-4070-bfdd-353181e8444e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a662e4e-8d04-4fec-a98b-d014e443dcc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591881, 'reachable_time': 42569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256238, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.889 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:27:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:55.889 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9ccdd6-bd9a-431e-9f2a-8d87cd716cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:55 compute-1 systemd[1]: run-netns-ovnmeta\x2deefd67eb\x2db4b6\x2d4162\x2dbbdd\x2d0cce7dbdb491.mount: Deactivated successfully.
Oct 02 12:27:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:56.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:56.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.375 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:56 compute-1 ceph-mon[80926]: pgmap v1509: 305 pgs: 305 active+clean; 396 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 4.2 MiB/s wr, 149 op/s
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.740 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.740 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.741 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.741 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Refreshing network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:27:56 compute-1 nova_compute[230518]: 2025-10-02 12:27:56.742 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:57.631 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.893 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.894 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.895 2 INFO nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Terminating instance
Oct 02 12:27:57 compute-1 nova_compute[230518]: 2025-10-02 12:27:57.895 2 DEBUG nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:27:58 compute-1 kernel: tap4b915a8b-b3 (unregistering): left promiscuous mode
Oct 02 12:27:58 compute-1 NetworkManager[44960]: <info>  [1759408078.0438] device (tap4b915a8b-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:27:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:27:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:27:58.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-1 ovn_controller[129257]: 2025-10-02T12:27:58Z|00279|binding|INFO|Releasing lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e from this chassis (sb_readonly=0)
Oct 02 12:27:58 compute-1 ovn_controller[129257]: 2025-10-02T12:27:58Z|00280|binding|INFO|Setting lport 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e down in Southbound
Oct 02 12:27:58 compute-1 ovn_controller[129257]: 2025-10-02T12:27:58Z|00281|binding|INFO|Removing iface tap4b915a8b-b3 ovn-installed in OVS
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.078 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:de:41 10.100.0.8'], port_security=['fa:16:3e:db:de:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '78888fa8-1051-485d-9e51-feaec2648c8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c20ce9073494490588607984318667f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '389e89f9-e63f-4b21-acd4-c43535d1012a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4006d2de-ae16-44cd-90c1-7beb9f87e38f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:27:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.079 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e in datapath a920a635-8cc8-4478-b2dc-4d6329a5abef unbound from our chassis
Oct 02 12:27:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.081 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a920a635-8cc8-4478-b2dc-4d6329a5abef, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:27:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.082 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e3e0ca-4ab2-491f-ada0-75a4a545fae5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:58.082 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef namespace which is not needed anymore
Oct 02 12:27:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:27:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:27:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:27:58.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:27:58 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Oct 02 12:27:58 compute-1 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003b.scope: Consumed 13.098s CPU time.
Oct 02 12:27:58 compute-1 systemd-machined[188247]: Machine qemu-30-instance-0000003b terminated.
Oct 02 12:27:58 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : haproxy version is 2.8.14-c23fe91
Oct 02 12:27:58 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [NOTICE]   (255593) : path to executable is /usr/sbin/haproxy
Oct 02 12:27:58 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [WARNING]  (255593) : Exiting Master process...
Oct 02 12:27:58 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [ALERT]    (255593) : Current worker (255595) exited with code 143 (Terminated)
Oct 02 12:27:58 compute-1 neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef[255589]: [WARNING]  (255593) : All workers exited. Exiting... (0)
Oct 02 12:27:58 compute-1 systemd[1]: libpod-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope: Deactivated successfully.
Oct 02 12:27:58 compute-1 podman[256262]: 2025-10-02 12:27:58.263804896 +0000 UTC m=+0.098084348 container died 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:27:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:27:58 compute-1 NetworkManager[44960]: <info>  [1759408078.3149] manager: (tap4b915a8b-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.333 2 INFO nova.virt.libvirt.driver [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Instance destroyed successfully.
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.335 2 DEBUG nova.objects.instance [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lazy-loading 'resources' on Instance uuid 78888fa8-1051-485d-9e51-feaec2648c8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.422 2 DEBUG nova.virt.libvirt.vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:27:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-232208417',display_name='tempest-FloatingIPsAssociationTestJSON-server-232208417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-232208417',id=59,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:27:31Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c20ce9073494490588607984318667f5',ramdisk_id='',reservation_id='r-xvp3u8r5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-875651151',owner_user_name='tempest-FloatingIPsAssociationTestJSON-875651151-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:27:31Z,user_data=None,user_id='2b6687fbfb1f484ba99ef93bbb4ffa7e',uuid=78888fa8-1051-485d-9e51-feaec2648c8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG nova.network.os_vif_util [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converting VIF {"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG nova.network.os_vif_util [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.423 2 DEBUG os_vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b915a8b-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.430 2 INFO os_vif [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:de:41,bridge_name='br-int',has_traffic_filtering=True,id=4b915a8b-b3f4-47fe-bb5a-1e712c3d182e,network=Network(a920a635-8cc8-4478-b2dc-4d6329a5abef),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b915a8b-b3')
Oct 02 12:27:58 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659-userdata-shm.mount: Deactivated successfully.
Oct 02 12:27:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-8348c266641d7f0169a9362cbaa2a00eb443af6e6b20ca0bc7f72d3ff7a284ad-merged.mount: Deactivated successfully.
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.587 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.587 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.588 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.588 2 DEBUG oslo_concurrency.lockutils [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.589 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.589 2 DEBUG nova.compute.manager [req-106a1898-7c2e-4e70-b535-ae3d6c4f607c req-4f4e038b-8e74-400d-8e56-3352a6ee8526 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-unplugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:27:58 compute-1 podman[256262]: 2025-10-02 12:27:58.653440846 +0000 UTC m=+0.487720288 container cleanup 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.717 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updated VIF entry in instance network info cache for port 4b915a8b-b3f4-47fe-bb5a-1e712c3d182e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.718 2 DEBUG nova.network.neutron [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [{"id": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "address": "fa:16:3e:db:de:41", "network": {"id": "a920a635-8cc8-4478-b2dc-4d6329a5abef", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-721315678-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c20ce9073494490588607984318667f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b915a8b-b3", "ovs_interfaceid": "4b915a8b-b3f4-47fe-bb5a-1e712c3d182e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.777 2 DEBUG oslo_concurrency.lockutils [req-b5e14939-4ae9-4bce-a3e4-c3b7ca81ffef req-66d2d073-39e3-4b29-a146-aad4e6393a3f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-78888fa8-1051-485d-9e51-feaec2648c8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.892 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.893 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.893 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.894 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-unplugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.895 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.895 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "be855518-90af-4fa9-b969-4a1579934010-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG oslo_concurrency.lockutils [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.896 2 DEBUG nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] No waiting events found dispatching network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:27:58 compute-1 nova_compute[230518]: 2025-10-02 12:27:58.897 2 WARNING nova.compute.manager [req-062e0aa2-4837-4f4b-b0cb-e5d9da5e5ff2 req-8ffdb27c-5eca-42de-a887-12513446e265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received unexpected event network-vif-plugged-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 for instance with vm_state active and task_state deleting.
Oct 02 12:27:59 compute-1 podman[256323]: 2025-10-02 12:27:59.044405978 +0000 UTC m=+0.352057588 container remove 15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:27:59 compute-1 systemd[1]: libpod-conmon-15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659.scope: Deactivated successfully.
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.052 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c3de6a-e04d-4f83-b1a6-2a6de34f0a9c]: (4, ('Thu Oct  2 12:27:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659)\n15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659\nThu Oct  2 12:27:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef (15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659)\n15f60cb19acf3dc22fcc96e929b08e0dece3d1ae536dca1db828c9e09eecb659\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.053 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d9fff6-f3e5-4525-a921-2429d219dac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.054 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa920a635-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:27:59 compute-1 nova_compute[230518]: 2025-10-02 12:27:59.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:59 compute-1 kernel: tapa920a635-80: left promiscuous mode
Oct 02 12:27:59 compute-1 nova_compute[230518]: 2025-10-02 12:27:59.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5183e10-4e9d-4955-a20d-2ff934fccb36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91576ab4-bcf0-4caf-81bd-7235ceaeb96f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.105 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67984301-f547-4da5-8342-721421d4cd8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b99738ef-8e78-4e7a-a31b-510290b3a182]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 591321, 'reachable_time': 21112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256341, 'error': None, 'target': 'ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 systemd[1]: run-netns-ovnmeta\x2da920a635\x2d8cc8\x2d4478\x2db2dc\x2d4d6329a5abef.mount: Deactivated successfully.
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.125 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a920a635-8cc8-4478-b2dc-4d6329a5abef deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:27:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:27:59.126 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c62ce0e-d1d3-4b3e-9e00-efc99a327786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:27:59 compute-1 ceph-mon[80926]: pgmap v1510: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 3.4 MiB/s wr, 145 op/s
Oct 02 12:28:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:00.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.389 2 INFO nova.virt.libvirt.driver [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deleting instance files /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f_del
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.389 2 INFO nova.virt.libvirt.driver [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deletion of /var/lib/nova/instances/78888fa8-1051-485d-9e51-feaec2648c8f_del complete
Oct 02 12:28:00 compute-1 ceph-mon[80926]: pgmap v1511: 305 pgs: 305 active+clean; 405 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 617 KiB/s rd, 2.6 MiB/s wr, 124 op/s
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.861 2 DEBUG nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.861 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.862 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.862 2 DEBUG oslo_concurrency.lockutils [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.863 2 DEBUG nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] No waiting events found dispatching network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.863 2 WARNING nova.compute.manager [req-45be8117-f6bd-42dd-9608-27697ca985da req-af85aea0-7104-42a6-829c-3d5d869faf1b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received unexpected event network-vif-plugged-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e for instance with vm_state active and task_state deleting.
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.872 2 INFO nova.compute.manager [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 2.98 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG oslo.service.loopingcall [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:00 compute-1 nova_compute[230518]: 2025-10-02 12:28:00.873 2 DEBUG nova.network.neutron [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.047 2 INFO nova.virt.libvirt.driver [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deleting instance files /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010_del
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.048 2 INFO nova.virt.libvirt.driver [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Deletion of /var/lib/nova/instances/be855518-90af-4fa9-b969-4a1579934010_del complete
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.316 2 INFO nova.compute.manager [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 6.55 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.317 2 DEBUG oslo.service.loopingcall [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.318 2 DEBUG nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:01 compute-1 nova_compute[230518]: 2025-10-02 12:28:01.318 2 DEBUG nova.network.neutron [-] [instance: be855518-90af-4fa9-b969-4a1579934010] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:02 compute-1 nova_compute[230518]: 2025-10-02 12:28:02.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:02 compute-1 nova_compute[230518]: 2025-10-02 12:28:02.791 2 DEBUG nova.network.neutron [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:02 compute-1 nova_compute[230518]: 2025-10-02 12:28:02.931 2 DEBUG nova.network.neutron [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:02 compute-1 ceph-mon[80926]: pgmap v1512: 305 pgs: 305 active+clean; 328 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Oct 02 12:28:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.348 2 INFO nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] Took 2.03 seconds to deallocate network for instance.
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.527 2 INFO nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Took 2.65 seconds to deallocate network for instance.
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.744 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.745 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:03 compute-1 nova_compute[230518]: 2025-10-02 12:28:03.911 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:04 compute-1 ceph-mon[80926]: pgmap v1513: 305 pgs: 305 active+clean; 247 MiB data, 720 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.263 2 DEBUG oslo_concurrency.processutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.324 2 DEBUG nova.compute.manager [req-91310769-5507-40a7-a0a3-47ce84f51735 req-7e8e928c-93b3-4f88-a7e0-1736ef594878 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: be855518-90af-4fa9-b969-4a1579934010] Received event network-vif-deleted-5404e3f9-be33-4f3a-b227-2fa3944c6bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.325 2 DEBUG nova.compute.manager [req-91310769-5507-40a7-a0a3-47ce84f51735 req-7e8e928c-93b3-4f88-a7e0-1736ef594878 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Received event network-vif-deleted-4b915a8b-b3f4-47fe-bb5a-1e712c3d182e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/908879594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.794 2 DEBUG oslo_concurrency.processutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.800 2 DEBUG nova.compute.provider_tree [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:04 compute-1 nova_compute[230518]: 2025-10-02 12:28:04.917 2 DEBUG nova.scheduler.client.report [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.065 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.068 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/908879594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1435707135' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.299 2 INFO nova.scheduler.client.report [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Deleted allocations for instance be855518-90af-4fa9-b969-4a1579934010
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.323 2 DEBUG oslo_concurrency.processutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.740 2 DEBUG oslo_concurrency.lockutils [None req-3791d599-9efd-446c-b88b-72c4ff4c44db eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "be855518-90af-4fa9-b969-4a1579934010" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4026011726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.790 2 DEBUG oslo_concurrency.processutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.796 2 DEBUG nova.compute.provider_tree [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.841 2 DEBUG nova.scheduler.client.report [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.907 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:05 compute-1 nova_compute[230518]: 2025-10-02 12:28:05.940 2 INFO nova.scheduler.client.report [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Deleted allocations for instance 78888fa8-1051-485d-9e51-feaec2648c8f
Oct 02 12:28:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:28:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:28:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:06.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:06 compute-1 nova_compute[230518]: 2025-10-02 12:28:06.172 2 DEBUG oslo_concurrency.lockutils [None req-4ec5cc8e-1380-4800-9e51-7653f3d0342f 2b6687fbfb1f484ba99ef93bbb4ffa7e c20ce9073494490588607984318667f5 - - default default] Lock "78888fa8-1051-485d-9e51-feaec2648c8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:06 compute-1 ceph-mon[80926]: pgmap v1514: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 711 KiB/s wr, 167 op/s
Oct 02 12:28:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4026011726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:28:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1381430579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:06 compute-1 sudo[256388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:28:06 compute-1 sudo[256388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:06 compute-1 sudo[256388]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:06 compute-1 sudo[256413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:28:06 compute-1 sudo[256413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:28:06 compute-1 sudo[256413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:28:07 compute-1 nova_compute[230518]: 2025-10-02 12:28:07.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:08.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:08.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:08 compute-1 nova_compute[230518]: 2025-10-02 12:28:08.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:09 compute-1 ceph-mon[80926]: pgmap v1515: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 52 KiB/s wr, 133 op/s
Oct 02 12:28:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:10.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:10 compute-1 ceph-mon[80926]: pgmap v1516: 305 pgs: 305 active+clean; 247 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 KiB/s wr, 115 op/s
Oct 02 12:28:10 compute-1 nova_compute[230518]: 2025-10-02 12:28:10.609 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408075.6075075, be855518-90af-4fa9-b969-4a1579934010 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:10 compute-1 nova_compute[230518]: 2025-10-02 12:28:10.609 2 INFO nova.compute.manager [-] [instance: be855518-90af-4fa9-b969-4a1579934010] VM Stopped (Lifecycle Event)
Oct 02 12:28:10 compute-1 nova_compute[230518]: 2025-10-02 12:28:10.641 2 DEBUG nova.compute.manager [None req-025108b5-c8eb-4204-9d3a-b23b39b20862 - - - - - -] [instance: be855518-90af-4fa9-b969-4a1579934010] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:11 compute-1 podman[256438]: 2025-10-02 12:28:11.847705615 +0000 UTC m=+0.085521382 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:28:12 compute-1 podman[256457]: 2025-10-02 12:28:12.027225143 +0000 UTC m=+0.127253064 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:28:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:12 compute-1 nova_compute[230518]: 2025-10-02 12:28:12.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:12 compute-1 ceph-mon[80926]: pgmap v1517: 305 pgs: 305 active+clean; 255 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 826 KiB/s wr, 129 op/s
Oct 02 12:28:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:13 compute-1 nova_compute[230518]: 2025-10-02 12:28:13.332 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408078.3309495, 78888fa8-1051-485d-9e51-feaec2648c8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:13 compute-1 nova_compute[230518]: 2025-10-02 12:28:13.332 2 INFO nova.compute.manager [-] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] VM Stopped (Lifecycle Event)
Oct 02 12:28:13 compute-1 nova_compute[230518]: 2025-10-02 12:28:13.354 2 DEBUG nova.compute.manager [None req-f49e83be-0622-42c4-b558-30dc1d764d1e - - - - - -] [instance: 78888fa8-1051-485d-9e51-feaec2648c8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:13 compute-1 nova_compute[230518]: 2025-10-02 12:28:13.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.025 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.026 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:14.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.173 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.265 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.266 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.273 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.274 2 INFO nova.compute.claims [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.435 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/253313661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.892 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.898 2 DEBUG nova.compute.provider_tree [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:14 compute-1 nova_compute[230518]: 2025-10-02 12:28:14.984 2 DEBUG nova.scheduler.client.report [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:15 compute-1 ceph-mon[80926]: pgmap v1518: 305 pgs: 305 active+clean; 272 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 12:28:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/253313661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.165 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.166 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.302 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.303 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.407 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.481 2 DEBUG nova.policy [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eff0431e92464c78b780c8365e6e920c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.519 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.719 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.720 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.720 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating image(s)
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.747 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.771 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.794 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.796 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.873 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.874 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.875 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.875 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.955 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:15 compute-1 nova_compute[230518]: 2025-10-02 12:28:15.958 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e408d787-b02c-4b28-9af7-b7ae07b54538_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:16 compute-1 ceph-mon[80926]: pgmap v1519: 305 pgs: 305 active+clean; 242 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Oct 02 12:28:16 compute-1 nova_compute[230518]: 2025-10-02 12:28:16.568 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 e408d787-b02c-4b28-9af7-b7ae07b54538_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:16 compute-1 nova_compute[230518]: 2025-10-02 12:28:16.654 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] resizing rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:16 compute-1 nova_compute[230518]: 2025-10-02 12:28:16.775 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Successfully created port: a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.048 2 DEBUG nova.objects.instance [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'migration_context' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.074 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.075 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Ensure instance console log exists: /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.076 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.448 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Successfully updated port: a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.464 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.465 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquired lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.466 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2927939317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.675 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG nova.compute.manager [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-changed-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG nova.compute.manager [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Refreshing instance network info cache due to event network-changed-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:17 compute-1 nova_compute[230518]: 2025-10-02 12:28:17.766 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:18.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:18 compute-1 nova_compute[230518]: 2025-10-02 12:28:18.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:18 compute-1 ceph-mon[80926]: pgmap v1520: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.063 2 DEBUG nova.network.neutron [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.197 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Releasing lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance network_info: |[{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.198 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Refreshing network info cache for port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.201 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start _get_guest_xml network_info=[{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.206 2 WARNING nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.210 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.211 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.214 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.215 2 DEBUG nova.virt.libvirt.host [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.216 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.217 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.218 2 DEBUG nova.virt.hardware [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.220 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3400719442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.696 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.721 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:19 compute-1 nova_compute[230518]: 2025-10-02 12:28:19.725 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3400719442' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4210674111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:20.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/569439342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.283 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.286 2 DEBUG nova.virt.libvirt.vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_na
me='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:15Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.287 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.289 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.291 2 DEBUG nova.objects.instance [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.316 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <uuid>e408d787-b02c-4b28-9af7-b7ae07b54538</uuid>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <name>instance-0000003e</name>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1894199701</nova:name>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:28:19</nova:creationTime>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:user uuid="eff0431e92464c78b780c8365e6e920c">tempest-ImagesOneServerNegativeTestJSON-883313902-project-member</nova:user>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:project uuid="bfd7bec5bd4b4366a96cc55cfe95fcc9">tempest-ImagesOneServerNegativeTestJSON-883313902</nova:project>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <nova:port uuid="a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6">
Oct 02 12:28:20 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <system>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="serial">e408d787-b02c-4b28-9af7-b7ae07b54538</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="uuid">e408d787-b02c-4b28-9af7-b7ae07b54538</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </system>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <os>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </os>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <features>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </features>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/e408d787-b02c-4b28-9af7-b7ae07b54538_disk">
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </source>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config">
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </source>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:28:20 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:63:17:b5"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <target dev="tapa3ec450c-ad"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/console.log" append="off"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <video>
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </video>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:28:20 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:28:20 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:28:20 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:28:20 compute-1 nova_compute[230518]: </domain>
Oct 02 12:28:20 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Preparing to wait for external event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.318 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.319 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.319 2 DEBUG nova.virt.libvirt.vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:15Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.320 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.320 2 DEBUG nova.network.os_vif_util [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.321 2 DEBUG os_vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.331 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ec450c-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.332 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3ec450c-ad, col_values=(('external_ids', {'iface-id': 'a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:17:b5', 'vm-uuid': 'e408d787-b02c-4b28-9af7-b7ae07b54538'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:20 compute-1 NetworkManager[44960]: <info>  [1759408100.3349] manager: (tapa3ec450c-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.342 2 INFO os_vif [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad')
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.455 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.455 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.456 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] No VIF found with MAC fa:16:3e:63:17:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.456 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Using config drive
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.485 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.765 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updated VIF entry in instance network info cache for port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.765 2 DEBUG nova.network.neutron [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [{"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:20 compute-1 nova_compute[230518]: 2025-10-02 12:28:20.797 2 DEBUG oslo_concurrency.lockutils [req-eef4ceed-eb00-4704-9cac-2edd50055d5d req-c551859b-767c-46e8-b49b-230b0bea6726 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-e408d787-b02c-4b28-9af7-b7ae07b54538" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.033 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Creating config drive at /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.043 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9koygc7g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:21 compute-1 ceph-mon[80926]: pgmap v1521: 305 pgs: 305 active+clean; 230 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 3.3 MiB/s wr, 103 op/s
Oct 02 12:28:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/569439342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.181 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9koygc7g" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.209 2 DEBUG nova.storage.rbd_utils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] rbd image e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.212 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:21 compute-1 nova_compute[230518]: 2025-10-02 12:28:21.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:21 compute-1 podman[256798]: 2025-10-02 12:28:21.809632325 +0000 UTC m=+0.059803303 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 12:28:21 compute-1 podman[256799]: 2025-10-02 12:28:21.842038255 +0000 UTC m=+0.078058008 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:28:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:22 compute-1 ceph-mon[80926]: pgmap v1522: 305 pgs: 305 active+clean; 208 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct 02 12:28:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4286952671' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.400 2 DEBUG oslo_concurrency.processutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config e408d787-b02c-4b28-9af7-b7ae07b54538_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.401 2 INFO nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deleting local config drive /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538/disk.config because it was imported into RBD.
Oct 02 12:28:22 compute-1 kernel: tapa3ec450c-ad: entered promiscuous mode
Oct 02 12:28:22 compute-1 ovn_controller[129257]: 2025-10-02T12:28:22Z|00282|binding|INFO|Claiming lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for this chassis.
Oct 02 12:28:22 compute-1 ovn_controller[129257]: 2025-10-02T12:28:22Z|00283|binding|INFO|a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6: Claiming fa:16:3e:63:17:b5 10.100.0.12
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.4765] manager: (tapa3ec450c-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.497 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:17:b5 10.100.0.12'], port_security=['fa:16:3e:63:17:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e408d787-b02c-4b28-9af7-b7ae07b54538', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.500 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 bound to our chassis
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.503 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.515 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdc45e6-dfcf-4c1b-b280-ed0de4a05ac5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.516 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeefd67eb-b1 in ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:28:22 compute-1 systemd-machined[188247]: New machine qemu-32-instance-0000003e.
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.518 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeefd67eb-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.518 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cbeba7be-981a-4c3c-ae25-13e07242071c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[21dd0a5c-1460-4e9e-b29c-05636749a291]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.533 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[274e6fa6-69db-4ff8-a80b-5d7b3b782b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 ovn_controller[129257]: 2025-10-02T12:28:22Z|00284|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 ovn-installed in OVS
Oct 02 12:28:22 compute-1 ovn_controller[129257]: 2025-10-02T12:28:22Z|00285|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 up in Southbound
Oct 02 12:28:22 compute-1 systemd[1]: Started Virtual Machine qemu-32-instance-0000003e.
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.553 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c66e799b-9caf-42cc-8e99-a7e0774112f3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 systemd-udevd[256853]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.5734] device (tapa3ec450c-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.5748] device (tapa3ec450c-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.584 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a29890ea-db36-4657-bccf-93a9dd1274c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.5899] manager: (tapeefd67eb-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Oct 02 12:28:22 compute-1 systemd-udevd[256857]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d61af137-d5da-4bcf-98ef-efc53bf9d6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.621 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d4b781dc-36de-449e-a831-f9c703687db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.625 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a541e6d8-7014-4ebf-bac5-9b6b8bf451c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.6510] device (tapeefd67eb-b0): carrier: link connected
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.656 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[55b05721-5293-4943-ae3d-e7a68f46efad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.676 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b1bb76-0d13-44b6-bd88-5d2101304bbf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596620, 'reachable_time': 24021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256883, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[daee81dd-da10-4a37-8c10-1f8ade2dde01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:db93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596620, 'tstamp': 596620}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256884, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.714 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1268b5-629d-415e-b131-989b6d6649b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeefd67eb-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:db:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596620, 'reachable_time': 24021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256885, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1603d68a-ede8-4f90-af70-69307878c5f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.807 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af0e658b-1c64-415b-9e76-e6ca3960d8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.809 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeefd67eb-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 NetworkManager[44960]: <info>  [1759408102.8123] manager: (tapeefd67eb-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Oct 02 12:28:22 compute-1 kernel: tapeefd67eb-b0: entered promiscuous mode
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.815 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeefd67eb-b0, col_values=(('external_ids', {'iface-id': '4a1c64ee-2e43-4924-ad64-0ba8b656d152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 ovn_controller[129257]: 2025-10-02T12:28:22Z|00286|binding|INFO|Releasing lport 4a1c64ee-2e43-4924-ad64-0ba8b656d152 from this chassis (sb_readonly=0)
Oct 02 12:28:22 compute-1 nova_compute[230518]: 2025-10-02 12:28:22.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.846 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3098455d-c3b7-4959-a1de-d96c0b87511f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.849 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.pid.haproxy
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID eefd67eb-b4b6-4162-bbdd-0cce7dbdb491
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:28:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:22.850 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'env', 'PROCESS_TAG=haproxy-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eefd67eb-b4b6-4162-bbdd-0cce7dbdb491.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:28:23 compute-1 podman[256959]: 2025-10-02 12:28:23.269781409 +0000 UTC m=+0.060608998 container create 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:28:23 compute-1 systemd[1]: Started libpod-conmon-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope.
Oct 02 12:28:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:23 compute-1 podman[256959]: 2025-10-02 12:28:23.243818743 +0000 UTC m=+0.034646362 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:28:23 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:28:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac381c0e8614aae9b537a6ede1bffe087d0774e8c5a7a70ab53ff54e2364b0f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:28:23 compute-1 podman[256959]: 2025-10-02 12:28:23.357401157 +0000 UTC m=+0.148228786 container init 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:28:23 compute-1 podman[256959]: 2025-10-02 12:28:23.36324538 +0000 UTC m=+0.154072979 container start 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:28:23 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : New worker (256980) forked
Oct 02 12:28:23 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : Loading success.
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.499 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.499 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.500 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.500 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.501 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Processing event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.501 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.502 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.502 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.503 2 DEBUG oslo_concurrency.lockutils [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.503 2 DEBUG nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.504 2 WARNING nova.compute.manager [req-3598fc8f-4d9e-40da-8e63-aa30ca320385 req-62b0b9f3-4edc-451b-a39b-46796df5ced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received unexpected event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with vm_state building and task_state spawning.
Oct 02 12:28:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3000373398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.614 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.613883, e408d787-b02c-4b28-9af7-b7ae07b54538 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.615 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Started (Lifecycle Event)
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.618 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.623 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.628 2 INFO nova.virt.libvirt.driver [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance spawned successfully.
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.628 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.650 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.657 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.663 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.663 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.664 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.664 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.665 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.665 2 DEBUG nova.virt.libvirt.driver [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.707 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.708 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.6141589, e408d787-b02c-4b28-9af7-b7ae07b54538 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.708 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Paused (Lifecycle Event)
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.810 2 INFO nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 8.09 seconds to spawn the instance on the hypervisor.
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.811 2 DEBUG nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.816 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.818 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408103.6217616, e408d787-b02c-4b28-9af7-b7ae07b54538 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.818 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Resumed (Lifecycle Event)
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.942 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.946 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:23 compute-1 nova_compute[230518]: 2025-10-02 12:28:23.999 2 INFO nova.compute.manager [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 9.77 seconds to build instance.
Oct 02 12:28:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:24.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:24 compute-1 nova_compute[230518]: 2025-10-02 12:28:24.394 2 DEBUG oslo_concurrency.lockutils [None req-e61b7648-f69a-4e33-a0af-b47cf76413e4 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:24 compute-1 ceph-mon[80926]: pgmap v1523: 305 pgs: 305 active+clean; 167 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 3.1 MiB/s wr, 149 op/s
Oct 02 12:28:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:28:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4121041882' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:28:25 compute-1 nova_compute[230518]: 2025-10-02 12:28:25.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.925 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:26.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:26.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:27 compute-1 ceph-mon[80926]: pgmap v1524: 305 pgs: 305 active+clean; 160 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 745 KiB/s rd, 1.9 MiB/s wr, 170 op/s
Oct 02 12:28:27 compute-1 nova_compute[230518]: 2025-10-02 12:28:27.411 2 DEBUG nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:27 compute-1 nova_compute[230518]: 2025-10-02 12:28:27.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:27 compute-1 nova_compute[230518]: 2025-10-02 12:28:27.549 2 INFO nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] instance snapshotting
Oct 02 12:28:27 compute-1 nova_compute[230518]: 2025-10-02 12:28:27.856 2 WARNING nova.compute.manager [None req-0f82ea2b-d79e-4c61-96b0-c08adc58b4a0 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Image not found during snapshot: nova.exception.ImageNotFound: Image 79274edd-028c-4cfd-9fc0-5c8ea54a5ee4 could not be found.
Oct 02 12:28:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:28.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:28.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:28 compute-1 ceph-mon[80926]: pgmap v1525: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Oct 02 12:28:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e214 e214: 3 total, 3 up, 3 in
Oct 02 12:28:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:30.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.782 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.783 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.784 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.785 2 INFO nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Terminating instance
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.785 2 DEBUG nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:28:30 compute-1 kernel: tapa3ec450c-ad (unregistering): left promiscuous mode
Oct 02 12:28:30 compute-1 NetworkManager[44960]: <info>  [1759408110.9514] device (tapa3ec450c-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:30 compute-1 ovn_controller[129257]: 2025-10-02T12:28:30Z|00287|binding|INFO|Releasing lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 from this chassis (sb_readonly=0)
Oct 02 12:28:30 compute-1 ovn_controller[129257]: 2025-10-02T12:28:30Z|00288|binding|INFO|Setting lport a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 down in Southbound
Oct 02 12:28:30 compute-1 ovn_controller[129257]: 2025-10-02T12:28:30Z|00289|binding|INFO|Removing iface tapa3ec450c-ad ovn-installed in OVS
Oct 02 12:28:30 compute-1 nova_compute[230518]: 2025-10-02 12:28:30.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct 02 12:28:31 compute-1 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000003e.scope: Consumed 8.314s CPU time.
Oct 02 12:28:31 compute-1 systemd-machined[188247]: Machine qemu-32-instance-0000003e terminated.
Oct 02 12:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.049 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:17:b5 10.100.0.12'], port_security=['fa:16:3e:63:17:b5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e408d787-b02c-4b28-9af7-b7ae07b54538', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bfd7bec5bd4b4366a96cc55cfe95fcc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1274568d-0664-4745-a1c4-36fd447c1a9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f6f5554-f9ad-4301-a295-af1afef2d045, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.050 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 in datapath eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 unbound from our chassis
Oct 02 12:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.051 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.052 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02375aed-9b70-4d7a-aefc-c6b0742e181e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:31.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 namespace which is not needed anymore
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 ceph-mon[80926]: pgmap v1526: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 626 KiB/s wr, 154 op/s
Oct 02 12:28:31 compute-1 ceph-mon[80926]: osdmap e214: 3 total, 3 up, 3 in
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.234 2 INFO nova.virt.libvirt.driver [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Instance destroyed successfully.
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.235 2 DEBUG nova.objects.instance [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lazy-loading 'resources' on Instance uuid e408d787-b02c-4b28-9af7-b7ae07b54538 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.346 2 DEBUG nova.virt.libvirt.vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1894199701',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1894199701',id=62,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:28:23Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bfd7bec5bd4b4366a96cc55cfe95fcc9',ramdisk_id='',reservation_id='r-qc9filwg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-883313902',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-883313902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:28:27Z,user_data=None,user_id='eff0431e92464c78b780c8365e6e920c',uuid=e408d787-b02c-4b28-9af7-b7ae07b54538,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.346 2 DEBUG nova.network.os_vif_util [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converting VIF {"id": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "address": "fa:16:3e:63:17:b5", "network": {"id": "eefd67eb-b4b6-4162-bbdd-0cce7dbdb491", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1891404386-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bfd7bec5bd4b4366a96cc55cfe95fcc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3ec450c-ad", "ovs_interfaceid": "a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.347 2 DEBUG nova.network.os_vif_util [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.348 2 DEBUG os_vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.350 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ec450c-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.355 2 INFO os_vif [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:17:b5,bridge_name='br-int',has_traffic_filtering=True,id=a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6,network=Network(eefd67eb-b4b6-4162-bbdd-0cce7dbdb491),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3ec450c-ad')
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : haproxy version is 2.8.14-c23fe91
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [NOTICE]   (256978) : path to executable is /usr/sbin/haproxy
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [WARNING]  (256978) : Exiting Master process...
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [WARNING]  (256978) : Exiting Master process...
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [ALERT]    (256978) : Current worker (256980) exited with code 143 (Terminated)
Oct 02 12:28:31 compute-1 neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491[256974]: [WARNING]  (256978) : All workers exited. Exiting... (0)
Oct 02 12:28:31 compute-1 systemd[1]: libpod-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope: Deactivated successfully.
Oct 02 12:28:31 compute-1 podman[257014]: 2025-10-02 12:28:31.39063313 +0000 UTC m=+0.220231480 container died 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.646 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.647 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.647 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.648 2 DEBUG oslo_concurrency.lockutils [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.649 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:31 compute-1 nova_compute[230518]: 2025-10-02 12:28:31.650 2 DEBUG nova.compute.manager [req-a841b5bd-5767-4445-a105-fb42eced73eb req-f5b784fd-c814-4bee-9d90-9cca7f145994 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-unplugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:28:31 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682-userdata-shm.mount: Deactivated successfully.
Oct 02 12:28:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-ac381c0e8614aae9b537a6ede1bffe087d0774e8c5a7a70ab53ff54e2364b0f9-merged.mount: Deactivated successfully.
Oct 02 12:28:32 compute-1 podman[257014]: 2025-10-02 12:28:32.051042271 +0000 UTC m=+0.880640611 container cleanup 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:28:32 compute-1 systemd[1]: libpod-conmon-59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682.scope: Deactivated successfully.
Oct 02 12:28:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:32.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:32.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:32 compute-1 podman[257070]: 2025-10-02 12:28:32.193185013 +0000 UTC m=+0.103544629 container remove 59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.202 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[00de373c-fca5-4d3c-b80f-b7cfa13d8728]: (4, ('Thu Oct  2 12:28:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682)\n59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682\nThu Oct  2 12:28:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 (59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682)\n59bd57da184bd82f8a793a83d650a4af0bf1a42b6bbe1f99c591a84fe3c34682\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.205 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ab272cd9-f269-4820-8aa1-f1c88f9e02ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.206 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeefd67eb-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:32 compute-1 nova_compute[230518]: 2025-10-02 12:28:32.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:32 compute-1 kernel: tapeefd67eb-b0: left promiscuous mode
Oct 02 12:28:32 compute-1 nova_compute[230518]: 2025-10-02 12:28:32.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.236 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f58fe74-aa8c-495c-b6f2-4b382c3afac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.268 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8957af-2411-4ae7-9edc-91efcaf2f72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.270 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[587772f5-7178-469e-b412-d130010d80c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[117f5233-1fc0-4535-bdf5-ce132ffd46b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596613, 'reachable_time': 20590, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257084, 'error': None, 'target': 'ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 systemd[1]: run-netns-ovnmeta\x2deefd67eb\x2db4b6\x2d4162\x2dbbdd\x2d0cce7dbdb491.mount: Deactivated successfully.
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.293 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eefd67eb-b4b6-4162-bbdd-0cce7dbdb491 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:28:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:32.293 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6750b3-134a-44ba-b777-eccd61608c69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:28:32 compute-1 ceph-mon[80926]: pgmap v1528: 305 pgs: 305 active+clean; 88 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 161 op/s
Oct 02 12:28:32 compute-1 nova_compute[230518]: 2025-10-02 12:28:32.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:34.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.136 2 DEBUG nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG oslo_concurrency.lockutils [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.137 2 DEBUG nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] No waiting events found dispatching network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:28:34 compute-1 nova_compute[230518]: 2025-10-02 12:28:34.137 2 WARNING nova.compute.manager [req-4bae97f0-6a08-44bc-852a-9d3f747e035b req-88a2b7c6-30dd-4df4-bfdd-1592244a3928 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received unexpected event network-vif-plugged-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 for instance with vm_state active and task_state deleting.
Oct 02 12:28:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:34.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:34 compute-1 ceph-mon[80926]: pgmap v1529: 305 pgs: 305 active+clean; 88 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 136 op/s
Oct 02 12:28:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e215 e215: 3 total, 3 up, 3 in
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.068 2 INFO nova.virt.libvirt.driver [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deleting instance files /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538_del
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.069 2 INFO nova.virt.libvirt.driver [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deletion of /var/lib/nova/instances/e408d787-b02c-4b28-9af7-b7ae07b54538_del complete
Oct 02 12:28:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:36.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.116 2 INFO nova.compute.manager [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 5.33 seconds to destroy the instance on the hypervisor.
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.117 2 DEBUG oslo.service.loopingcall [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.118 2 DEBUG nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.118 2 DEBUG nova.network.neutron [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:28:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:36.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:36 compute-1 nova_compute[230518]: 2025-10-02 12:28:36.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:37 compute-1 ceph-mon[80926]: pgmap v1530: 305 pgs: 305 active+clean; 72 MiB data, 577 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 17 KiB/s wr, 99 op/s
Oct 02 12:28:37 compute-1 ceph-mon[80926]: osdmap e215: 3 total, 3 up, 3 in
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.448 2 DEBUG nova.network.neutron [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.466 2 INFO nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Took 1.35 seconds to deallocate network for instance.
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.562 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.563 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.639 2 DEBUG oslo_concurrency.processutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:37 compute-1 nova_compute[230518]: 2025-10-02 12:28:37.705 2 DEBUG nova.compute.manager [req-3007abfc-a5a7-4a36-a40e-feca216822a6 req-4c817e2c-34a1-4cf0-aaea-013df145f3ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Received event network-vif-deleted-a3ec450c-ad1e-47b3-9a0e-3c9b1a2460c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184234749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.073 2 DEBUG oslo_concurrency.processutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.084 2 DEBUG nova.compute.provider_tree [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.108 2 DEBUG nova.scheduler.client.report [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:38.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.133 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:38.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.171 2 INFO nova.scheduler.client.report [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Deleted allocations for instance e408d787-b02c-4b28-9af7-b7ae07b54538
Oct 02 12:28:38 compute-1 nova_compute[230518]: 2025-10-02 12:28:38.234 2 DEBUG oslo_concurrency.lockutils [None req-247c640b-6147-4695-ba18-c7d87fc51bc8 eff0431e92464c78b780c8365e6e920c bfd7bec5bd4b4366a96cc55cfe95fcc9 - - default default] Lock "e408d787-b02c-4b28-9af7-b7ae07b54538" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:38 compute-1 ceph-mon[80926]: pgmap v1532: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 319 KiB/s rd, 5.5 KiB/s wr, 84 op/s
Oct 02 12:28:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1184234749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:40.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:41 compute-1 ceph-mon[80926]: pgmap v1533: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 4.8 KiB/s wr, 66 op/s
Oct 02 12:28:41 compute-1 nova_compute[230518]: 2025-10-02 12:28:41.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 e216: 3 total, 3 up, 3 in
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:42.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:42.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2569227816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.535 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:42 compute-1 podman[257133]: 2025-10-02 12:28:42.65031415 +0000 UTC m=+0.067236167 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:28:42 compute-1 podman[257132]: 2025-10-02 12:28:42.692938941 +0000 UTC m=+0.103097696 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.732 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.733 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4640MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.734 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.734 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:28:42 compute-1 nova_compute[230518]: 2025-10-02 12:28:42.867 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:43 compute-1 ceph-mon[80926]: pgmap v1534: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 4.6 KiB/s wr, 61 op/s
Oct 02 12:28:43 compute-1 ceph-mon[80926]: osdmap e216: 3 total, 3 up, 3 in
Oct 02 12:28:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2569227816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184975754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:43 compute-1 nova_compute[230518]: 2025-10-02 12:28:43.343 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:43 compute-1 nova_compute[230518]: 2025-10-02 12:28:43.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:43 compute-1 nova_compute[230518]: 2025-10-02 12:28:43.373 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:43 compute-1 nova_compute[230518]: 2025-10-02 12:28:43.406 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:28:43 compute-1 nova_compute[230518]: 2025-10-02 12:28:43.407 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:44.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:44 compute-1 nova_compute[230518]: 2025-10-02 12:28:44.402 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:45 compute-1 nova_compute[230518]: 2025-10-02 12:28:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2909641701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-1 ceph-mon[80926]: pgmap v1536: 305 pgs: 305 active+clean; 41 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 4.7 KiB/s wr, 47 op/s
Oct 02 12:28:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1184975754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3965843975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1417505443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:46 compute-1 nova_compute[230518]: 2025-10-02 12:28:46.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:46.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:46.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:46 compute-1 nova_compute[230518]: 2025-10-02 12:28:46.231 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408111.230305, e408d787-b02c-4b28-9af7-b7ae07b54538 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:46 compute-1 nova_compute[230518]: 2025-10-02 12:28:46.231 2 INFO nova.compute.manager [-] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] VM Stopped (Lifecycle Event)
Oct 02 12:28:46 compute-1 nova_compute[230518]: 2025-10-02 12:28:46.311 2 DEBUG nova.compute.manager [None req-2881662f-9c51-434b-832e-e00ba38f3275 - - - - - -] [instance: e408d787-b02c-4b28-9af7-b7ae07b54538] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:46 compute-1 nova_compute[230518]: 2025-10-02 12:28:46.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:46 compute-1 ceph-mon[80926]: pgmap v1537: 305 pgs: 305 active+clean; 60 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.4 MiB/s wr, 58 op/s
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.103 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.103 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.124 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.207 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.207 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.215 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.216 2 INFO nova.compute.claims [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.401 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3140232862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.865 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.872 2 DEBUG nova.compute.provider_tree [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.889 2 DEBUG nova.scheduler.client.report [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2950793471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.931 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:47 compute-1 nova_compute[230518]: 2025-10-02 12:28:47.932 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.005 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.006 2 DEBUG nova.network.neutron [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.024 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.048 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:28:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:48.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.206 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.208 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.209 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating image(s)
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.249 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.293 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.342 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.348 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:48.380 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:28:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:48.381 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.432 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.433 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.434 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.435 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.472 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.478 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.565 2 DEBUG nova.network.neutron [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct 02 12:28:48 compute-1 nova_compute[230518]: 2025-10-02 12:28:48.566 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:28:48 compute-1 ceph-mon[80926]: pgmap v1538: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1154819649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3140232862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1595482658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:50.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:50.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/139225508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/174407595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/556924285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:50 compute-1 nova_compute[230518]: 2025-10-02 12:28:50.616 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:50 compute-1 nova_compute[230518]: 2025-10-02 12:28:50.722 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] resizing rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.255 2 DEBUG nova.objects.instance [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'migration_context' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.288 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.289 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Ensure instance console log exists: /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.290 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.290 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.291 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.293 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.299 2 WARNING nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.304 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.305 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.308 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.309 2 DEBUG nova.virt.libvirt.host [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.311 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.312 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.312 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.313 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.313 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.314 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.314 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.315 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.316 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.316 2 DEBUG nova.virt.hardware [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.321 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:51 compute-1 ceph-mon[80926]: pgmap v1539: 305 pgs: 305 active+clean; 88 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/639640980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.782 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.824 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:51 compute-1 nova_compute[230518]: 2025-10-02 12:28:51.830 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:52.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1091874348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.248 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.251 2 DEBUG nova.objects.instance [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.267 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <uuid>55bd545d-c449-4749-a3f1-b04f0f37e06e</uuid>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <name>instance-00000040</name>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:name>tempest-ListImageFiltersTestJSON-server-2039141641</nova:name>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:28:51</nova:creationTime>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:user uuid="87db7657bb324d029ff3d66f218f1d8d">tempest-ListImageFiltersTestJSON-1602275258-project-member</nova:user>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <nova:project uuid="494736d8288b414094eb0bc6fbaa8cb7">tempest-ListImageFiltersTestJSON-1602275258</nova:project>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <system>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="serial">55bd545d-c449-4749-a3f1-b04f0f37e06e</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="uuid">55bd545d-c449-4749-a3f1-b04f0f37e06e</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </system>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <os>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </os>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <features>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </features>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk">
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config">
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:28:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/console.log" append="off"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <video>
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </video>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:28:52 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:28:52 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:28:52 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:28:52 compute-1 nova_compute[230518]: </domain>
Oct 02 12:28:52 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.326 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.326 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.327 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Using config drive
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.354 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:52 compute-1 ceph-mon[80926]: pgmap v1540: 305 pgs: 305 active+clean; 148 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 4.9 MiB/s wr, 71 op/s
Oct 02 12:28:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/988252884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/639640980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2019023000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1091874348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.575 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Creating config drive at /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.584 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixm3b6lo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.741 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpixm3b6lo" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.789 2 DEBUG nova.storage.rbd_utils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] rbd image 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:52 compute-1 nova_compute[230518]: 2025-10-02 12:28:52.793 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:52 compute-1 podman[257470]: 2025-10-02 12:28:52.807173225 +0000 UTC m=+0.059733041 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:28:52 compute-1 podman[257471]: 2025-10-02 12:28:52.821997411 +0000 UTC m=+0.069893720 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 02 12:28:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.683 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.684 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.711 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:28:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2513697758' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3622040677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.809 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.809 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.819 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.819 2 INFO nova.compute.claims [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:28:53 compute-1 nova_compute[230518]: 2025-10-02 12:28:53.938 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.115 2 DEBUG oslo_concurrency.processutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config 55bd545d-c449-4749-a3f1-b04f0f37e06e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.116 2 INFO nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deleting local config drive /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e/disk.config because it was imported into RBD.
Oct 02 12:28:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:54.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:54 compute-1 systemd-machined[188247]: New machine qemu-33-instance-00000040.
Oct 02 12:28:54 compute-1 systemd[1]: Started Virtual Machine qemu-33-instance-00000040.
Oct 02 12:28:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:28:54.384 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:28:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:28:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3896983913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.428 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.437 2 DEBUG nova.compute.provider_tree [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.457 2 DEBUG nova.scheduler.client.report [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.491 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.492 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.537 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.538 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.561 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.595 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.787 2 DEBUG nova.policy [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28d5425714b04888ba9e6112879fae33', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.883 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.885 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.886 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating image(s)
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.925 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:54 compute-1 nova_compute[230518]: 2025-10-02 12:28:54.965 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.006 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.011 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.108 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.110 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.113 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.114 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.158 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.164 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:55 compute-1 ceph-mon[80926]: pgmap v1541: 305 pgs: 305 active+clean; 209 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 6.7 MiB/s wr, 112 op/s
Oct 02 12:28:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3896983913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.446 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Successfully created port: ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.567 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408135.5668404, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.568 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Resumed (Lifecycle Event)
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.572 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.572 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.576 2 INFO nova.virt.libvirt.driver [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance spawned successfully.
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.576 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.596 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.603 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.610 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.611 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.612 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.612 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.613 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.613 2 DEBUG nova.virt.libvirt.driver [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.640 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.642 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408135.571295, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.642 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Started (Lifecycle Event)
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.673 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.682 2 INFO nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 7.48 seconds to spawn the instance on the hypervisor.
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.682 2 DEBUG nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.690 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.733 2 INFO nova.compute.manager [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 8.56 seconds to build instance.
Oct 02 12:28:55 compute-1 nova_compute[230518]: 2025-10-02 12:28:55.748 2 DEBUG oslo_concurrency.lockutils [None req-2bdbc92a-77bb-42e1-ba54-8a017d97d5db 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:56.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:56.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.577 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.633 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Successfully updated port: ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.634 2 DEBUG nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.642 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquired lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.671 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.688 2 INFO nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] instance snapshotting
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.737 2 DEBUG nova.objects.instance [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ensure instance console log exists: /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.751 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.752 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.752 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:28:56 compute-1 ceph-mon[80926]: pgmap v1542: 305 pgs: 305 active+clean; 208 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 7.1 MiB/s wr, 178 op/s
Oct 02 12:28:56 compute-1 nova_compute[230518]: 2025-10-02 12:28:56.860 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:28:57 compute-1 nova_compute[230518]: 2025-10-02 12:28:57.034 2 INFO nova.virt.libvirt.driver [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Beginning live snapshot process
Oct 02 12:28:57 compute-1 nova_compute[230518]: 2025-10-02 12:28:57.225 2 DEBUG nova.virt.libvirt.imagebackend [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:28:57 compute-1 nova_compute[230518]: 2025-10-02 12:28:57.481 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(ef1ba0cf5dad4f22a36c049e1ae41a07) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:28:57 compute-1 nova_compute[230518]: 2025-10-02 12:28:57.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:28:58 compute-1 nova_compute[230518]: 2025-10-02 12:28:58.054 2 DEBUG nova.compute.manager [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-changed-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:28:58 compute-1 nova_compute[230518]: 2025-10-02 12:28:58.055 2 DEBUG nova.compute.manager [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Refreshing instance network info cache due to event network-changed-ef24dbb6-1a67-4d96-a8a7-c34925dd3699. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:28:58 compute-1 nova_compute[230518]: 2025-10-02 12:28:58.056 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:28:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:28:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:28:58.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:28:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:28:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:28:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:28:58.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:28:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e217 e217: 3 total, 3 up, 3 in
Oct 02 12:28:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1248735020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:28:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.065 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] cloning vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk@ef1ba0cf5dad4f22a36c049e1ae41a07 to images/e63c0054-4f57-471e-a2d8-a929502f4104 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.388 2 DEBUG nova.network.neutron [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:28:59 compute-1 ceph-mon[80926]: pgmap v1543: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.0 MiB/s wr, 294 op/s
Oct 02 12:28:59 compute-1 ceph-mon[80926]: osdmap e217: 3 total, 3 up, 3 in
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.414 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Releasing lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.415 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance network_info: |[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.415 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.416 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Refreshing network info cache for port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.421 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start _get_guest_xml network_info=[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.426 2 WARNING nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.433 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.434 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.438 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.438 2 DEBUG nova.virt.libvirt.host [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.440 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.441 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.441 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.442 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.443 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.443 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.444 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.444 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.445 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.445 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.446 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.446 2 DEBUG nova.virt.hardware [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.450 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.802 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] flattening images/e63c0054-4f57-471e-a2d8-a929502f4104 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:28:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:28:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3081959650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.921 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.956 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:28:59 compute-1 nova_compute[230518]: 2025-10-02 12:28:59.960 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/81590209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.478 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.479 2 DEBUG nova.virt.libvirt.vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:54Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.480 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.481 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.481 2 DEBUG nova.objects.instance [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.497 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <uuid>91d6698b-e355-4477-8680-f469bfd285a4</uuid>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <name>instance-00000043</name>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-912984471</nova:name>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:28:59</nova:creationTime>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <nova:port uuid="ef24dbb6-1a67-4d96-a8a7-c34925dd3699">
Oct 02 12:29:00 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <system>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="serial">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="uuid">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </system>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <os>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </os>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <features>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </features>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk">
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </source>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk.config">
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </source>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:29:00 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:6b:63:f8"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <target dev="tapef24dbb6-1a"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log" append="off"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <video>
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </video>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:29:00 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:29:00 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:29:00 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:29:00 compute-1 nova_compute[230518]: </domain>
Oct 02 12:29:00 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.498 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Preparing to wait for external event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.498 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.499 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.499 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.virt.libvirt.vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:28:54Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.500 2 DEBUG nova.network.os_vif_util [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.501 2 DEBUG os_vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.503 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] removing snapshot(ef1ba0cf5dad4f22a36c049e1ae41a07) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef24dbb6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.505 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef24dbb6-1a, col_values=(('external_ids', {'iface-id': 'ef24dbb6-1a67-4d96-a8a7-c34925dd3699', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:63:f8', 'vm-uuid': '91d6698b-e355-4477-8680-f469bfd285a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:00 compute-1 NetworkManager[44960]: <info>  [1759408140.5078] manager: (tapef24dbb6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.512 2 INFO os_vif [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.561 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.562 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.562 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:6b:63:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.563 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Using config drive
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.590 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.709 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updated VIF entry in instance network info cache for port ef24dbb6-1a67-4d96-a8a7-c34925dd3699. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.710 2 DEBUG nova.network.neutron [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:00 compute-1 nova_compute[230518]: 2025-10-02 12:29:00.786 2 DEBUG oslo_concurrency.lockutils [req-d46abe17-432e-4cb7-90c5-698b993d33d6 req-f72d253b-0f79-40fa-a71a-ed820d2a416b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-91d6698b-e355-4477-8680-f469bfd285a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:00 compute-1 ceph-mon[80926]: pgmap v1545: 305 pgs: 305 active+clean; 207 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.5 MiB/s wr, 335 op/s
Oct 02 12:29:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3081959650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/81590209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:01 compute-1 nova_compute[230518]: 2025-10-02 12:29:01.061 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating config drive at /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config
Oct 02 12:29:01 compute-1 nova_compute[230518]: 2025-10-02 12:29:01.067 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5rgizt2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:01 compute-1 nova_compute[230518]: 2025-10-02 12:29:01.202 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx5rgizt2" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:01 compute-1 nova_compute[230518]: 2025-10-02 12:29:01.241 2 DEBUG nova.storage.rbd_utils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:01 compute-1 nova_compute[230518]: 2025-10-02 12:29:01.245 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1362851634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/93481825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.114 2 DEBUG oslo_concurrency.processutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.870s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.115 2 INFO nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting local config drive /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config because it was imported into RBD.
Oct 02 12:29:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:02 compute-1 kernel: tapef24dbb6-1a: entered promiscuous mode
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.1642] manager: (tapef24dbb6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-1 ovn_controller[129257]: 2025-10-02T12:29:02Z|00290|binding|INFO|Claiming lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for this chassis.
Oct 02 12:29:02 compute-1 ovn_controller[129257]: 2025-10-02T12:29:02Z|00291|binding|INFO|ef24dbb6-1a67-4d96-a8a7-c34925dd3699: Claiming fa:16:3e:6b:63:f8 10.100.0.7
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.176 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.178 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.184 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:02.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab6e91a-f67d-42b4-b193-5a4d7b5e047b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.196 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.198 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.198 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c3fb2e-30fa-46f5-82bb-b6c2231b3d88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.200 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[98e7e842-e5fd-43b4-b61a-8dfb391e459f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.211 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae12eac-97e2-4e0c-8a4b-1e1a158a9f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 systemd-udevd[258055]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:02 compute-1 systemd-machined[188247]: New machine qemu-34-instance-00000043.
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.2298] device (tapef24dbb6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.2319] device (tapef24dbb6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:29:02 compute-1 systemd[1]: Started Virtual Machine qemu-34-instance-00000043.
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.242 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[54d8ad79-b6e2-465d-9ad9-7641c94197d7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-1 ovn_controller[129257]: 2025-10-02T12:29:02Z|00292|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 ovn-installed in OVS
Oct 02 12:29:02 compute-1 ovn_controller[129257]: 2025-10-02T12:29:02Z|00293|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 up in Southbound
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e218 e218: 3 total, 3 up, 3 in
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.275 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6392f4d-85e8-4937-bc84-5a00792b48d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9288c3-645f-4c28-b30c-4a7f0650c2d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.2821] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct 02 12:29:02 compute-1 systemd-udevd[258058]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.311 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b950f679-3a4b-4d82-aa19-76678458f276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.313 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48eb6afd-b5bd-4567-9786-b2c2e3266da2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.3297] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.333 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef67604-11d3-402e-a1d2-12b09c713a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4964a55b-8b51-414c-9565-61371a49d30a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600588, 'reachable_time': 35377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258086, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.358 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[577f3d16-e15c-45cc-b2c5-41f69bf9ad05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600588, 'tstamp': 600588}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258087, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.370 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b99ef7-8d68-44ff-8598-51a95274032d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600588, 'reachable_time': 35377, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258088, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.392 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[74bf6b13-7f01-43be-a4f8-08c7e635702d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.450 2 DEBUG nova.storage.rbd_utils [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(snap) on rbd image(e63c0054-4f57-471e-a2d8-a929502f4104) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4db8e3a4-2c3e-4c1c-a0ab-aebec9bb275f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.458 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.459 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.460 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:02 compute-1 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:29:02 compute-1 NetworkManager[44960]: <info>  [1759408142.4644] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.468 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:02 compute-1 ovn_controller[129257]: 2025-10-02T12:29:02Z|00294|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.497 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.497 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1320fb28-805c-4629-a1c8-1b58e1a29099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.498 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:29:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:02.499 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.770 2 DEBUG nova.compute.manager [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.771 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.772 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.776 2 DEBUG oslo_concurrency.lockutils [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:02 compute-1 nova_compute[230518]: 2025-10-02 12:29:02.776 2 DEBUG nova.compute.manager [req-8576f800-007e-4bae-bb53-bfc1c3a5281b req-8be518a1-b60e-491d-89eb-14a25c6360f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Processing event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:29:02 compute-1 podman[258138]: 2025-10-02 12:29:02.909808793 +0000 UTC m=+0.059780592 container create b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:29:02 compute-1 systemd[1]: Started libpod-conmon-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope.
Oct 02 12:29:02 compute-1 podman[258138]: 2025-10-02 12:29:02.875443402 +0000 UTC m=+0.025415231 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:29:02 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:29:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d61a65ad1c1d3fc987ed3a39b2d4915277d9656d46d566e289ee28d19b267782/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:03 compute-1 podman[258138]: 2025-10-02 12:29:03.017710888 +0000 UTC m=+0.167682687 container init b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:29:03 compute-1 ceph-mon[80926]: pgmap v1546: 305 pgs: 305 active+clean; 227 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.8 MiB/s wr, 469 op/s
Oct 02 12:29:03 compute-1 ceph-mon[80926]: osdmap e218: 3 total, 3 up, 3 in
Oct 02 12:29:03 compute-1 podman[258138]: 2025-10-02 12:29:03.024790061 +0000 UTC m=+0.174761840 container start b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:29:03 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : New worker (258184) forked
Oct 02 12:29:03 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : Loading success.
Oct 02 12:29:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.722 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.724 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.7217352, 91d6698b-e355-4477-8680-f469bfd285a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.725 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Started (Lifecycle Event)
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.730 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.737 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance spawned successfully.
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.738 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.752 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.763 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.767 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.768 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.769 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.769 2 DEBUG nova.virt.libvirt.driver [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.801 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.802 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.7222707, 91d6698b-e355-4477-8680-f469bfd285a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.802 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Paused (Lifecycle Event)
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.832 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.836 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408143.729117, 91d6698b-e355-4477-8680-f469bfd285a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.837 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Resumed (Lifecycle Event)
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.845 2 INFO nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 8.96 seconds to spawn the instance on the hypervisor.
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.846 2 DEBUG nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.868 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.870 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.892 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.904 2 INFO nova.compute.manager [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 10.13 seconds to build instance.
Oct 02 12:29:03 compute-1 nova_compute[230518]: 2025-10-02 12:29:03.926 2 DEBUG oslo_concurrency.lockutils [None req-7abdaff7-1eaf-4288-9da5-c49042a4835f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 e219: 3 total, 3 up, 3 in
Oct 02 12:29:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:04.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:04.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:04 compute-1 ceph-mon[80926]: pgmap v1548: 305 pgs: 305 active+clean; 262 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 4.6 MiB/s wr, 469 op/s
Oct 02 12:29:04 compute-1 ceph-mon[80926]: osdmap e219: 3 total, 3 up, 3 in
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.601 2 DEBUG nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.602 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.602 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.603 2 DEBUG oslo_concurrency.lockutils [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.603 2 DEBUG nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.603 2 WARNING nova.compute.manager [req-05aa3589-bfb5-494a-bf6c-4867878c4afa req-41a04e7b-fecc-44b6-a2f6-7c7322b94b10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state None.
Oct 02 12:29:04 compute-1 ovn_controller[129257]: 2025-10-02T12:29:04Z|00295|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:29:04 compute-1 nova_compute[230518]: 2025-10-02 12:29:04.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:29:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:29:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:05 compute-1 nova_compute[230518]: 2025-10-02 12:29:05.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:29:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1830989897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.127 2 INFO nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Rebuilding instance
Oct 02 12:29:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:06.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:06.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.584 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.608 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.659 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:06 compute-1 sudo[258210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-1 sudo[258210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-1 sudo[258210]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.678 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.698 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.713 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'migration_context' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.728 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:29:06 compute-1 sudo[258235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:29:06 compute-1 nova_compute[230518]: 2025-10-02 12:29:06.733 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:29:06 compute-1 sudo[258235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-1 sudo[258235]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-1 sudo[258260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:06 compute-1 sudo[258260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:06 compute-1 sudo[258260]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:06 compute-1 sudo[258285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:29:06 compute-1 sudo[258285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:07 compute-1 ceph-mon[80926]: pgmap v1550: 305 pgs: 305 active+clean; 273 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 4.6 MiB/s wr, 466 op/s
Oct 02 12:29:07 compute-1 sudo[258285]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:07 compute-1 nova_compute[230518]: 2025-10-02 12:29:07.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:08.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:09 compute-1 ceph-mon[80926]: pgmap v1551: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 4.5 MiB/s wr, 544 op/s
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:29:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:29:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:10 compute-1 nova_compute[230518]: 2025-10-02 12:29:10.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:11 compute-1 ceph-mon[80926]: pgmap v1552: 305 pgs: 305 active+clean; 259 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 3.2 MiB/s wr, 334 op/s
Oct 02 12:29:11 compute-1 nova_compute[230518]: 2025-10-02 12:29:11.579 2 INFO nova.virt.libvirt.driver [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Snapshot image upload complete
Oct 02 12:29:11 compute-1 nova_compute[230518]: 2025-10-02 12:29:11.580 2 INFO nova.compute.manager [None req-e0ac78b1-4a22-4b06-bdce-6606d07f79eb 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 14.89 seconds to snapshot the instance on the hypervisor.
Oct 02 12:29:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:12 compute-1 ceph-mon[80926]: pgmap v1553: 305 pgs: 305 active+clean; 250 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 MiB/s wr, 320 op/s
Oct 02 12:29:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1899312348' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:12 compute-1 nova_compute[230518]: 2025-10-02 12:29:12.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:12 compute-1 podman[258342]: 2025-10-02 12:29:12.836124294 +0000 UTC m=+0.076378044 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:29:12 compute-1 podman[258341]: 2025-10-02 12:29:12.877210026 +0000 UTC m=+0.122715481 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:29:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e220 e220: 3 total, 3 up, 3 in
Oct 02 12:29:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:14 compute-1 ceph-mon[80926]: pgmap v1554: 305 pgs: 305 active+clean; 273 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.6 MiB/s wr, 350 op/s
Oct 02 12:29:14 compute-1 ceph-mon[80926]: osdmap e220: 3 total, 3 up, 3 in
Oct 02 12:29:15 compute-1 nova_compute[230518]: 2025-10-02 12:29:15.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e221 e221: 3 total, 3 up, 3 in
Oct 02 12:29:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:16 compute-1 nova_compute[230518]: 2025-10-02 12:29:16.785 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:29:17 compute-1 ovn_controller[129257]: 2025-10-02T12:29:17Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:63:f8 10.100.0.7
Oct 02 12:29:17 compute-1 ovn_controller[129257]: 2025-10-02T12:29:17Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:63:f8 10.100.0.7
Oct 02 12:29:17 compute-1 ceph-mon[80926]: pgmap v1556: 305 pgs: 305 active+clean; 287 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.0 MiB/s wr, 281 op/s
Oct 02 12:29:17 compute-1 ceph-mon[80926]: osdmap e221: 3 total, 3 up, 3 in
Oct 02 12:29:17 compute-1 nova_compute[230518]: 2025-10-02 12:29:17.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:18.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:18 compute-1 ceph-mon[80926]: pgmap v1558: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.9 MiB/s wr, 305 op/s
Oct 02 12:29:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e222 e222: 3 total, 3 up, 3 in
Oct 02 12:29:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:20.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:20 compute-1 nova_compute[230518]: 2025-10-02 12:29:20.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:20 compute-1 ceph-mon[80926]: pgmap v1559: 305 pgs: 305 active+clean; 305 MiB data, 708 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.2 MiB/s wr, 215 op/s
Oct 02 12:29:20 compute-1 ceph-mon[80926]: osdmap e222: 3 total, 3 up, 3 in
Oct 02 12:29:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e223 e223: 3 total, 3 up, 3 in
Oct 02 12:29:21 compute-1 kernel: tapef24dbb6-1a (unregistering): left promiscuous mode
Oct 02 12:29:21 compute-1 NetworkManager[44960]: <info>  [1759408161.7981] device (tapef24dbb6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:29:21 compute-1 ovn_controller[129257]: 2025-10-02T12:29:21Z|00296|binding|INFO|Releasing lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 from this chassis (sb_readonly=0)
Oct 02 12:29:21 compute-1 ovn_controller[129257]: 2025-10-02T12:29:21Z|00297|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 down in Southbound
Oct 02 12:29:21 compute-1 ovn_controller[129257]: 2025-10-02T12:29:21Z|00298|binding|INFO|Removing iface tapef24dbb6-1a ovn-installed in OVS
Oct 02 12:29:21 compute-1 nova_compute[230518]: 2025-10-02 12:29:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:21 compute-1 nova_compute[230518]: 2025-10-02 12:29:21.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:21 compute-1 nova_compute[230518]: 2025-10-02 12:29:21.819 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance shutdown successfully after 15 seconds.
Oct 02 12:29:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.828 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.830 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:29:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.833 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:29:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.836 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11e82a19-0db9-4d74-bd80-61da2500cf58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:21.837 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:29:21 compute-1 nova_compute[230518]: 2025-10-02 12:29:21.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:21 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 02 12:29:21 compute-1 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000043.scope: Consumed 13.970s CPU time.
Oct 02 12:29:21 compute-1 systemd-machined[188247]: Machine qemu-34-instance-00000043 terminated.
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : haproxy version is 2.8.14-c23fe91
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [NOTICE]   (258175) : path to executable is /usr/sbin/haproxy
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : Exiting Master process...
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : Exiting Master process...
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [ALERT]    (258175) : Current worker (258184) exited with code 143 (Terminated)
Oct 02 12:29:22 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[258153]: [WARNING]  (258175) : All workers exited. Exiting... (0)
Oct 02 12:29:22 compute-1 systemd[1]: libpod-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope: Deactivated successfully.
Oct 02 12:29:22 compute-1 podman[258411]: 2025-10-02 12:29:22.027322854 +0000 UTC m=+0.057789210 container died b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.059 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.
Oct 02 12:29:22 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330-userdata-shm.mount: Deactivated successfully.
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.075 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.
Oct 02 12:29:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-d61a65ad1c1d3fc987ed3a39b2d4915277d9656d46d566e289ee28d19b267782-merged.mount: Deactivated successfully.
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.080 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:05Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.080 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.083 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.084 2 DEBUG os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef24dbb6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:22 compute-1 podman[258411]: 2025-10-02 12:29:22.141765156 +0000 UTC m=+0.172231502 container cleanup b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.142 2 DEBUG nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG oslo_concurrency.lockutils [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.143 2 DEBUG nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.144 2 WARNING nova.compute.manager [req-f186f542-7268-47c3-adeb-bb929c4eb878 req-b3e41d6b-f93d-4166-a34f-87f5327e7982 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state rebuilding.
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:22 compute-1 systemd[1]: libpod-conmon-b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330.scope: Deactivated successfully.
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.150 2 INFO os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')
Oct 02 12:29:22 compute-1 sudo[258431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:29:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:22 compute-1 sudo[258431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:22 compute-1 sudo[258431]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:22.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:22 compute-1 podman[258471]: 2025-10-02 12:29:22.230887869 +0000 UTC m=+0.062177467 container remove b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.237 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[248597e8-2738-4362-8f80-4f012d673146]: (4, ('Thu Oct  2 12:29:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330)\nb12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330\nThu Oct  2 12:29:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (b12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330)\nb12b7238f6af63348ed1465e758325a5fcebc381aaa78cd4e9880822d2ae0330\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.239 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10ed46f6-48f1-43ac-a610-14cbce984532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.240 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:22 compute-1 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.258 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bde8dd-e4f5-4835-9d96-5f321c1b6e6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 sudo[258499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:29:22 compute-1 sudo[258499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:29:22 compute-1 sudo[258499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19de6e18-86d9-45d3-97dd-a0f037e5d662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[110f4472-a63f-4147-ba0c-7eb90bf93c7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.295 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bd26ac-542d-4425-95de-5a6c29fb7439]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600582, 'reachable_time': 23628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258531, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.300 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:29:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:22.300 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d51e6e8c-08c8-4a01-9e59-e20ed62972ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:22 compute-1 ceph-mon[80926]: pgmap v1561: 305 pgs: 305 active+clean; 347 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.1 MiB/s wr, 239 op/s
Oct 02 12:29:22 compute-1 ceph-mon[80926]: osdmap e223: 3 total, 3 up, 3 in
Oct 02 12:29:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:29:22 compute-1 nova_compute[230518]: 2025-10-02 12:29:22.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:23 compute-1 podman[258533]: 2025-10-02 12:29:23.813540478 +0000 UTC m=+0.065810792 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 12:29:23 compute-1 podman[258534]: 2025-10-02 12:29:23.826174785 +0000 UTC m=+0.079039648 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:29:23 compute-1 nova_compute[230518]: 2025-10-02 12:29:23.849 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting instance files /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del
Oct 02 12:29:23 compute-1 nova_compute[230518]: 2025-10-02 12:29:23.850 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deletion of /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del complete
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.091 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.092 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating image(s)
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.127 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.152 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.180 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.183 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:24.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.247 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.249 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.277 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.281 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.318 2 DEBUG nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.319 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.319 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.320 2 DEBUG oslo_concurrency.lockutils [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.320 2 DEBUG nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.320 2 WARNING nova.compute.manager [req-b5b17da1-cd30-4e3b-9270-ef1fdb672a9e req-c661127c-2240-4a24-a706-6c45fe7dec77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:29:24 compute-1 ceph-mon[80926]: pgmap v1563: 305 pgs: 305 active+clean; 405 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 263 op/s
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.843 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 91d6698b-e355-4477-8680-f469bfd285a4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:24 compute-1 nova_compute[230518]: 2025-10-02 12:29:24.940 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] resizing rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.065 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.066 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Ensure instance console log exists: /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.066 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.067 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.067 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.069 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start _get_guest_xml network_info=[{"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.074 2 WARNING nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.084 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.084 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.088 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.088 2 DEBUG nova.virt.libvirt.host [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.089 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.089 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.090 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.091 2 DEBUG nova.virt.hardware [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.092 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.107 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/336891820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/336891820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.553 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.594 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.602 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.860 2 DEBUG nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:25 compute-1 nova_compute[230518]: 2025-10-02 12:29:25.914 2 INFO nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] instance snapshotting
Oct 02 12:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.926 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:29:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1261912510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.069 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.071 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDi
skConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:24Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.071 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.072 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.074 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <uuid>91d6698b-e355-4477-8680-f469bfd285a4</uuid>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <name>instance-00000043</name>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-912984471</nova:name>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:29:25</nova:creationTime>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:user uuid="28d5425714b04888ba9e6112879fae33">tempest-ServerDiskConfigTestJSON-1782236021-project-member</nova:user>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:project uuid="6b5045a3aa3e42e6b66e2ec8c6bb5810">tempest-ServerDiskConfigTestJSON-1782236021</nova:project>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <nova:port uuid="ef24dbb6-1a67-4d96-a8a7-c34925dd3699">
Oct 02 12:29:26 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <system>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="serial">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="uuid">91d6698b-e355-4477-8680-f469bfd285a4</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </system>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <os>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </os>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <features>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </features>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk">
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </source>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/91d6698b-e355-4477-8680-f469bfd285a4_disk.config">
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </source>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:29:26 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:6b:63:f8"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <target dev="tapef24dbb6-1a"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/console.log" append="off"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <video>
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </video>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:29:26 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:29:26 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:29:26 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:29:26 compute-1 nova_compute[230518]: </domain>
Oct 02 12:29:26 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Preparing to wait for external event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.075 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.076 2 DEBUG nova.virt.libvirt.vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDi
skConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:29:24Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.076 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.077 2 DEBUG nova.network.os_vif_util [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.077 2 DEBUG os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.078 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef24dbb6-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef24dbb6-1a, col_values=(('external_ids', {'iface-id': 'ef24dbb6-1a67-4d96-a8a7-c34925dd3699', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:63:f8', 'vm-uuid': '91d6698b-e355-4477-8680-f469bfd285a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:26 compute-1 NetworkManager[44960]: <info>  [1759408166.0842] manager: (tapef24dbb6-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.090 2 INFO os_vif [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.144 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.145 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.145 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] No VIF found with MAC fa:16:3e:6b:63:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.146 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Using config drive
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.175 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.188 2 INFO nova.virt.libvirt.driver [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Beginning live snapshot process
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.193 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:26.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.227 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'keypairs' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.323 2 DEBUG nova.virt.libvirt.imagebackend [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.579 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Creating config drive at /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.588 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uyzws4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.635 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(0ffcb9a4a0ec4dec811b3b8306762f4f) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:29:26 compute-1 ceph-mon[80926]: pgmap v1564: 305 pgs: 305 active+clean; 373 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 8.1 MiB/s wr, 174 op/s
Oct 02 12:29:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1261912510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.742 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6uyzws4e" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.783 2 DEBUG nova.storage.rbd_utils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] rbd image 91d6698b-e355-4477-8680-f469bfd285a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:29:26 compute-1 nova_compute[230518]: 2025-10-02 12:29:26.788 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.236 2 DEBUG oslo_concurrency.processutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config 91d6698b-e355-4477-8680-f469bfd285a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.238 2 INFO nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting local config drive /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4/disk.config because it was imported into RBD.
Oct 02 12:29:27 compute-1 kernel: tapef24dbb6-1a: entered promiscuous mode
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.3128] manager: (tapef24dbb6-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Oct 02 12:29:27 compute-1 ovn_controller[129257]: 2025-10-02T12:29:27Z|00299|binding|INFO|Claiming lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for this chassis.
Oct 02 12:29:27 compute-1 ovn_controller[129257]: 2025-10-02T12:29:27Z|00300|binding|INFO|ef24dbb6-1a67-4d96-a8a7-c34925dd3699: Claiming fa:16:3e:6b:63:f8 10.100.0.7
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.324 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.326 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 bound to our chassis
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:27 compute-1 systemd-udevd[258925]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.343 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ac115b-8512-4b2f-a335-7a47a81138a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.345 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape21cd6a6-f1 in ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.347 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape21cd6a6-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a133a4-6bb0-4fc5-9053-16cc2e398b6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_controller[129257]: 2025-10-02T12:29:27Z|00301|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 ovn-installed in OVS
Oct 02 12:29:27 compute-1 ovn_controller[129257]: 2025-10-02T12:29:27Z|00302|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 up in Southbound
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1b4996-51aa-4c92-a160-6f2b1e23ccb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 systemd-machined[188247]: New machine qemu-35-instance-00000043.
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.366 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8c80547b-4b17-4d4d-a19d-2ef282ac3e38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.3673] device (tapef24dbb6-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.3681] device (tapef24dbb6-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:29:27 compute-1 systemd[1]: Started Virtual Machine qemu-35-instance-00000043.
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.396 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5015ee75-86ba-479f-86f1-008d2013e694]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.432 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[49e652e2-235a-4601-9ba2-82d793f98fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e67b97-cdd7-4510-803b-e395bf5158ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.4395] manager: (tape21cd6a6-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/140)
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.486 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa487d2a-0c69-44b2-9440-08b6aa09cad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.490 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb395d0-a8f9-4551-9ae4-740f43021298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.5267] device (tape21cd6a6-f0): carrier: link connected
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.537 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c74d62d4-0708-460d-bb06-874e211e0d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.564 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc0d4b0-c1c2-480d-b4e5-371f01f22f8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603108, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258959, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.589 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a532842c-8317-4a84-bf70-818d1a71552d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe84:30ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603108, 'tstamp': 603108}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258960, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.592 2 DEBUG nova.compute.manager [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.592 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.593 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.593 2 DEBUG oslo_concurrency.lockutils [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.594 2 DEBUG nova.compute.manager [req-b8009e88-3bbd-46da-b950-18075d866fde req-9a624a6e-3965-474e-a198-33c6748b5969 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Processing event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b804f1-bc4a-404a-8489-faa25f27eec6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape21cd6a6-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:84:30:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603108, 'reachable_time': 34127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258961, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.664 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cf1209-0b0e-4b44-9211-312bab487a60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b31e7fa9-9c07-4ea7-8fd5-8c159e0ab111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.726 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape21cd6a6-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:27 compute-1 kernel: tape21cd6a6-f0: entered promiscuous mode
Oct 02 12:29:27 compute-1 NetworkManager[44960]: <info>  [1759408167.7296] manager: (tape21cd6a6-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.733 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape21cd6a6-f0, col_values=(('external_ids', {'iface-id': '155c8aeb-2b8a-439c-8558-741aa183fa54'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:27 compute-1 ovn_controller[129257]: 2025-10-02T12:29:27Z|00303|binding|INFO|Releasing lport 155c8aeb-2b8a-439c-8558-741aa183fa54 from this chassis (sb_readonly=0)
Oct 02 12:29:27 compute-1 nova_compute[230518]: 2025-10-02 12:29:27.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.766 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.767 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b719e8-2c40-49e8-9dec-bf05e55a7949]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.768 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.pid.haproxy
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID e21cd6a6-f7fd-48ec-8f87-bbcc167f5711
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:29:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:27.769 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'env', 'PROCESS_TAG=haproxy-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e21cd6a6-f7fd-48ec-8f87-bbcc167f5711.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:29:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e224 e224: 3 total, 3 up, 3 in
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.096 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] cloning vms/55bd545d-c449-4749-a3f1-b04f0f37e06e_disk@0ffcb9a4a0ec4dec811b3b8306762f4f to images/d0dced0b-1a65-4528-9713-633878d7128b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:29:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:28.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:28 compute-1 podman[259008]: 2025-10-02 12:29:28.201926053 +0000 UTC m=+0.065953626 container create f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:29:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:28.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:28 compute-1 podman[259008]: 2025-10-02 12:29:28.164858326 +0000 UTC m=+0.028885889 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.263 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] flattening images/d0dced0b-1a65-4528-9713-633878d7128b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:29:28 compute-1 systemd[1]: Started libpod-conmon-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope.
Oct 02 12:29:28 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:29:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab9c1cc5c842b2f1033e688001ccfde90870661a665762c428a425ee99c5e7a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:29:28 compute-1 podman[259008]: 2025-10-02 12:29:28.312541233 +0000 UTC m=+0.176568797 container init f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:29:28 compute-1 podman[259008]: 2025-10-02 12:29:28.319274635 +0000 UTC m=+0.183302168 container start f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:29:28 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : New worker (259105) forked
Oct 02 12:29:28 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : Loading success.
Oct 02 12:29:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.841 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.842 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 91d6698b-e355-4477-8680-f469bfd285a4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.842 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.8406715, 91d6698b-e355-4477-8680-f469bfd285a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.842 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Started (Lifecycle Event)
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.847 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.851 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance spawned successfully.
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.851 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.873 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.876 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.884 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.885 2 DEBUG nova.virt.libvirt.driver [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.947 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.947 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.84158, 91d6698b-e355-4477-8680-f469bfd285a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.947 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Paused (Lifecycle Event)
Oct 02 12:29:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e225 e225: 3 total, 3 up, 3 in
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.982 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.985 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408168.846325, 91d6698b-e355-4477-8680-f469bfd285a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:28 compute-1 nova_compute[230518]: 2025-10-02 12:29:28.985 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Resumed (Lifecycle Event)
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.010 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.015 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.020 2 DEBUG nova.compute.manager [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.048 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.092 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.093 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.093 2 DEBUG nova.objects.instance [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.193 2 DEBUG oslo_concurrency.lockutils [None req-0539cd0d-faaf-4f99-a82c-4599a06e8e5f 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:29 compute-1 ceph-mon[80926]: pgmap v1565: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 243 op/s
Oct 02 12:29:29 compute-1 ceph-mon[80926]: osdmap e224: 3 total, 3 up, 3 in
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.709 2 DEBUG nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.709 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG oslo_concurrency.lockutils [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.710 2 DEBUG nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:29 compute-1 nova_compute[230518]: 2025-10-02 12:29:29.711 2 WARNING nova.compute.manager [req-868d6390-b15e-4876-b95d-46a0000f4d66 req-5ecdde4d-c5f8-4c75-833a-17dde075b8d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state None.
Oct 02 12:29:30 compute-1 nova_compute[230518]: 2025-10-02 12:29:30.009 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] removing snapshot(0ffcb9a4a0ec4dec811b3b8306762f4f) on rbd image(55bd545d-c449-4749-a3f1-b04f0f37e06e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:29:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:30.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:30.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:30 compute-1 ceph-mon[80926]: osdmap e225: 3 total, 3 up, 3 in
Oct 02 12:29:30 compute-1 ceph-mon[80926]: pgmap v1568: 305 pgs: 305 active+clean; 372 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 596 KiB/s rd, 7.2 MiB/s wr, 155 op/s
Oct 02 12:29:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2314630718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:31 compute-1 nova_compute[230518]: 2025-10-02 12:29:31.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e226 e226: 3 total, 3 up, 3 in
Oct 02 12:29:31 compute-1 nova_compute[230518]: 2025-10-02 12:29:31.573 2 DEBUG nova.storage.rbd_utils [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] creating snapshot(snap) on rbd image(d0dced0b-1a65-4528-9713-633878d7128b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:29:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:32.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:32.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.277 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.278 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.278 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.279 2 INFO nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Terminating instance
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.280 2 DEBUG nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:29:32 compute-1 kernel: tapef24dbb6-1a (unregistering): left promiscuous mode
Oct 02 12:29:32 compute-1 NetworkManager[44960]: <info>  [1759408172.3345] device (tapef24dbb6-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 ovn_controller[129257]: 2025-10-02T12:29:32Z|00304|binding|INFO|Releasing lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 from this chassis (sb_readonly=0)
Oct 02 12:29:32 compute-1 ovn_controller[129257]: 2025-10-02T12:29:32Z|00305|binding|INFO|Setting lport ef24dbb6-1a67-4d96-a8a7-c34925dd3699 down in Southbound
Oct 02 12:29:32 compute-1 ovn_controller[129257]: 2025-10-02T12:29:32Z|00306|binding|INFO|Removing iface tapef24dbb6-1a ovn-installed in OVS
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.354 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:63:f8 10.100.0.7'], port_security=['fa:16:3e:6b:63:f8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '91d6698b-e355-4477-8680-f469bfd285a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b5045a3aa3e42e6b66e2ec8c6bb5810', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9af85e52-bdf0-43fd-9e40-10fd2b6d8a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=378292cc-8e1b-46dd-b2c4-895c151f1253, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ef24dbb6-1a67-4d96-a8a7-c34925dd3699) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.355 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ef24dbb6-1a67-4d96-a8a7-c34925dd3699 in datapath e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 unbound from our chassis
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.356 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18e37939-e8c1-4ffd-b5ac-fb7a788e53f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 namespace which is not needed anymore
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000043.scope: Deactivated successfully.
Oct 02 12:29:32 compute-1 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000043.scope: Consumed 4.886s CPU time.
Oct 02 12:29:32 compute-1 systemd-machined[188247]: Machine qemu-35-instance-00000043 terminated.
Oct 02 12:29:32 compute-1 ceph-mon[80926]: pgmap v1569: 305 pgs: 305 active+clean; 385 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Oct 02 12:29:32 compute-1 ceph-mon[80926]: osdmap e226: 3 total, 3 up, 3 in
Oct 02 12:29:32 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : haproxy version is 2.8.14-c23fe91
Oct 02 12:29:32 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [NOTICE]   (259102) : path to executable is /usr/sbin/haproxy
Oct 02 12:29:32 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [WARNING]  (259102) : Exiting Master process...
Oct 02 12:29:32 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [ALERT]    (259102) : Current worker (259105) exited with code 143 (Terminated)
Oct 02 12:29:32 compute-1 neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711[259062]: [WARNING]  (259102) : All workers exited. Exiting... (0)
Oct 02 12:29:32 compute-1 systemd[1]: libpod-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope: Deactivated successfully.
Oct 02 12:29:32 compute-1 podman[259180]: 2025-10-02 12:29:32.503090263 +0000 UTC m=+0.042776967 container died f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.522 2 INFO nova.virt.libvirt.driver [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Instance destroyed successfully.
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.523 2 DEBUG nova.objects.instance [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lazy-loading 'resources' on Instance uuid 91d6698b-e355-4477-8680-f469bfd285a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:32 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676-userdata-shm.mount: Deactivated successfully.
Oct 02 12:29:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-bab9c1cc5c842b2f1033e688001ccfde90870661a665762c428a425ee99c5e7a-merged.mount: Deactivated successfully.
Oct 02 12:29:32 compute-1 podman[259180]: 2025-10-02 12:29:32.552902381 +0000 UTC m=+0.092589095 container cleanup f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.555 2 DEBUG nova.virt.libvirt.vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:28:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-912984471',display_name='tempest-ServerDiskConfigTestJSON-server-912984471',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-912984471',id=67,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:29:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6b5045a3aa3e42e6b66e2ec8c6bb5810',ramdisk_id='',reservation_id='r-gj0sluw1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1782236021',owner_user_name='tempest-ServerDiskConfigTestJSON-1782236021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:29:29Z,user_data=None,user_id='28d5425714b04888ba9e6112879fae33',uuid=91d6698b-e355-4477-8680-f469bfd285a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.555 2 DEBUG nova.network.os_vif_util [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converting VIF {"id": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "address": "fa:16:3e:6b:63:f8", "network": {"id": "e21cd6a6-f7fd-48ec-8f87-bbcc167f5711", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-757628303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b5045a3aa3e42e6b66e2ec8c6bb5810", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef24dbb6-1a", "ovs_interfaceid": "ef24dbb6-1a67-4d96-a8a7-c34925dd3699", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.556 2 DEBUG nova.network.os_vif_util [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.557 2 DEBUG os_vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.559 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef24dbb6-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.566 2 INFO os_vif [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:63:f8,bridge_name='br-int',has_traffic_filtering=True,id=ef24dbb6-1a67-4d96-a8a7-c34925dd3699,network=Network(e21cd6a6-f7fd-48ec-8f87-bbcc167f5711),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef24dbb6-1a')
Oct 02 12:29:32 compute-1 systemd[1]: libpod-conmon-f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676.scope: Deactivated successfully.
Oct 02 12:29:32 compute-1 podman[259218]: 2025-10-02 12:29:32.629977346 +0000 UTC m=+0.048302821 container remove f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.673 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.673 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG oslo_concurrency.lockutils [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.674 2 DEBUG nova.compute.manager [req-8981a0e1-d845-4f16-bf56-1c9f1985527c req-a10d57fb-87a1-4b4d-85c1-18d8d2794c32 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-unplugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc9f3c5-85f2-4eb8-8a60-fe88c2a16855]: (4, ('Thu Oct  2 12:29:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676)\nf47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676\nThu Oct  2 12:29:32 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 (f47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676)\nf47459a56695e2641efc3fefa5c873de519db3144835b0bfddd0c43391977676\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.675 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3f0492-7fb1-4aa1-8f93-e87c5b91bb96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.676 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape21cd6a6-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 kernel: tape21cd6a6-f0: left promiscuous mode
Oct 02 12:29:32 compute-1 nova_compute[230518]: 2025-10-02 12:29:32.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fc8148-f12e-41c9-b4c3-1ef797d85759]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.742 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4d201059-3bed-47b6-b297-38a4d1f58c7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.744 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e010b7a-034b-4d55-805a-fdfb60d6805c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 e227: 3 total, 3 up, 3 in
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.770 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1a8168-13ed-4d83-ab57-336436dc751b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603098, 'reachable_time': 24431, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259251, 'error': None, 'target': 'ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:32 compute-1 systemd[1]: run-netns-ovnmeta\x2de21cd6a6\x2df7fd\x2d48ec\x2d8f87\x2dbbcc167f5711.mount: Deactivated successfully.
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.775 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e21cd6a6-f7fd-48ec-8f87-bbcc167f5711 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:29:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:32.775 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[882d4d9e-7373-4d04-81d1-6c08b7793911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:29:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.641 2 INFO nova.virt.libvirt.driver [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deleting instance files /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.641 2 INFO nova.virt.libvirt.driver [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deletion of /var/lib/nova/instances/91d6698b-e355-4477-8680-f469bfd285a4_del complete
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.691 2 INFO nova.compute.manager [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.691 2 DEBUG oslo.service.loopingcall [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.691 2 DEBUG nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:29:33 compute-1 nova_compute[230518]: 2025-10-02 12:29:33.692 2 DEBUG nova.network.neutron [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:29:33 compute-1 ceph-mon[80926]: osdmap e227: 3 total, 3 up, 3 in
Oct 02 12:29:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:34.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:34.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.608 2 INFO nova.virt.libvirt.driver [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Snapshot image upload complete
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.609 2 INFO nova.compute.manager [None req-d7d92ff2-8017-48a9-9ad1-0772ee646fe4 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 8.69 seconds to snapshot the instance on the hypervisor.
Oct 02 12:29:34 compute-1 ceph-mon[80926]: pgmap v1572: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 484 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 298 op/s
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.905 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.905 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "91d6698b-e355-4477-8680-f469bfd285a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.906 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.907 2 DEBUG oslo_concurrency.lockutils [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.907 2 DEBUG nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] No waiting events found dispatching network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.907 2 WARNING nova.compute.manager [req-743d60c0-b227-4a7e-a859-ed6885c58546 req-de5a718e-4b14-4f0a-92b5-400f997abf45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received unexpected event network-vif-plugged-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 for instance with vm_state active and task_state deleting.
Oct 02 12:29:34 compute-1 nova_compute[230518]: 2025-10-02 12:29:34.999 2 DEBUG nova.network.neutron [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.050 2 INFO nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Took 1.36 seconds to deallocate network for instance.
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.104 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.105 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.183 2 DEBUG oslo_concurrency.processutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1493501614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.653 2 DEBUG oslo_concurrency.processutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.660 2 DEBUG nova.compute.provider_tree [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.787 2 DEBUG nova.scheduler.client.report [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.865 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:35 compute-1 nova_compute[230518]: 2025-10-02 12:29:35.976 2 INFO nova.scheduler.client.report [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Deleted allocations for instance 91d6698b-e355-4477-8680-f469bfd285a4
Oct 02 12:29:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1493501614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2654852695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:36 compute-1 nova_compute[230518]: 2025-10-02 12:29:36.110 2 DEBUG oslo_concurrency.lockutils [None req-4ce7c07a-4012-4f54-acdc-321378b0dbf9 28d5425714b04888ba9e6112879fae33 6b5045a3aa3e42e6b66e2ec8c6bb5810 - - default default] Lock "91d6698b-e355-4477-8680-f469bfd285a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:36.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:37 compute-1 nova_compute[230518]: 2025-10-02 12:29:37.033 2 DEBUG nova.compute.manager [req-c3974bab-9576-44ee-8c36-4db9cafc684c req-892d3832-f097-40ee-9408-76af9650f416 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Received event network-vif-deleted-ef24dbb6-1a67-4d96-a8a7-c34925dd3699 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:29:37 compute-1 ceph-mon[80926]: pgmap v1573: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 491 MiB data, 804 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 346 op/s
Oct 02 12:29:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1972595771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:37 compute-1 nova_compute[230518]: 2025-10-02 12:29:37.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:37 compute-1 nova_compute[230518]: 2025-10-02 12:29:37.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3268243486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:38.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:38.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e228 e228: 3 total, 3 up, 3 in
Oct 02 12:29:39 compute-1 ceph-mon[80926]: pgmap v1574: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 8.5 MiB/s wr, 321 op/s
Oct 02 12:29:39 compute-1 ceph-mon[80926]: osdmap e228: 3 total, 3 up, 3 in
Oct 02 12:29:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:40.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:29:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:40.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:29:40 compute-1 ceph-mon[80926]: pgmap v1576: 305 pgs: 305 active+clean; 451 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.9 MiB/s wr, 296 op/s
Oct 02 12:29:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:42.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:29:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:42.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:29:42 compute-1 ceph-mon[80926]: pgmap v1577: 305 pgs: 305 active+clean; 475 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 226 KiB/s rd, 1.6 MiB/s wr, 149 op/s
Oct 02 12:29:42 compute-1 nova_compute[230518]: 2025-10-02 12:29:42.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:42 compute-1 nova_compute[230518]: 2025-10-02 12:29:42.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/32196474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3535058869' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:29:43 compute-1 podman[259277]: 2025-10-02 12:29:43.838087168 +0000 UTC m=+0.078184511 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:29:43 compute-1 podman[259276]: 2025-10-02 12:29:43.883626511 +0000 UTC m=+0.126765819 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:29:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e229 e229: 3 total, 3 up, 3 in
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.085 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:44.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:44.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124496307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.662 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.662 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.819 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.820 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4402MB free_disk=20.85568618774414GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.821 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.822 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 55bd545d-c449-4749-a3f1-b04f0f37e06e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:29:44 compute-1 nova_compute[230518]: 2025-10-02 12:29:44.929 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:29:45 compute-1 ceph-mon[80926]: pgmap v1578: 305 pgs: 305 active+clean; 497 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 181 op/s
Oct 02 12:29:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4200825112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-1 ceph-mon[80926]: osdmap e229: 3 total, 3 up, 3 in
Oct 02 12:29:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4124496307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3910498834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:29:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4172126751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:45 compute-1 nova_compute[230518]: 2025-10-02 12:29:45.349 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:29:45 compute-1 nova_compute[230518]: 2025-10-02 12:29:45.356 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:29:45 compute-1 nova_compute[230518]: 2025-10-02 12:29:45.393 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:29:45 compute-1 nova_compute[230518]: 2025-10-02 12:29:45.457 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:29:45 compute-1 nova_compute[230518]: 2025-10-02 12:29:45.458 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4172126751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:46 compute-1 nova_compute[230518]: 2025-10-02 12:29:46.454 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.520 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408172.5193784, 91d6698b-e355-4477-8680-f469bfd285a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.520 2 INFO nova.compute.manager [-] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] VM Stopped (Lifecycle Event)
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.557 2 DEBUG nova.compute.manager [None req-5bd27d72-f72e-4ccc-884f-58c73a5c0753 - - - - - -] [instance: 91d6698b-e355-4477-8680-f469bfd285a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:47 compute-1 ceph-mon[80926]: pgmap v1580: 305 pgs: 305 active+clean; 484 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 150 op/s
Oct 02 12:29:47 compute-1 nova_compute[230518]: 2025-10-02 12:29:47.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:48 compute-1 nova_compute[230518]: 2025-10-02 12:29:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:48 compute-1 nova_compute[230518]: 2025-10-02 12:29:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:48 compute-1 nova_compute[230518]: 2025-10-02 12:29:48.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:29:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:48.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e230 e230: 3 total, 3 up, 3 in
Oct 02 12:29:48 compute-1 ceph-mon[80926]: pgmap v1581: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 226 op/s
Oct 02 12:29:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e231 e231: 3 total, 3 up, 3 in
Oct 02 12:29:50 compute-1 ceph-mon[80926]: osdmap e230: 3 total, 3 up, 3 in
Oct 02 12:29:50 compute-1 ceph-mon[80926]: osdmap e231: 3 total, 3 up, 3 in
Oct 02 12:29:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:50.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:50.837 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:29:50 compute-1 nova_compute[230518]: 2025-10-02 12:29:50.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:50.838 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:29:51 compute-1 ceph-mon[80926]: pgmap v1584: 305 pgs: 305 active+clean; 385 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 34 KiB/s wr, 175 op/s
Oct 02 12:29:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2216246306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1497082779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/103223059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e232 e232: 3 total, 3 up, 3 in
Oct 02 12:29:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:52.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:52 compute-1 ceph-mon[80926]: pgmap v1585: 305 pgs: 305 active+clean; 372 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 30 KiB/s wr, 156 op/s
Oct 02 12:29:52 compute-1 ceph-mon[80926]: osdmap e232: 3 total, 3 up, 3 in
Oct 02 12:29:52 compute-1 nova_compute[230518]: 2025-10-02 12:29:52.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:52 compute-1 nova_compute[230518]: 2025-10-02 12:29:52.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:53 compute-1 nova_compute[230518]: 2025-10-02 12:29:53.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e233 e233: 3 total, 3 up, 3 in
Oct 02 12:29:54 compute-1 nova_compute[230518]: 2025-10-02 12:29:54.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:54 compute-1 nova_compute[230518]: 2025-10-02 12:29:54.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:54 compute-1 podman[259366]: 2025-10-02 12:29:54.812612594 +0000 UTC m=+0.060641620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:29:54 compute-1 podman[259367]: 2025-10-02 12:29:54.832603033 +0000 UTC m=+0.076381905 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:29:55 compute-1 ceph-mon[80926]: pgmap v1587: 305 pgs: 305 active+clean; 316 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 KiB/s wr, 140 op/s
Oct 02 12:29:55 compute-1 ceph-mon[80926]: osdmap e233: 3 total, 3 up, 3 in
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:29:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.279 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.279 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.280 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.280 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:56.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.428 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2737000721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3613187794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.792 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.816 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:56 compute-1 nova_compute[230518]: 2025-10-02 12:29:56.816 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.105 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.106 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.106 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.107 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.107 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.109 2 INFO nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Terminating instance
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquired lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.111 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.253 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:29:57 compute-1 ceph-mon[80926]: pgmap v1589: 305 pgs: 305 active+clean; 231 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.1 KiB/s wr, 226 op/s
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.714 2 DEBUG nova.network.neutron [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.735 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Releasing lock "refresh_cache-55bd545d-c449-4749-a3f1-b04f0f37e06e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.736 2 DEBUG nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:29:57 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000040.scope: Deactivated successfully.
Oct 02 12:29:57 compute-1 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000040.scope: Consumed 15.041s CPU time.
Oct 02 12:29:57 compute-1 systemd-machined[188247]: Machine qemu-33-instance-00000040 terminated.
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.960 2 INFO nova.virt.libvirt.driver [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance destroyed successfully.
Oct 02 12:29:57 compute-1 nova_compute[230518]: 2025-10-02 12:29:57.960 2 DEBUG nova.objects.instance [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lazy-loading 'resources' on Instance uuid 55bd545d-c449-4749-a3f1-b04f0f37e06e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:29:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:29:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:29:58.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:29:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:29:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:29:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:29:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:29:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:29:58 compute-1 ceph-mon[80926]: pgmap v1590: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.1 KiB/s wr, 211 op/s
Oct 02 12:29:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:29:58.841 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:29:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e234 e234: 3 total, 3 up, 3 in
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.101 2 INFO nova.virt.libvirt.driver [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deleting instance files /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e_del
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.101 2 INFO nova.virt.libvirt.driver [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deletion of /var/lib/nova/instances/55bd545d-c449-4749-a3f1-b04f0f37e06e_del complete
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.150 2 INFO nova.compute.manager [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 2.41 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG oslo.service.loopingcall [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.151 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:00.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:00 compute-1 ceph-mon[80926]: osdmap e234: 3 total, 3 up, 3 in
Oct 02 12:30:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:30:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.432 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.451 2 DEBUG nova.network.neutron [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.465 2 INFO nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Took 0.31 seconds to deallocate network for instance.
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.524 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.525 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:00 compute-1 nova_compute[230518]: 2025-10-02 12:30:00.605 2 DEBUG oslo_concurrency.processutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/653846976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.090 2 DEBUG oslo_concurrency.processutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.098 2 DEBUG nova.compute.provider_tree [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.119 2 DEBUG nova.scheduler.client.report [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.140 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.166 2 INFO nova.scheduler.client.report [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Deleted allocations for instance 55bd545d-c449-4749-a3f1-b04f0f37e06e
Oct 02 12:30:01 compute-1 nova_compute[230518]: 2025-10-02 12:30:01.254 2 DEBUG oslo_concurrency.lockutils [None req-fb31eb6b-6c97-454d-97ad-074db353f398 87db7657bb324d029ff3d66f218f1d8d 494736d8288b414094eb0bc6fbaa8cb7 - - default default] Lock "55bd545d-c449-4749-a3f1-b04f0f37e06e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:01 compute-1 ceph-mon[80926]: pgmap v1592: 305 pgs: 305 active+clean; 167 MiB data, 670 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.8 KiB/s wr, 204 op/s
Oct 02 12:30:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/653846976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:02.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:02 compute-1 nova_compute[230518]: 2025-10-02 12:30:02.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:02 compute-1 ceph-mon[80926]: pgmap v1593: 305 pgs: 305 active+clean; 162 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Oct 02 12:30:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2196927650' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3720487963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e235 e235: 3 total, 3 up, 3 in
Oct 02 12:30:02 compute-1 nova_compute[230518]: 2025-10-02 12:30:02.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:03 compute-1 ceph-mon[80926]: osdmap e235: 3 total, 3 up, 3 in
Oct 02 12:30:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:04.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e236 e236: 3 total, 3 up, 3 in
Oct 02 12:30:05 compute-1 ceph-mon[80926]: pgmap v1595: 305 pgs: 305 active+clean; 137 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 3.2 MiB/s wr, 129 op/s
Oct 02 12:30:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:06.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:06.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:06 compute-1 ceph-mon[80926]: osdmap e236: 3 total, 3 up, 3 in
Oct 02 12:30:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:30:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1721363249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:30:06 compute-1 ceph-mon[80926]: pgmap v1597: 305 pgs: 305 active+clean; 156 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 470 KiB/s rd, 6.6 MiB/s wr, 219 op/s
Oct 02 12:30:07 compute-1 nova_compute[230518]: 2025-10-02 12:30:07.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:07 compute-1 nova_compute[230518]: 2025-10-02 12:30:07.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:08.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:30:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:30:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:08 compute-1 ceph-mon[80926]: pgmap v1598: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 5.9 MiB/s wr, 244 op/s
Oct 02 12:30:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e237 e237: 3 total, 3 up, 3 in
Oct 02 12:30:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:10.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:10 compute-1 ceph-mon[80926]: pgmap v1599: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 734 KiB/s rd, 4.1 MiB/s wr, 208 op/s
Oct 02 12:30:10 compute-1 ceph-mon[80926]: osdmap e237: 3 total, 3 up, 3 in
Oct 02 12:30:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:12.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 e238: 3 total, 3 up, 3 in
Oct 02 12:30:12 compute-1 nova_compute[230518]: 2025-10-02 12:30:12.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:12 compute-1 nova_compute[230518]: 2025-10-02 12:30:12.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:12 compute-1 nova_compute[230518]: 2025-10-02 12:30:12.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408197.957015, 55bd545d-c449-4749-a3f1-b04f0f37e06e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:12 compute-1 nova_compute[230518]: 2025-10-02 12:30:12.961 2 INFO nova.compute.manager [-] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] VM Stopped (Lifecycle Event)
Oct 02 12:30:12 compute-1 nova_compute[230518]: 2025-10-02 12:30:12.981 2 DEBUG nova.compute.manager [None req-25bb732b-7918-44d7-9fc1-056d2940d766 - - - - - -] [instance: 55bd545d-c449-4749-a3f1-b04f0f37e06e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:14.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:14 compute-1 podman[259454]: 2025-10-02 12:30:14.830540378 +0000 UTC m=+0.079073229 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:30:14 compute-1 podman[259453]: 2025-10-02 12:30:14.878856329 +0000 UTC m=+0.123345563 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:30:15 compute-1 ceph-mon[80926]: pgmap v1601: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Oct 02 12:30:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:16 compute-1 ceph-mon[80926]: pgmap v1602: 305 pgs: 305 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 871 KiB/s wr, 188 op/s
Oct 02 12:30:16 compute-1 ceph-mon[80926]: osdmap e238: 3 total, 3 up, 3 in
Oct 02 12:30:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:16.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.765 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.766 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.804 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.935 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.936 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.950 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:30:16 compute-1 nova_compute[230518]: 2025-10-02 12:30:16.951 2 INFO nova.compute.claims [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.062 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2337983430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.516 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.522 2 DEBUG nova.compute.provider_tree [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.541 2 DEBUG nova.scheduler.client.report [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.574 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.574 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.669 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.670 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.688 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.714 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.928 2 DEBUG nova.policy [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '045de4bc70204ae8b6975513839061d8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '546222ddef05450d9aeb91e721403b5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.937 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.939 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.939 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating image(s)
Oct 02 12:30:17 compute-1 nova_compute[230518]: 2025-10-02 12:30:17.979 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:18 compute-1 ceph-mon[80926]: pgmap v1604: 305 pgs: 305 active+clean; 124 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 134 op/s
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.024 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.055 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.059 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.152 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.154 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.156 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.156 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.211 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.216 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 60c7cce3-3461-4cad-a135-46f35e607214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:18.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:18 compute-1 nova_compute[230518]: 2025-10-02 12:30:18.637 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Successfully created port: a5294afe-68a4-4f22-b51c-725ac6164e9f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.647 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Successfully updated port: a5294afe-68a4-4f22-b51c-725ac6164e9f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquired lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.669 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG nova.compute.manager [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-changed-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG nova.compute.manager [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Refreshing instance network info cache due to event network-changed-a5294afe-68a4-4f22-b51c-725ac6164e9f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.753 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:19 compute-1 nova_compute[230518]: 2025-10-02 12:30:19.811 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:30:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:20.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:20.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:20 compute-1 nova_compute[230518]: 2025-10-02 12:30:20.461 2 DEBUG nova.network.neutron [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:20 compute-1 ceph-mon[80926]: pgmap v1605: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 152 op/s
Oct 02 12:30:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3677734845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2337983430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1106284102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:20 compute-1 nova_compute[230518]: 2025-10-02 12:30:20.501 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Releasing lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:20 compute-1 nova_compute[230518]: 2025-10-02 12:30:20.501 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance network_info: |[{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:30:20 compute-1 nova_compute[230518]: 2025-10-02 12:30:20.502 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:20 compute-1 nova_compute[230518]: 2025-10-02 12:30:20.502 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Refreshing network info cache for port a5294afe-68a4-4f22-b51c-725ac6164e9f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 e239: 3 total, 3 up, 3 in
Oct 02 12:30:21 compute-1 nova_compute[230518]: 2025-10-02 12:30:21.817 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updated VIF entry in instance network info cache for port a5294afe-68a4-4f22-b51c-725ac6164e9f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:21 compute-1 nova_compute[230518]: 2025-10-02 12:30:21.818 2 DEBUG nova.network.neutron [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:21 compute-1 nova_compute[230518]: 2025-10-02 12:30:21.851 2 DEBUG oslo_concurrency.lockutils [req-65f2e8cc-9cac-4008-9cb7-a9aaaec45843 req-a25c1105-2d90-4acd-8b03-bb46902bdaf0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-60c7cce3-3461-4cad-a135-46f35e607214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:22.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:22.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:22 compute-1 ceph-mon[80926]: pgmap v1606: 305 pgs: 305 active+clean; 88 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 669 KiB/s rd, 19 KiB/s wr, 77 op/s
Oct 02 12:30:22 compute-1 ceph-mon[80926]: osdmap e239: 3 total, 3 up, 3 in
Oct 02 12:30:22 compute-1 sudo[259616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:22 compute-1 sudo[259616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-1 sudo[259616]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-1 sudo[259641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:22 compute-1 sudo[259641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-1 sudo[259641]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-1 nova_compute[230518]: 2025-10-02 12:30:22.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:22 compute-1 sudo[259666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:22 compute-1 sudo[259666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:22 compute-1 sudo[259666]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:22 compute-1 nova_compute[230518]: 2025-10-02 12:30:22.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:22 compute-1 sudo[259691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:30:22 compute-1 sudo[259691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:23 compute-1 podman[259788]: 2025-10-02 12:30:23.408550953 +0000 UTC m=+0.081830025 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:30:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:23 compute-1 podman[259788]: 2025-10-02 12:30:23.52569264 +0000 UTC m=+0.198971712 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 02 12:30:23 compute-1 ceph-mon[80926]: pgmap v1607: 305 pgs: 305 active+clean; 121 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.0 MiB/s wr, 96 op/s
Oct 02 12:30:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1752376986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2545413803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:24 compute-1 sudo[259691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:25 compute-1 ceph-mon[80926]: pgmap v1609: 305 pgs: 305 active+clean; 155 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 85 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Oct 02 12:30:25 compute-1 podman[259909]: 2025-10-02 12:30:25.85487312 +0000 UTC m=+0.095971591 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:30:25 compute-1 podman[259910]: 2025-10-02 12:30:25.859812985 +0000 UTC m=+0.100914746 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.927 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:30:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:26.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:30:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:26 compute-1 sudo[259949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:26 compute-1 sudo[259949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:26 compute-1 sudo[259949]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-1 sudo[259974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:30:26 compute-1 sudo[259974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:26 compute-1 sudo[259974]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-1 sudo[259999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:26 compute-1 sudo[259999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:26 compute-1 sudo[259999]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:26 compute-1 sudo[260024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:30:26 compute-1 sudo[260024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:27 compute-1 ceph-mon[80926]: pgmap v1610: 305 pgs: 305 active+clean; 195 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 6.3 MiB/s wr, 103 op/s
Oct 02 12:30:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:27 compute-1 sudo[260024]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:27 compute-1 nova_compute[230518]: 2025-10-02 12:30:27.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:27 compute-1 nova_compute[230518]: 2025-10-02 12:30:27.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:28 compute-1 nova_compute[230518]: 2025-10-02 12:30:28.042 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 60c7cce3-3461-4cad-a135-46f35e607214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:28 compute-1 nova_compute[230518]: 2025-10-02 12:30:28.116 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] resizing rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:30:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:28.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:28 compute-1 ceph-mon[80926]: pgmap v1611: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.613 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'migration_context' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.628 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.628 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Ensure instance console log exists: /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.629 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.630 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.630 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.634 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start _get_guest_xml network_info=[{"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.640 2 WARNING nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.646 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.647 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.651 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.652 2 DEBUG nova.virt.libvirt.host [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.654 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.654 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.655 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.655 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.656 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.656 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.657 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.657 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.658 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.659 2 DEBUG nova.virt.hardware [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:31 compute-1 nova_compute[230518]: 2025-10-02 12:30:31.664 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:31 compute-1 ceph-mon[80926]: pgmap v1612: 305 pgs: 305 active+clean; 248 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.7 MiB/s wr, 183 op/s
Oct 02 12:30:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1719555084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2808893995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3003137044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.151 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.191 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.196 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:32.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/750541670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.822 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.825 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.826 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.828 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.830 2 DEBUG nova.objects.instance [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.864 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <uuid>60c7cce3-3461-4cad-a135-46f35e607214</uuid>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <name>instance-00000048</name>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:name>tempest-ListServersNegativeTestJSON-server-978789831-2</nova:name>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:30:31</nova:creationTime>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:user uuid="045de4bc70204ae8b6975513839061d8">tempest-ListServersNegativeTestJSON-400261674-project-member</nova:user>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:project uuid="546222ddef05450d9aeb91e721403b5b">tempest-ListServersNegativeTestJSON-400261674</nova:project>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <nova:port uuid="a5294afe-68a4-4f22-b51c-725ac6164e9f">
Oct 02 12:30:32 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <system>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="serial">60c7cce3-3461-4cad-a135-46f35e607214</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="uuid">60c7cce3-3461-4cad-a135-46f35e607214</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </system>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <os>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </os>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <features>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </features>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/60c7cce3-3461-4cad-a135-46f35e607214_disk">
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </source>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/60c7cce3-3461-4cad-a135-46f35e607214_disk.config">
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </source>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:30:32 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:c3:ab:65"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <target dev="tapa5294afe-68"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/console.log" append="off"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <video>
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </video>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:30:32 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:30:32 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:30:32 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:30:32 compute-1 nova_compute[230518]: </domain>
Oct 02 12:30:32 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.866 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Preparing to wait for external event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.867 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.867 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.868 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.869 2 DEBUG nova.virt.libvirt.vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:17Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.869 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.870 2 DEBUG nova.network.os_vif_util [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.870 2 DEBUG os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.877 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5294afe-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.878 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5294afe-68, col_values=(('external_ids', {'iface-id': 'a5294afe-68a4-4f22-b51c-725ac6164e9f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:ab:65', 'vm-uuid': '60c7cce3-3461-4cad-a135-46f35e607214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:32 compute-1 NetworkManager[44960]: <info>  [1759408232.8811] manager: (tapa5294afe-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.888 2 INFO os_vif [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68')
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.959 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.961 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.962 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] No VIF found with MAC fa:16:3e:c3:ab:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:32 compute-1 nova_compute[230518]: 2025-10-02 12:30:32.963 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Using config drive
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.007 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:33 compute-1 ceph-mon[80926]: pgmap v1613: 305 pgs: 305 active+clean; 253 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.9 MiB/s wr, 199 op/s
Oct 02 12:30:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3003137044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.604 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Creating config drive at /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.613 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2opik9yc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.769 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2opik9yc" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.797 2 DEBUG nova.storage.rbd_utils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] rbd image 60c7cce3-3461-4cad-a135-46f35e607214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:33 compute-1 nova_compute[230518]: 2025-10-02 12:30:33.801 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config 60c7cce3-3461-4cad-a135-46f35e607214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/750541670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:34 compute-1 ceph-mon[80926]: pgmap v1614: 305 pgs: 305 active+clean; 253 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 197 op/s
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.552 2 DEBUG oslo_concurrency.processutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config 60c7cce3-3461-4cad-a135-46f35e607214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.553 2 INFO nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deleting local config drive /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214/disk.config because it was imported into RBD.
Oct 02 12:30:35 compute-1 kernel: tapa5294afe-68: entered promiscuous mode
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.6248] manager: (tapa5294afe-68): new Tun device (/org/freedesktop/NetworkManager/Devices/143)
Oct 02 12:30:35 compute-1 ovn_controller[129257]: 2025-10-02T12:30:35Z|00307|binding|INFO|Claiming lport a5294afe-68a4-4f22-b51c-725ac6164e9f for this chassis.
Oct 02 12:30:35 compute-1 ovn_controller[129257]: 2025-10-02T12:30:35Z|00308|binding|INFO|a5294afe-68a4-4f22-b51c-725ac6164e9f: Claiming fa:16:3e:c3:ab:65 10.100.0.8
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.641 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:ab:65 10.100.0.8'], port_security=['fa:16:3e:c3:ab:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '60c7cce3-3461-4cad-a135-46f35e607214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a5294afe-68a4-4f22-b51c-725ac6164e9f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.643 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a5294afe-68a4-4f22-b51c-725ac6164e9f in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 bound to our chassis
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.647 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.663 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f998613a-0aaf-403e-b9f0-a7e67b7c55f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.664 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0f24c0d-e1 in ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:35 compute-1 systemd-udevd[260288]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.667 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0f24c0d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d35d3eb9-c4e9-43a3-982e-11e64f5f7a17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4cef17-1b4a-4aaf-9bfd-b63833eaae12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 systemd-machined[188247]: New machine qemu-36-instance-00000048.
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.684 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[27797add-e2a7-45c3-916c-f5e030d55ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.6879] device (tapa5294afe-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:35 compute-1 systemd[1]: Started Virtual Machine qemu-36-instance-00000048.
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.6908] device (tapa5294afe-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.702 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1de3c155-0252-4a4f-98bc-7df52f47e747]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 ovn_controller[129257]: 2025-10-02T12:30:35Z|00309|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f ovn-installed in OVS
Oct 02 12:30:35 compute-1 ovn_controller[129257]: 2025-10-02T12:30:35Z|00310|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f up in Southbound
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.732 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c83ffce6-e11d-4e2d-832b-ca365cab9e1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.7375] manager: (tapf0f24c0d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/144)
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.736 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2feb2783-f8f1-406d-a1e6-e849b6b332e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.765 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc1fdf5-a0b4-4d9d-9e26-33035f263749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.767 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbb14f1-cc4e-4a92-9e93-2b7ffc89af9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.7937] device (tapf0f24c0d-e0): carrier: link connected
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.797 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a05f670b-1753-483c-8471-6bc9fea371a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.813 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20fd004e-f591-4834-b504-ca3a9fca872b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609935, 'reachable_time': 16042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260321, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.827 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6691d624-6532-471c-9c8f-486ec42ed481]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:5ed7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609935, 'tstamp': 609935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260322, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.841 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18a28e3e-5088-458d-9f6b-f4a6ed482f51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0f24c0d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:5e:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609935, 'reachable_time': 16042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260323, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.875 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7049614d-ef13-4ea6-aa4a-025abe65c5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.931 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3434f0eb-677a-4625-937a-659ebfcfab75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.932 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.932 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.933 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0f24c0d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-1 kernel: tapf0f24c0d-e0: entered promiscuous mode
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 NetworkManager[44960]: <info>  [1759408235.9373] manager: (tapf0f24c0d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.937 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0f24c0d-e0, col_values=(('external_ids', {'iface-id': 'aa017360-5737-4ad9-a150-2ba1122b7ea5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.940 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.940 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56a77635-df2e-4d32-9cc8-c836dadb43cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:35 compute-1 ovn_controller[129257]: 2025-10-02T12:30:35Z|00311|binding|INFO|Releasing lport aa017360-5737-4ad9-a150-2ba1122b7ea5 from this chassis (sb_readonly=0)
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.941 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f0f24c0d-e50a-47b1-8faa-15e38342da63.pid.haproxy
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f0f24c0d-e50a-47b1-8faa-15e38342da63
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:35.941 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'env', 'PROCESS_TAG=haproxy-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0f24c0d-e50a-47b1-8faa-15e38342da63.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:35 compute-1 nova_compute[230518]: 2025-10-02 12:30:35.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:30:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:30:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/232359257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:36 compute-1 nova_compute[230518]: 2025-10-02 12:30:36.059 2 DEBUG nova.compute.manager [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:36 compute-1 nova_compute[230518]: 2025-10-02 12:30:36.060 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:36 compute-1 nova_compute[230518]: 2025-10-02 12:30:36.060 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:36 compute-1 nova_compute[230518]: 2025-10-02 12:30:36.061 2 DEBUG oslo_concurrency.lockutils [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:36 compute-1 nova_compute[230518]: 2025-10-02 12:30:36.061 2 DEBUG nova.compute.manager [req-0170a24d-3f1a-4926-b380-181cf931d715 req-e3b5bbe1-d20d-4987-9803-db7e80d70bdb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Processing event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:36 compute-1 podman[260373]: 2025-10-02 12:30:36.279698505 +0000 UTC m=+0.031598525 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:36 compute-1 podman[260373]: 2025-10-02 12:30:36.466027908 +0000 UTC m=+0.217927898 container create bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:36 compute-1 systemd[1]: Started libpod-conmon-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope.
Oct 02 12:30:36 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:30:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/187f7d9e4444998b8922733136aa4f6bb49ad93e881eff99c8c8d9eeefe99b35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:36 compute-1 podman[260373]: 2025-10-02 12:30:36.56969856 +0000 UTC m=+0.321598580 container init bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:30:36 compute-1 podman[260373]: 2025-10-02 12:30:36.574925375 +0000 UTC m=+0.326825365 container start bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:30:36 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : New worker (260412) forked
Oct 02 12:30:36 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : Loading success.
Oct 02 12:30:37 compute-1 ceph-mon[80926]: pgmap v1615: 305 pgs: 305 active+clean; 269 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Oct 02 12:30:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/315921504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.244 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2439508, 60c7cce3-3461-4cad-a135-46f35e607214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.245 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Started (Lifecycle Event)
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.248 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.252 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.257 2 INFO nova.virt.libvirt.driver [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance spawned successfully.
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.257 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.291 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.292 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.293 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.294 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.295 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.296 2 DEBUG nova.virt.libvirt.driver [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.308 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.309 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2440524, 60c7cce3-3461-4cad-a135-46f35e607214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.309 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Paused (Lifecycle Event)
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.336 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.340 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408237.2514226, 60c7cce3-3461-4cad-a135-46f35e607214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.341 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Resumed (Lifecycle Event)
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.362 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 19.42 seconds to spawn the instance on the hypervisor.
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.362 2 DEBUG nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.368 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.377 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.413 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.437 2 INFO nova.compute.manager [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 20.54 seconds to build instance.
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.458 2 DEBUG oslo_concurrency.lockutils [None req-0c91cd4e-1df6-4f78-bc49-c7a4b8fc3156 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:37 compute-1 nova_compute[230518]: 2025-10-02 12:30:37.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.185 2 DEBUG nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.186 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.186 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.187 2 DEBUG oslo_concurrency.lockutils [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.187 2 DEBUG nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:38 compute-1 nova_compute[230518]: 2025-10-02 12:30:38.188 2 WARNING nova.compute.manager [req-1fef6459-cd31-4877-a8d0-aa3bbb51d031 req-d88b6a71-dbd8-47e9-b64c-fa367a84690f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received unexpected event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with vm_state active and task_state None.
Oct 02 12:30:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 12:30:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 12:30:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:39 compute-1 ceph-mon[80926]: pgmap v1616: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct 02 12:30:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:40.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:30:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:40.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:30:40 compute-1 ceph-mon[80926]: pgmap v1617: 305 pgs: 305 active+clean; 321 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 934 KiB/s rd, 3.1 MiB/s wr, 138 op/s
Oct 02 12:30:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:42 compute-1 nova_compute[230518]: 2025-10-02 12:30:42.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:42 compute-1 nova_compute[230518]: 2025-10-02 12:30:42.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:43 compute-1 ceph-mon[80926]: pgmap v1618: 305 pgs: 305 active+clean; 336 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 221 op/s
Oct 02 12:30:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.085 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.203 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.204 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3250871919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.256 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:30:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:44.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:44.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.435 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.436 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.444 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.445 2 INFO nova.compute.claims [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.646 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.679 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.679 2 DEBUG nova.compute.provider_tree [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.700 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:30:44 compute-1 nova_compute[230518]: 2025-10-02 12:30:44.814 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.067 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.098 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:45 compute-1 ceph-mon[80926]: pgmap v1619: 305 pgs: 305 active+clean; 298 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.1 MiB/s wr, 329 op/s
Oct 02 12:30:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/744951610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/97662532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.296976) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245297029, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2473, "num_deletes": 264, "total_data_size": 5654713, "memory_usage": 5743656, "flush_reason": "Manual Compaction"}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245323010, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 3707008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36290, "largest_seqno": 38758, "table_properties": {"data_size": 3696471, "index_size": 6775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22145, "raw_average_key_size": 21, "raw_value_size": 3675363, "raw_average_value_size": 3487, "num_data_blocks": 291, "num_entries": 1054, "num_filter_entries": 1054, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408059, "oldest_key_time": 1759408059, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26090 microseconds, and 12709 cpu microseconds.
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323068) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 3707008 bytes OK
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.323094) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324789) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324809) EVENT_LOG_v1 {"time_micros": 1759408245324803, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.324829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 5643510, prev total WAL file size 5643510, number of live WAL files 2.
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.326620) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(3620KB)], [69(8322KB)]
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245326692, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 12229095, "oldest_snapshot_seqno": -1}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6410 keys, 12064410 bytes, temperature: kUnknown
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245388854, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12064410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12018477, "index_size": 28799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 163825, "raw_average_key_size": 25, "raw_value_size": 11900554, "raw_average_value_size": 1856, "num_data_blocks": 1163, "num_entries": 6410, "num_filter_entries": 6410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408245, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.389056) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12064410 bytes
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.390821) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 193.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.3) OK, records in: 6952, records dropped: 542 output_compression: NoCompression
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.390841) EVENT_LOG_v1 {"time_micros": 1759408245390832, "job": 42, "event": "compaction_finished", "compaction_time_micros": 62218, "compaction_time_cpu_micros": 29118, "output_level": 6, "num_output_files": 1, "total_output_size": 12064410, "num_input_records": 6952, "num_output_records": 6410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245391642, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408245393290, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.326462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:45.393478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/829913293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.572 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.579 2 DEBUG nova.compute.provider_tree [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.650 2 DEBUG nova.scheduler.client.report [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.711 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.712 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.717 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.717 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.718 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.718 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.810 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.811 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:30:45 compute-1 podman[260452]: 2025-10-02 12:30:45.833561407 +0000 UTC m=+0.069718244 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.857 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.880 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:30:45 compute-1 podman[260450]: 2025-10-02 12:30:45.901411892 +0000 UTC m=+0.140035168 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.986 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.989 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:30:45 compute-1 nova_compute[230518]: 2025-10-02 12:30:45.990 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating image(s)
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.025 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.061 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.099 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.106 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.146 2 DEBUG nova.policy [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eed4fdf8b49f41bfb982bc858fa76bef', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a70008e0fc32481f8ed89060220b28d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:30:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/737162632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.190 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.191 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.192 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.193 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.228 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.235 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:46 compute-1 ceph-mon[80926]: pgmap v1620: 305 pgs: 305 active+clean; 248 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.0 MiB/s wr, 337 op/s
Oct 02 12:30:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/829913293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/737162632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.390 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.390 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:30:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:46.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.625 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4391MB free_disk=20.863059997558594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.627 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:46 compute-1 sudo[260610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:30:46 compute-1 sudo[260610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:46 compute-1 sudo[260610]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.683 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:46 compute-1 sudo[260635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:30:46 compute-1 sudo[260635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:30:46 compute-1 sudo[260635]: pam_unix(sudo:session): session closed for user root
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.774 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] resizing rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 60c7cce3-3461-4cad-a135-46f35e607214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 86aa4a20-96fe-4862-a0c5-04ad92e40f1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.897 2 DEBUG nova.objects.instance [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.930 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.931 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Ensure instance console log exists: /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.931 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.932 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.932 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:46 compute-1 nova_compute[230518]: 2025-10-02 12:30:46.992 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.247 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Successfully created port: 4a337dd1-6b9e-4a79-893f-de7400180a58 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:30:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:30:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1763093652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4286409731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.480 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.485 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.594 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.643 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.644 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.718 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.719 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.720 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.721 2 INFO nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Terminating instance
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.722 2 DEBUG nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 kernel: tapa5294afe-68 (unregistering): left promiscuous mode
Oct 02 12:30:47 compute-1 NetworkManager[44960]: <info>  [1759408247.8073] device (tapa5294afe-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:30:47 compute-1 ovn_controller[129257]: 2025-10-02T12:30:47Z|00312|binding|INFO|Releasing lport a5294afe-68a4-4f22-b51c-725ac6164e9f from this chassis (sb_readonly=0)
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 ovn_controller[129257]: 2025-10-02T12:30:47Z|00313|binding|INFO|Setting lport a5294afe-68a4-4f22-b51c-725ac6164e9f down in Southbound
Oct 02 12:30:47 compute-1 ovn_controller[129257]: 2025-10-02T12:30:47Z|00314|binding|INFO|Removing iface tapa5294afe-68 ovn-installed in OVS
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000048.scope: Deactivated successfully.
Oct 02 12:30:47 compute-1 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000048.scope: Consumed 11.654s CPU time.
Oct 02 12:30:47 compute-1 systemd-machined[188247]: Machine qemu-36-instance-00000048 terminated.
Oct 02 12:30:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.875 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:ab:65 10.100.0.8'], port_security=['fa:16:3e:c3:ab:65 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '60c7cce3-3461-4cad-a135-46f35e607214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '546222ddef05450d9aeb91e721403b5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0fed574f-e419-4ee1-a1ca-0cdfc77deb52', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c9e589ca-e402-46cd-b1b2-28eed346077b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a5294afe-68a4-4f22-b51c-725ac6164e9f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.877 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a5294afe-68a4-4f22-b51c-725ac6164e9f in datapath f0f24c0d-e50a-47b1-8faa-15e38342da63 unbound from our chassis
Oct 02 12:30:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0f24c0d-e50a-47b1-8faa-15e38342da63, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:30:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4de1a7-fbdb-4f9a-83dd-fb9be3d3038c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:47.881 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 namespace which is not needed anymore
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.962 2 INFO nova.virt.libvirt.driver [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Instance destroyed successfully.
Oct 02 12:30:47 compute-1 nova_compute[230518]: 2025-10-02 12:30:47.963 2 DEBUG nova.objects.instance [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lazy-loading 'resources' on Instance uuid 60c7cce3-3461-4cad-a135-46f35e607214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.023 2 DEBUG nova.virt.libvirt.vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-978789831',display_name='tempest-ListServersNegativeTestJSON-server-978789831-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-978789831-2',id=72,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:30:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='546222ddef05450d9aeb91e721403b5b',ramdisk_id='',reservation_id='r-dhkt2a9n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-400261674',owner_user_name='tempest-ListServersNegativeTestJSON-400261674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:37Z,user_data=None,user_id='045de4bc70204ae8b6975513839061d8',uuid=60c7cce3-3461-4cad-a135-46f35e607214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.024 2 DEBUG nova.network.os_vif_util [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converting VIF {"id": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "address": "fa:16:3e:c3:ab:65", "network": {"id": "f0f24c0d-e50a-47b1-8faa-15e38342da63", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-861963978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "546222ddef05450d9aeb91e721403b5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5294afe-68", "ovs_interfaceid": "a5294afe-68a4-4f22-b51c-725ac6164e9f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.025 2 DEBUG nova.network.os_vif_util [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.026 2 DEBUG os_vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5294afe-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.038 2 INFO os_vif [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:ab:65,bridge_name='br-int',has_traffic_filtering=True,id=a5294afe-68a4-4f22-b51c-725ac6164e9f,network=Network(f0f24c0d-e50a-47b1-8faa-15e38342da63),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5294afe-68')
Oct 02 12:30:48 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : haproxy version is 2.8.14-c23fe91
Oct 02 12:30:48 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [NOTICE]   (260410) : path to executable is /usr/sbin/haproxy
Oct 02 12:30:48 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [ALERT]    (260410) : Current worker (260412) exited with code 143 (Terminated)
Oct 02 12:30:48 compute-1 neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63[260406]: [WARNING]  (260410) : All workers exited. Exiting... (0)
Oct 02 12:30:48 compute-1 systemd[1]: libpod-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope: Deactivated successfully.
Oct 02 12:30:48 compute-1 podman[260788]: 2025-10-02 12:30:48.090126782 +0000 UTC m=+0.070713586 container died bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:30:48 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad-userdata-shm.mount: Deactivated successfully.
Oct 02 12:30:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-187f7d9e4444998b8922733136aa4f6bb49ad93e881eff99c8c8d9eeefe99b35-merged.mount: Deactivated successfully.
Oct 02 12:30:48 compute-1 podman[260788]: 2025-10-02 12:30:48.144347318 +0000 UTC m=+0.124934092 container cleanup bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:30:48 compute-1 systemd[1]: libpod-conmon-bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad.scope: Deactivated successfully.
Oct 02 12:30:48 compute-1 podman[260836]: 2025-10-02 12:30:48.22577081 +0000 UTC m=+0.053376110 container remove bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af34291d-588a-46bb-9218-d85f407f8238]: (4, ('Thu Oct  2 12:30:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad)\nbee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad\nThu Oct  2 12:30:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 (bee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad)\nbee78d39e2decf3c8f69abe70d0331d0c4f3dc1ee68ad65f955413d86a07a1ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.234 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b56605a9-a5da-4e17-9be4-6f605f0b7b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.235 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0f24c0d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:48.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:48 compute-1 kernel: tapf0f24c0d-e0: left promiscuous mode
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57ad28c8-9651-4814-adca-1dece810835d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.363 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7140fa04-dcfb-47e4-b099-c50bb5ca0a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d31b1b38-91d8-402b-927e-3ab0e0ceae2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.381 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33cd9c14-b4b7-4bf9-a603-e9d78e980021]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609928, 'reachable_time': 32529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260851, 'error': None, 'target': 'ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 systemd[1]: run-netns-ovnmeta\x2df0f24c0d\x2de50a\x2d47b1\x2d8faa\x2d15e38342da63.mount: Deactivated successfully.
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.383 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0f24c0d-e50a-47b1-8faa-15e38342da63 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:30:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:48.383 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fb905d-97d0-4832-9ebe-6e9bea2f81eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.402 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.403 2 DEBUG oslo_concurrency.lockutils [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.404 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.404 2 DEBUG nova.compute.manager [req-d8aee92d-8985-4a7b-abf8-5590cfef7325 req-2769308d-9333-4f00-88bb-83c2c17a0ca5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-unplugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:30:48 compute-1 ceph-mon[80926]: pgmap v1621: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.5 MiB/s wr, 358 op/s
Oct 02 12:30:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4286409731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1935961188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.467109) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248467342, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 319, "num_deletes": 251, "total_data_size": 197299, "memory_usage": 204744, "flush_reason": "Manual Compaction"}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248470388, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 129947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38763, "largest_seqno": 39077, "table_properties": {"data_size": 127837, "index_size": 274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5344, "raw_average_key_size": 18, "raw_value_size": 123706, "raw_average_value_size": 434, "num_data_blocks": 11, "num_entries": 285, "num_filter_entries": 285, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408246, "oldest_key_time": 1759408246, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3303 microseconds, and 1080 cpu microseconds.
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470414) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 129947 bytes OK
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.470431) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471570) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471582) EVENT_LOG_v1 {"time_micros": 1759408248471578, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 195007, prev total WAL file size 195007, number of live WAL files 2.
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.472024) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(126KB)], [72(11MB)]
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248472053, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12194357, "oldest_snapshot_seqno": -1}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6183 keys, 10187978 bytes, temperature: kUnknown
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248538664, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10187978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10145397, "index_size": 26023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15493, "raw_key_size": 159817, "raw_average_key_size": 25, "raw_value_size": 10033137, "raw_average_value_size": 1622, "num_data_blocks": 1038, "num_entries": 6183, "num_filter_entries": 6183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.538945) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10187978 bytes
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.541249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 152.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.5 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(172.2) write-amplify(78.4) OK, records in: 6695, records dropped: 512 output_compression: NoCompression
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.541270) EVENT_LOG_v1 {"time_micros": 1759408248541260, "job": 44, "event": "compaction_finished", "compaction_time_micros": 66750, "compaction_time_cpu_micros": 21234, "output_level": 6, "num_output_files": 1, "total_output_size": 10187978, "num_input_records": 6695, "num_output_records": 6183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248541417, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408248543896, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.471922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:30:48.544024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.957 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Successfully updated port: 4a337dd1-6b9e-4a79-893f-de7400180a58 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.991 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.992 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:48 compute-1 nova_compute[230518]: 2025-10-02 12:30:48.992 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.154 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.589 2 INFO nova.virt.libvirt.driver [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deleting instance files /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214_del
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.591 2 INFO nova.virt.libvirt.driver [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deletion of /var/lib/nova/instances/60c7cce3-3461-4cad-a135-46f35e607214_del complete
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.596 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.597 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.597 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.598 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.598 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.652 2 INFO nova.compute.manager [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 1.93 seconds to destroy the instance on the hypervisor.
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG oslo.service.loopingcall [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:30:49 compute-1 nova_compute[230518]: 2025-10-02 12:30:49.653 2 DEBUG nova.network.neutron [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:30:50 compute-1 nova_compute[230518]: 2025-10-02 12:30:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:30:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:50.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:30:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:50.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:50 compute-1 ceph-mon[80926]: pgmap v1622: 305 pgs: 305 active+clean; 237 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.0 MiB/s wr, 307 op/s
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.000 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "60c7cce3-3461-4cad-a135-46f35e607214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.001 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] No waiting events found dispatching network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.002 2 WARNING nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received unexpected event network-vif-plugged-a5294afe-68a4-4f22-b51c-725ac6164e9f for instance with vm_state active and task_state deleting.
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG nova.compute.manager [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing instance network info cache due to event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.002 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:51.286 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:51.290 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.533 2 DEBUG nova.network.neutron [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance network_info: |[{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.564 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.567 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start _get_guest_xml network_info=[{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.573 2 WARNING nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.592 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.593 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.597 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.598 2 DEBUG nova.virt.libvirt.host [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.599 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.599 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.600 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.601 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.602 2 DEBUG nova.virt.hardware [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.605 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.667 2 DEBUG nova.network.neutron [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.690 2 INFO nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Took 2.04 seconds to deallocate network for instance.
Oct 02 12:30:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/128164467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.752 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.753 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:51 compute-1 nova_compute[230518]: 2025-10-02 12:30:51.875 2 DEBUG oslo_concurrency.processutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/753036913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.191 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.217 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.221 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:30:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4246083289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:52.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.347 2 DEBUG oslo_concurrency.processutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.355 2 DEBUG nova.compute.provider_tree [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.384 2 DEBUG nova.scheduler.client.report [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:30:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.412 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.455 2 INFO nova.scheduler.client.report [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Deleted allocations for instance 60c7cce3-3461-4cad-a135-46f35e607214
Oct 02 12:30:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:30:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1154353251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.665 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.667 2 DEBUG nova.virt.libvirt.vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.667 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.669 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.671 2 DEBUG nova.objects.instance [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.729 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <uuid>86aa4a20-96fe-4862-a0c5-04ad92e40f1b</uuid>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <name>instance-0000004a</name>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersTestJSON-server-606434201</nova:name>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:30:51</nova:creationTime>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:user uuid="eed4fdf8b49f41bfb982bc858fa76bef">tempest-ServersTestJSON-128626597-project-member</nova:user>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:project uuid="a70008e0fc32481f8ed89060220b28d7">tempest-ServersTestJSON-128626597</nova:project>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <nova:port uuid="4a337dd1-6b9e-4a79-893f-de7400180a58">
Oct 02 12:30:52 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <system>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="serial">86aa4a20-96fe-4862-a0c5-04ad92e40f1b</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="uuid">86aa4a20-96fe-4862-a0c5-04ad92e40f1b</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </system>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <os>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </os>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <features>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </features>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk">
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config">
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:30:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:e1:a3:65"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <target dev="tap4a337dd1-6b"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/console.log" append="off"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <video>
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </video>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:30:52 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:30:52 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:30:52 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:30:52 compute-1 nova_compute[230518]: </domain>
Oct 02 12:30:52 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.731 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Preparing to wait for external event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.732 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.732 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.733 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.734 2 DEBUG nova.virt.libvirt.vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:30:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.735 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.736 2 DEBUG nova.network.os_vif_util [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.737 2 DEBUG os_vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a337dd1-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.746 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a337dd1-6b, col_values=(('external_ids', {'iface-id': '4a337dd1-6b9e-4a79-893f-de7400180a58', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:a3:65', 'vm-uuid': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-1 NetworkManager[44960]: <info>  [1759408252.7496] manager: (tap4a337dd1-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.756 2 INFO os_vif [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b')
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.768 2 DEBUG oslo_concurrency.lockutils [None req-5c78c4f1-21ba-4c5b-af26-71f2025b56be 045de4bc70204ae8b6975513839061d8 546222ddef05450d9aeb91e721403b5b - - default default] Lock "60c7cce3-3461-4cad-a135-46f35e607214" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.840 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.841 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.841 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] No VIF found with MAC fa:16:3e:e1:a3:65, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.843 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Using config drive
Oct 02 12:30:52 compute-1 ceph-mon[80926]: pgmap v1623: 305 pgs: 305 active+clean; 230 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.4 MiB/s wr, 395 op/s
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4223918663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/564061492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/753036913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4246083289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1154353251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3813447493' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:52 compute-1 nova_compute[230518]: 2025-10-02 12:30:52.897 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.188 2 DEBUG nova.compute.manager [req-afc85540-41cb-454b-8d9a-d604ea3ac187 req-0ceede55-2e55-476e-b233-466fd69cf5bd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Received event network-vif-deleted-a5294afe-68a4-4f22-b51c-725ac6164e9f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.634 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Creating config drive at /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.646 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_ebhzg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.788 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn0_ebhzg" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.833 2 DEBUG nova.storage.rbd_utils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] rbd image 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:30:53 compute-1 nova_compute[230518]: 2025-10-02 12:30:53.838 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:30:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2493142074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.216 2 DEBUG oslo_concurrency.processutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config 86aa4a20-96fe-4862-a0c5-04ad92e40f1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.217 2 INFO nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deleting local config drive /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b/disk.config because it was imported into RBD.
Oct 02 12:30:54 compute-1 kernel: tap4a337dd1-6b: entered promiscuous mode
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.2901] manager: (tap4a337dd1-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Oct 02 12:30:54 compute-1 ovn_controller[129257]: 2025-10-02T12:30:54Z|00315|binding|INFO|Claiming lport 4a337dd1-6b9e-4a79-893f-de7400180a58 for this chassis.
Oct 02 12:30:54 compute-1 ovn_controller[129257]: 2025-10-02T12:30:54Z|00316|binding|INFO|4a337dd1-6b9e-4a79-893f-de7400180a58: Claiming fa:16:3e:e1:a3:65 10.100.0.10
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.305 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a3:65 10.100.0.10'], port_security=['fa:16:3e:e1:a3:65 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a70008e0fc32481f8ed89060220b28d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b5e4d5c-c242-45f9-85c8-d58de980e569', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4850e48d-a493-4bb4-bb29-020fdb04c9bf, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4a337dd1-6b9e-4a79-893f-de7400180a58) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.307 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4a337dd1-6b9e-4a79-893f-de7400180a58 in datapath 41e6c621-c2f2-4fb3-a93d-8eda22ec0438 bound to our chassis
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.309 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41e6c621-c2f2-4fb3-a93d-8eda22ec0438
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.324 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd635af-4e68-48d1-8292-c599946494b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 systemd-udevd[261011]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.325 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41e6c621-c1 in ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.327 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41e6c621-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2f97c4-63f2-430d-89d3-4319d988a1e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.328 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[406ea6a5-d5a2-4dc4-a50d-3d53764a8731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:54.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:54 compute-1 systemd-machined[188247]: New machine qemu-37-instance-0000004a.
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.343 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[10352e60-c0ea-4fca-a11d-a008f41ba956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.3500] device (tap4a337dd1-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.3518] device (tap4a337dd1-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfbb0a3-f21a-42a6-a3e4-dabb483bcc65]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 systemd[1]: Started Virtual Machine qemu-37-instance-0000004a.
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 ovn_controller[129257]: 2025-10-02T12:30:54Z|00317|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 ovn-installed in OVS
Oct 02 12:30:54 compute-1 ovn_controller[129257]: 2025-10-02T12:30:54Z|00318|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 up in Southbound
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.412 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[70a64de9-fbe7-4bc4-9974-824e3c54e46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.4222] manager: (tap41e6c621-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/148)
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.421 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[608cbc54-143e-4ed5-aa97-f2b7c92ef056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.458 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[40e4dc89-896c-4a36-a26c-ab08d91d2b91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.462 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[762d7934-d0e2-4f55-8221-4ef818993123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.5063] device (tap41e6c621-c0): carrier: link connected
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.515 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated VIF entry in instance network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.516 2 DEBUG nova.network.neutron [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[141a481c-0c39-42a4-a21b-e4aded781caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[083b41f7-3cae-4339-83cf-59a13982db7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41e6c621-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:fa:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611806, 'reachable_time': 37356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261044, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.576 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bd3476-377c-4536-b39b-bdabac043c7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:fa62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611806, 'tstamp': 611806}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261045, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.585 2 DEBUG oslo_concurrency.lockutils [req-c2fbae08-7d8b-4ac0-af3a-62c220437c82 req-bddfb720-1ffc-4dc2-9d89-16e17f05d68f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.608 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e930d68e-230a-4073-8b00-8d4d525d2d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41e6c621-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:fa:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611806, 'reachable_time': 37356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261046, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44a8ab6e-6596-418a-81b1-0a7cbc254f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.762 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e81892a1-185a-4add-bab8-6d9268904fba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.765 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41e6c621-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.765 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.766 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41e6c621-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:54 compute-1 NetworkManager[44960]: <info>  [1759408254.7700] manager: (tap41e6c621-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/149)
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 kernel: tap41e6c621-c0: entered promiscuous mode
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41e6c621-c0, col_values=(('external_ids', {'iface-id': 'e11ef0cd-0cee-4792-8183-b5f339fd3fb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 ovn_controller[129257]: 2025-10-02T12:30:54Z|00319|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct 02 12:30:54 compute-1 nova_compute[230518]: 2025-10-02 12:30:54.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.802 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0fae875f-e15f-44c2-ad05-9f936677538a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.804 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-41e6c621-c2f2-4fb3-a93d-8eda22ec0438
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.pid.haproxy
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 41e6c621-c2f2-4fb3-a93d-8eda22ec0438
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:30:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:54.804 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'env', 'PROCESS_TAG=haproxy-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41e6c621-c2f2-4fb3-a93d-8eda22ec0438.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:30:55 compute-1 ceph-mon[80926]: pgmap v1624: 305 pgs: 305 active+clean; 213 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.7 MiB/s wr, 333 op/s
Oct 02 12:30:55 compute-1 podman[261120]: 2025-10-02 12:30:55.206741613 +0000 UTC m=+0.056932913 container create 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.261 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:55 compute-1 systemd[1]: Started libpod-conmon-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.263 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.264 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.266 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.267 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Processing event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.267 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.268 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.269 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.269 2 DEBUG oslo_concurrency.lockutils [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.270 2 DEBUG nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:30:55 compute-1 podman[261120]: 2025-10-02 12:30:55.180802456 +0000 UTC m=+0.030993746 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.271 2 WARNING nova.compute.manager [req-dfe367d1-039c-4d17-b6a3-7ceb28a25580 req-f230c201-1530-4b85-920a-dbcc652a075c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received unexpected event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with vm_state building and task_state spawning.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.276 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.277 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.275059, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.278 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Started (Lifecycle Event)
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.284 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.290 2 INFO nova.virt.libvirt.driver [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance spawned successfully.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.290 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:30:55 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.297 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.299 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d688486329266ad986868fb626471e07a89a9808e3f1f1d1725bea2902a120a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.307 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.308 2 DEBUG nova.virt.libvirt.driver [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.314 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.315 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.2759573, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.315 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Paused (Lifecycle Event)
Oct 02 12:30:55 compute-1 podman[261120]: 2025-10-02 12:30:55.324043394 +0000 UTC m=+0.174234714 container init 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:30:55 compute-1 podman[261120]: 2025-10-02 12:30:55.336364021 +0000 UTC m=+0.186555311 container start 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.340 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.344 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408255.2823753, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.345 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Resumed (Lifecycle Event)
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.363 2 INFO nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 9.38 seconds to spawn the instance on the hypervisor.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.364 2 DEBUG nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:55 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : New worker (261141) forked
Oct 02 12:30:55 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : Loading success.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.365 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.374 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.411 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.435 2 INFO nova.compute.manager [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 11.08 seconds to build instance.
Oct 02 12:30:55 compute-1 nova_compute[230518]: 2025-10-02 12:30:55.451 2 DEBUG oslo_concurrency.lockutils [None req-b26fac24-5c89-4b6e-b8e7-6762afe4dae4 eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:30:56 compute-1 nova_compute[230518]: 2025-10-02 12:30:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2281957984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:30:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:56.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:56 compute-1 podman[261151]: 2025-10-02 12:30:56.827588574 +0000 UTC m=+0.077281332 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:30:56 compute-1 podman[261150]: 2025-10-02 12:30:56.840635214 +0000 UTC m=+0.086269424 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:30:57 compute-1 ceph-mon[80926]: pgmap v1625: 305 pgs: 305 active+clean; 165 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 12:30:57 compute-1 nova_compute[230518]: 2025-10-02 12:30:57.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:57 compute-1 nova_compute[230518]: 2025-10-02 12:30:57.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-1 NetworkManager[44960]: <info>  [1759408258.2870] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct 02 12:30:58 compute-1 NetworkManager[44960]: <info>  [1759408258.2885] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.328 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.329 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:30:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:30:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:58 compute-1 ceph-mon[80926]: pgmap v1626: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Oct 02 12:30:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:30:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:30:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:30:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-1 ovn_controller[129257]: 2025-10-02T12:30:58Z|00320|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-1 nova_compute[230518]: 2025-10-02 12:30:58.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:30:59 compute-1 ovn_controller[129257]: 2025-10-02T12:30:59Z|00321|binding|INFO|Releasing lport e11ef0cd-0cee-4792-8183-b5f339fd3fb3 from this chassis (sb_readonly=0)
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:30:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:30:59.293 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.770 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.798 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.799 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.822 2 DEBUG nova.compute.manager [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.822 2 DEBUG nova.compute.manager [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing instance network info cache due to event network-changed-4a337dd1-6b9e-4a79-893f-de7400180a58. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:30:59 compute-1 nova_compute[230518]: 2025-10-02 12:30:59.823 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Refreshing network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:00.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:00.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:00 compute-1 ceph-mon[80926]: pgmap v1627: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 177 op/s
Oct 02 12:31:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:02.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.472 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updated VIF entry in instance network info cache for port 4a337dd1-6b9e-4a79-893f-de7400180a58. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.472 2 DEBUG nova.network.neutron [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [{"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:02 compute-1 ceph-mon[80926]: pgmap v1628: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 272 op/s
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.753 2 DEBUG oslo_concurrency.lockutils [req-a5148dee-5198-47ce-af58-fbdd0274a728 req-0d0bcedb-b864-40c5-adb6-7739f47814c8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-86aa4a20-96fe-4862-a0c5-04ad92e40f1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.960 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408247.9589188, 60c7cce3-3461-4cad-a135-46f35e607214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.960 2 INFO nova.compute.manager [-] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] VM Stopped (Lifecycle Event)
Oct 02 12:31:02 compute-1 nova_compute[230518]: 2025-10-02 12:31:02.998 2 DEBUG nova.compute.manager [None req-fe655adc-bf9c-45e6-b590-6bdcbadf03f9 - - - - - -] [instance: 60c7cce3-3461-4cad-a135-46f35e607214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:04.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:04.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:05 compute-1 ceph-mon[80926]: pgmap v1629: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 167 KiB/s wr, 196 op/s
Oct 02 12:31:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:31:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/633304687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:31:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:06.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:07 compute-1 ceph-mon[80926]: pgmap v1630: 305 pgs: 305 active+clean; 134 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 175 op/s
Oct 02 12:31:07 compute-1 nova_compute[230518]: 2025-10-02 12:31:07.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:07 compute-1 nova_compute[230518]: 2025-10-02 12:31:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:08.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:09 compute-1 ceph-mon[80926]: pgmap v1631: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 197 KiB/s wr, 163 op/s
Oct 02 12:31:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/19304654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:11 compute-1 ceph-mon[80926]: pgmap v1632: 305 pgs: 305 active+clean; 134 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 170 KiB/s wr, 108 op/s
Oct 02 12:31:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2768776911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:12.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:12 compute-1 nova_compute[230518]: 2025-10-02 12:31:12.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:12 compute-1 ceph-mon[80926]: pgmap v1633: 305 pgs: 305 active+clean; 156 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct 02 12:31:12 compute-1 nova_compute[230518]: 2025-10-02 12:31:12.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:13 compute-1 ovn_controller[129257]: 2025-10-02T12:31:13Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:a3:65 10.100.0.10
Oct 02 12:31:13 compute-1 ovn_controller[129257]: 2025-10-02T12:31:13Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:a3:65 10.100.0.10
Oct 02 12:31:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:14.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:15 compute-1 ceph-mon[80926]: pgmap v1634: 305 pgs: 305 active+clean; 165 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 405 KiB/s rd, 3.2 MiB/s wr, 52 op/s
Oct 02 12:31:15 compute-1 nova_compute[230518]: 2025-10-02 12:31:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:15 compute-1 nova_compute[230518]: 2025-10-02 12:31:15.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:31:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:16 compute-1 podman[261194]: 2025-10-02 12:31:16.81794803 +0000 UTC m=+0.058433790 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:31:16 compute-1 podman[261193]: 2025-10-02 12:31:16.845382063 +0000 UTC m=+0.093553194 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:31:17 compute-1 ceph-mon[80926]: pgmap v1635: 305 pgs: 305 active+clean; 177 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 4.1 MiB/s wr, 71 op/s
Oct 02 12:31:17 compute-1 nova_compute[230518]: 2025-10-02 12:31:17.460 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:31:17 compute-1 nova_compute[230518]: 2025-10-02 12:31:17.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:17 compute-1 nova_compute[230518]: 2025-10-02 12:31:17.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:18 compute-1 ceph-mon[80926]: pgmap v1636: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 704 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct 02 12:31:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:20 compute-1 ceph-mon[80926]: pgmap v1637: 305 pgs: 305 active+clean; 191 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 700 KiB/s rd, 4.0 MiB/s wr, 111 op/s
Oct 02 12:31:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:22.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:22.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:22 compute-1 nova_compute[230518]: 2025-10-02 12:31:22.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:22 compute-1 nova_compute[230518]: 2025-10-02 12:31:22.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:23 compute-1 ceph-mon[80926]: pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 4.1 MiB/s wr, 129 op/s
Oct 02 12:31:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:24.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:24 compute-1 ceph-mon[80926]: pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct 02 12:31:25 compute-1 nova_compute[230518]: 2025-10-02 12:31:25.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.928 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:25.929 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:26.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:26 compute-1 ceph-mon[80926]: pgmap v1640: 305 pgs: 305 active+clean; 212 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 1.2 MiB/s wr, 95 op/s
Oct 02 12:31:27 compute-1 nova_compute[230518]: 2025-10-02 12:31:27.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:27 compute-1 podman[261239]: 2025-10-02 12:31:27.813044111 +0000 UTC m=+0.058693549 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:31:27 compute-1 podman[261240]: 2025-10-02 12:31:27.841757894 +0000 UTC m=+0.075911740 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:31:27 compute-1 nova_compute[230518]: 2025-10-02 12:31:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3274566321' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:28.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:29 compute-1 ceph-mon[80926]: pgmap v1641: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct 02 12:31:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/49731173' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:30.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:30 compute-1 ceph-mon[80926]: pgmap v1642: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Oct 02 12:31:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:32 compute-1 nova_compute[230518]: 2025-10-02 12:31:32.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:32 compute-1 ceph-mon[80926]: pgmap v1643: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Oct 02 12:31:32 compute-1 nova_compute[230518]: 2025-10-02 12:31:32.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.487 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.488 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.488 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.489 2 INFO nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Terminating instance
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.490 2 DEBUG nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:31:33 compute-1 kernel: tap4a337dd1-6b (unregistering): left promiscuous mode
Oct 02 12:31:33 compute-1 NetworkManager[44960]: <info>  [1759408293.5566] device (tap4a337dd1-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 ovn_controller[129257]: 2025-10-02T12:31:33Z|00322|binding|INFO|Releasing lport 4a337dd1-6b9e-4a79-893f-de7400180a58 from this chassis (sb_readonly=0)
Oct 02 12:31:33 compute-1 ovn_controller[129257]: 2025-10-02T12:31:33Z|00323|binding|INFO|Setting lport 4a337dd1-6b9e-4a79-893f-de7400180a58 down in Southbound
Oct 02 12:31:33 compute-1 ovn_controller[129257]: 2025-10-02T12:31:33Z|00324|binding|INFO|Removing iface tap4a337dd1-6b ovn-installed in OVS
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.583 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:a3:65 10.100.0.10'], port_security=['fa:16:3e:e1:a3:65 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '86aa4a20-96fe-4862-a0c5-04ad92e40f1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a70008e0fc32481f8ed89060220b28d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9b5e4d5c-c242-45f9-85c8-d58de980e569', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4850e48d-a493-4bb4-bb29-020fdb04c9bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4a337dd1-6b9e-4a79-893f-de7400180a58) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.584 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4a337dd1-6b9e-4a79-893f-de7400180a58 in datapath 41e6c621-c2f2-4fb3-a93d-8eda22ec0438 unbound from our chassis
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.585 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41e6c621-c2f2-4fb3-a93d-8eda22ec0438, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6eae9624-45d2-4bc5-afb9-f424f8984baa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.587 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 namespace which is not needed anymore
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:33 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Oct 02 12:31:33 compute-1 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004a.scope: Consumed 14.528s CPU time.
Oct 02 12:31:33 compute-1 systemd-machined[188247]: Machine qemu-37-instance-0000004a terminated.
Oct 02 12:31:33 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : haproxy version is 2.8.14-c23fe91
Oct 02 12:31:33 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [NOTICE]   (261139) : path to executable is /usr/sbin/haproxy
Oct 02 12:31:33 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [WARNING]  (261139) : Exiting Master process...
Oct 02 12:31:33 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [ALERT]    (261139) : Current worker (261141) exited with code 143 (Terminated)
Oct 02 12:31:33 compute-1 neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438[261135]: [WARNING]  (261139) : All workers exited. Exiting... (0)
Oct 02 12:31:33 compute-1 systemd[1]: libpod-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope: Deactivated successfully.
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.722 2 INFO nova.virt.libvirt.driver [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Instance destroyed successfully.
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.723 2 DEBUG nova.objects.instance [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lazy-loading 'resources' on Instance uuid 86aa4a20-96fe-4862-a0c5-04ad92e40f1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:33 compute-1 podman[261303]: 2025-10-02 12:31:33.728227297 +0000 UTC m=+0.053521455 container died 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:31:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633-userdata-shm.mount: Deactivated successfully.
Oct 02 12:31:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-d688486329266ad986868fb626471e07a89a9808e3f1f1d1725bea2902a120a5-merged.mount: Deactivated successfully.
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.760 2 DEBUG nova.virt.libvirt.vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:30:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-606434201',display_name='tempest-ServersTestJSON-server-606434201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-606434201',id=74,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEOSVA0UqkFnW7YuNQRsY27cJ9a2M1giyjrSsiXcPBlr5myx9iDFttDeAnJV4hFOON30Ktzf3cWWwY2KcF8NnunVbyoieS3+bfZniZlgOmGbNNyoaXEKseU0cJsJiwxrpQ==',key_name='tempest-keypair-2131917532',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:30:55Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a70008e0fc32481f8ed89060220b28d7',ramdisk_id='',reservation_id='r-0450lgko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-128626597',owner_user_name='tempest-ServersTestJSON-128626597-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:30:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eed4fdf8b49f41bfb982bc858fa76bef',uuid=86aa4a20-96fe-4862-a0c5-04ad92e40f1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.761 2 DEBUG nova.network.os_vif_util [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converting VIF {"id": "4a337dd1-6b9e-4a79-893f-de7400180a58", "address": "fa:16:3e:e1:a3:65", "network": {"id": "41e6c621-c2f2-4fb3-a93d-8eda22ec0438", "bridge": "br-int", "label": "tempest-ServersTestJSON-650581243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a70008e0fc32481f8ed89060220b28d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a337dd1-6b", "ovs_interfaceid": "4a337dd1-6b9e-4a79-893f-de7400180a58", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.762 2 DEBUG nova.network.os_vif_util [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.762 2 DEBUG os_vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:31:33 compute-1 podman[261303]: 2025-10-02 12:31:33.763801987 +0000 UTC m=+0.089096145 container cleanup 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a337dd1-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.769 2 INFO os_vif [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:a3:65,bridge_name='br-int',has_traffic_filtering=True,id=4a337dd1-6b9e-4a79-893f-de7400180a58,network=Network(41e6c621-c2f2-4fb3-a93d-8eda22ec0438),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a337dd1-6b')
Oct 02 12:31:33 compute-1 systemd[1]: libpod-conmon-31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633.scope: Deactivated successfully.
Oct 02 12:31:33 compute-1 podman[261344]: 2025-10-02 12:31:33.824444715 +0000 UTC m=+0.039699770 container remove 31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81bc33c0-5835-4b02-be19-ab3b2a10af05]: (4, ('Thu Oct  2 12:31:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 (31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633)\n31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633\nThu Oct  2 12:31:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 (31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633)\n31f6624e1e13ca25ad758626f678602e3de4c2b0af10a751ee91db9471207633\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[145cce40-0bad-4bd1-adc3-2c44a50770c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.833 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41e6c621-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 kernel: tap41e6c621-c0: left promiscuous mode
Oct 02 12:31:33 compute-1 nova_compute[230518]: 2025-10-02 12:31:33.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3ede455e-b1bf-4f74-b51a-55aee4e8ce14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.924 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70545cc7-6cf7-4322-8c09-50857d5bd903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.926 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0bf435-61e4-4a87-be0f-948d7de14aa2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.944 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bea2b7e2-30e1-40c4-bd46-59f0a9554997]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611796, 'reachable_time': 19517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261376, 'error': None, 'target': 'ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.946 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41e6c621-c2f2-4fb3-a93d-8eda22ec0438 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:31:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:33.946 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c296b66-3eef-4d80-82d8-5ef0c4b60fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:33 compute-1 systemd[1]: run-netns-ovnmeta\x2d41e6c621\x2dc2f2\x2d4fb3\x2da93d\x2d8eda22ec0438.mount: Deactivated successfully.
Oct 02 12:31:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:34 compute-1 ceph-mon[80926]: pgmap v1644: 305 pgs: 305 active+clean; 246 MiB data, 743 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.071 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-unplugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG oslo_concurrency.lockutils [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.072 2 DEBUG nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] No waiting events found dispatching network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.073 2 WARNING nova.compute.manager [req-ca9ec256-9112-40b8-b206-707f0aeebbc0 req-7351fc54-af76-4188-805f-0d13c829a0a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received unexpected event network-vif-plugged-4a337dd1-6b9e-4a79-893f-de7400180a58 for instance with vm_state active and task_state deleting.
Oct 02 12:31:35 compute-1 nova_compute[230518]: 2025-10-02 12:31:35.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:36.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.718 2 INFO nova.virt.libvirt.driver [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deleting instance files /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_del
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.718 2 INFO nova.virt.libvirt.driver [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deletion of /var/lib/nova/instances/86aa4a20-96fe-4862-a0c5-04ad92e40f1b_del complete
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.814 2 INFO nova.compute.manager [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 3.32 seconds to destroy the instance on the hypervisor.
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG oslo.service.loopingcall [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:31:36 compute-1 nova_compute[230518]: 2025-10-02 12:31:36.814 2 DEBUG nova.network.neutron [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:31:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 e240: 3 total, 3 up, 3 in
Oct 02 12:31:37 compute-1 ceph-mon[80926]: pgmap v1645: 305 pgs: 305 active+clean; 220 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Oct 02 12:31:37 compute-1 nova_compute[230518]: 2025-10-02 12:31:37.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:38.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:38.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.630 2 DEBUG nova.network.neutron [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.739 2 INFO nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Took 1.92 seconds to deallocate network for instance.
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.804 2 DEBUG nova.compute.manager [req-e9748a35-a894-4f1d-a3ce-5cb66c4972a6 req-5f957771-4f25-463e-b5bb-5b9b42303e6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Received event network-vif-deleted-4a337dd1-6b9e-4a79-893f-de7400180a58 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.847 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.847 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:38 compute-1 ceph-mon[80926]: pgmap v1646: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 117 op/s
Oct 02 12:31:38 compute-1 ceph-mon[80926]: osdmap e240: 3 total, 3 up, 3 in
Oct 02 12:31:38 compute-1 nova_compute[230518]: 2025-10-02 12:31:38.924 2 DEBUG oslo_concurrency.processutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2973912813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.366 2 DEBUG oslo_concurrency.processutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.372 2 DEBUG nova.compute.provider_tree [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.425 2 DEBUG nova.scheduler.client.report [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.508 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.617 2 INFO nova.scheduler.client.report [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Deleted allocations for instance 86aa4a20-96fe-4862-a0c5-04ad92e40f1b
Oct 02 12:31:39 compute-1 nova_compute[230518]: 2025-10-02 12:31:39.739 2 DEBUG oslo_concurrency.lockutils [None req-6772869b-3b0a-49f0-9d1e-778ca5a0f3ba eed4fdf8b49f41bfb982bc858fa76bef a70008e0fc32481f8ed89060220b28d7 - - default default] Lock "86aa4a20-96fe-4862-a0c5-04ad92e40f1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2118957239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2973912813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2940299997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:40.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:40.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:41 compute-1 ceph-mon[80926]: pgmap v1648: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 110 op/s
Oct 02 12:31:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:42.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:42 compute-1 nova_compute[230518]: 2025-10-02 12:31:42.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:43 compute-1 ceph-mon[80926]: pgmap v1649: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 145 op/s
Oct 02 12:31:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:43 compute-1 nova_compute[230518]: 2025-10-02 12:31:43.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.002 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.002 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.019 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.141 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.141 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.148 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.148 2 INFO nova.compute.claims [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.252 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:44.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:44 compute-1 ceph-mon[80926]: pgmap v1650: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 19 KiB/s wr, 151 op/s
Oct 02 12:31:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:44.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3767337922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.693 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.701 2 DEBUG nova.compute.provider_tree [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.735 2 DEBUG nova.scheduler.client.report [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.759 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.760 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.817 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.817 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.844 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:31:44 compute-1 nova_compute[230518]: 2025-10-02 12:31:44.873 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.038 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.040 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.040 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating image(s)
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.072 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.112 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.150 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.156 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.202 2 DEBUG nova.policy [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.248 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.249 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.249 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.250 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.282 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.287 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3e490470-5e33-4140-95c1-367805364c73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3767337922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/231615588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:45 compute-1 nova_compute[230518]: 2025-10-02 12:31:45.938 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3e490470-5e33-4140-95c1-367805364c73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.049 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 3e490470-5e33-4140-95c1-367805364c73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.109 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Successfully created port: a3bd0009-d256-4937-bdad-606abfd076e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.277 2 DEBUG nova.objects.instance [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.293 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.293 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ensure instance console log exists: /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.294 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.295 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:46 compute-1 nova_compute[230518]: 2025-10-02 12:31:46.295 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:46.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:46.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:46 compute-1 ceph-mon[80926]: pgmap v1651: 305 pgs: 305 active+clean; 167 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 KiB/s wr, 177 op/s
Oct 02 12:31:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3674039429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 e241: 3 total, 3 up, 3 in
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.000 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Successfully updated port: a3bd0009-d256-4937-bdad-606abfd076e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:31:47 compute-1 sudo[261589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:47 compute-1 sudo[261589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-1 sudo[261589]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.019 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.019 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.020 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG nova.compute.manager [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG nova.compute.manager [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing instance network info cache due to event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.082 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:47 compute-1 sudo[261621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:31:47 compute-1 sudo[261621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-1 sudo[261621]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-1 podman[261614]: 2025-10-02 12:31:47.106272093 +0000 UTC m=+0.074239557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.159 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:31:47 compute-1 podman[261613]: 2025-10-02 12:31:47.160741346 +0000 UTC m=+0.137479146 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:31:47 compute-1 sudo[261678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:47 compute-1 sudo[261678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-1 sudo[261678]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-1 sudo[261709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:31:47 compute-1 sudo[261709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:47 compute-1 sudo[261709]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:47 compute-1 nova_compute[230518]: 2025-10-02 12:31:47.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:48 compute-1 ceph-mon[80926]: osdmap e241: 3 total, 3 up, 3 in
Oct 02 12:31:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3148228124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.356 2 DEBUG nova.network.neutron [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.379 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.379 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance network_info: |[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.380 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.380 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.383 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.392 2 WARNING nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.407 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.409 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.419 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.420 2 DEBUG nova.virt.libvirt.host [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.422 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:31:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.423 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.424 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.424 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.425 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:31:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999974s ======
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:31:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:48.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999974s
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.426 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.427 2 DEBUG nova.virt.hardware [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.431 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.471 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.472 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.498 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.499 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.721 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408293.7195008, 86aa4a20-96fe-4862-a0c5-04ad92e40f1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.722 2 INFO nova.compute.manager [-] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] VM Stopped (Lifecycle Event)
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.753 2 DEBUG nova.compute.manager [None req-a49466f4-aa5d-4b9c-871e-2cf5c6fdf966 - - - - - -] [instance: 86aa4a20-96fe-4862-a0c5-04ad92e40f1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4138466693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.881 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.926 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:48 compute-1 nova_compute[230518]: 2025-10-02 12:31:48.932 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-1 ceph-mon[80926]: pgmap v1653: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:31:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:31:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:31:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4138466693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2173979604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.317 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.319 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4555MB free_disk=20.910640716552734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.319 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.320 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.399 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.400 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.400 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:31:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:31:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3674890317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.434 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.437 2 DEBUG nova.virt.libvirt.vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.437 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.439 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.441 2 DEBUG nova.objects.instance [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.443 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.501 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <name>instance-0000004d</name>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:31:48</nova:creationTime>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct 02 12:31:49 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <system>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </system>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <os>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </os>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <features>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </features>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:31:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:e8:97"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <target dev="tapa3bd0009-d2"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <video>
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </video>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:31:49 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:31:49 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:31:49 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:31:49 compute-1 nova_compute[230518]: </domain>
Oct 02 12:31:49 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.503 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Preparing to wait for external event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.504 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.505 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.505 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.507 2 DEBUG nova.virt.libvirt.vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.507 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.509 2 DEBUG nova.network.os_vif_util [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.509 2 DEBUG os_vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.513 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.520 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.521 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-1 NetworkManager[44960]: <info>  [1759408309.5254] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.535 2 INFO os_vif [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.604 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.606 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.606 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:7b:e8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.608 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Using config drive
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.640 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:31:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/443590527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.887 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.894 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.908 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.981 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated VIF entry in instance network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:31:49 compute-1 nova_compute[230518]: 2025-10-02 12:31:49.982 2 DEBUG nova.network.neutron [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.003 2 DEBUG oslo_concurrency.lockutils [req-4ffd7696-a2b1-42ba-aee4-93aec29f8fad req-f9497efa-2c74-418a-ba23-0c9931615c44 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:31:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2173979604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3674890317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/443590527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.104 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating config drive at /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.113 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparxrg3gf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.274 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmparxrg3gf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.302 2 DEBUG nova.storage.rbd_utils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 3e490470-5e33-4140-95c1-367805364c73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.306 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config 3e490470-5e33-4140-95c1-367805364c73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:31:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:31:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:31:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.513 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.895 2 DEBUG oslo_concurrency.processutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config 3e490470-5e33-4140-95c1-367805364c73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.896 2 INFO nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deleting local config drive /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/disk.config because it was imported into RBD.
Oct 02 12:31:50 compute-1 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct 02 12:31:50 compute-1 nova_compute[230518]: 2025-10-02 12:31:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:50 compute-1 NetworkManager[44960]: <info>  [1759408310.9654] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Oct 02 12:31:50 compute-1 ovn_controller[129257]: 2025-10-02T12:31:50Z|00325|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct 02 12:31:50 compute-1 ovn_controller[129257]: 2025-10-02T12:31:50Z|00326|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:31:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.977 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.980 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:31:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:50.983 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:31:50 compute-1 systemd-udevd[261948]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:51 compute-1 systemd-machined[188247]: New machine qemu-38-instance-0000004d.
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.004 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45b2a838-0724-480f-8e22-23ccd894c997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.006 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.008 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.008 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d07694cb-d7d1-4bf2-adf4-35cb3255041c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6392a7f3-3aa4-49fe-b1eb-587bfc9fda39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 NetworkManager[44960]: <info>  [1759408311.0158] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:31:51 compute-1 NetworkManager[44960]: <info>  [1759408311.0170] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:31:51 compute-1 systemd[1]: Started Virtual Machine qemu-38-instance-0000004d.
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.027 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1fe872-9d3a-42f6-be3f-fc69ab797b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:51 compute-1 ovn_controller[129257]: 2025-10-02T12:31:51Z|00327|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct 02 12:31:51 compute-1 ovn_controller[129257]: 2025-10-02T12:31:51Z|00328|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.094 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65dac60a-e3ef-4d30-86af-341512de82cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 ceph-mon[80926]: pgmap v1654: 305 pgs: 305 active+clean; 189 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 124 op/s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.127 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c45da37-bcb8-4cbc-8d93-360e39476ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.133 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a326a04e-de2d-4121-8b03-05b1feb49599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 systemd-udevd[261953]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:31:51 compute-1 NetworkManager[44960]: <info>  [1759408311.1342] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.167 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf38de8-f634-4d95-a693-bec2d0d16170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.170 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[626d386e-6d5c-4d3c-a231-fa3298cb95d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 NetworkManager[44960]: <info>  [1759408311.1905] device (tapf011efa4-00): carrier: link connected
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.195 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa4d261-90f9-45e7-a95a-d87b17435167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4048fc72-9ad2-4f0b-b93e-ee193b358869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617474, 'reachable_time': 40178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261982, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13d74ed5-a9b8-49bb-8194-d8933907ef86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617474, 'tstamp': 617474}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261983, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.244 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e43e7b7d-5fb6-4031-abf2-a1e84124bb30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617474, 'reachable_time': 40178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261984, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.289 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05381acf-99e5-40b2-826b-bd6eb4d69128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG nova.compute.manager [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.324 2 DEBUG oslo_concurrency.lockutils [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.325 2 DEBUG nova.compute.manager [req-79b8fbdb-228a-4adc-83a5-e25e5e95637b req-83fead4b-de44-49aa-8c8e-491a924ea3f2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Processing event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[993eca3a-38d6-4961-9a0e-e90fc4607548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.351 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.352 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.352 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 NetworkManager[44960]: <info>  [1759408311.3545] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct 02 12:31:51 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.357 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 ovn_controller[129257]: 2025-10-02T12:31:51Z|00329|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.361 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.361 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad29b4f-c853-402a-9683-a2ce7573a863]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.362 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:31:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:51.363 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:51 compute-1 podman[262057]: 2025-10-02 12:31:51.71851113 +0000 UTC m=+0.068587599 container create 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:31:51 compute-1 systemd[1]: Started libpod-conmon-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope.
Oct 02 12:31:51 compute-1 podman[262057]: 2025-10-02 12:31:51.679953207 +0000 UTC m=+0.030029666 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:31:51 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:31:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/849311bb380bc31d9f4d64ffecfbd0206e5d4571354752fcb4fe618ec50db739/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:31:51 compute-1 podman[262057]: 2025-10-02 12:31:51.834232901 +0000 UTC m=+0.184309390 container init 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:31:51 compute-1 podman[262057]: 2025-10-02 12:31:51.840049835 +0000 UTC m=+0.190126274 container start 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.864 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8636868, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.864 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.866 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:31:51 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : New worker (262078) forked
Oct 02 12:31:51 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : Loading success.
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.871 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.875 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance spawned successfully.
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.875 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.903 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.905 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.905 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.906 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.907 2 DEBUG nova.virt.libvirt.driver [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.950 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.951 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8661175, 3e490470-5e33-4140-95c1-367805364c73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.951 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Paused (Lifecycle Event)
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.978 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.984 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408311.8701344, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.984 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.995 2 INFO nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 6.96 seconds to spawn the instance on the hypervisor.
Oct 02 12:31:51 compute-1 nova_compute[230518]: 2025-10-02 12:31:51.995 2 DEBUG nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.005 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.008 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.043 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.079 2 INFO nova.compute.manager [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 8.00 seconds to build instance.
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.094 2 DEBUG oslo_concurrency.lockutils [None req-48e065fe-bfa0-481a-b43f-1c2dba0a43a8 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:52 compute-1 nova_compute[230518]: 2025-10-02 12:31:52.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:53 compute-1 nova_compute[230518]: 2025-10-02 12:31:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:53 compute-1 ceph-mon[80926]: pgmap v1655: 305 pgs: 305 active+clean; 243 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 195 op/s
Oct 02 12:31:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/660504455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.211 2 DEBUG nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.212 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.212 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.213 2 DEBUG oslo_concurrency.lockutils [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.213 2 DEBUG nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.214 2 WARNING nova.compute.manager [req-1e206570-3eee-427a-b78a-8a197436003e req-9c3d6f9e-dc1f-456d-abb1-69348ddc196e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:31:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/718085045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:54 compute-1 nova_compute[230518]: 2025-10-02 12:31:54.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:55 compute-1 NetworkManager[44960]: <info>  [1759408315.1765] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Oct 02 12:31:55 compute-1 NetworkManager[44960]: <info>  [1759408315.1776] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Oct 02 12:31:55 compute-1 nova_compute[230518]: 2025-10-02 12:31:55.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:55 compute-1 ceph-mon[80926]: pgmap v1656: 305 pgs: 305 active+clean; 246 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 192 op/s
Oct 02 12:31:55 compute-1 nova_compute[230518]: 2025-10-02 12:31:55.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:55 compute-1 ovn_controller[129257]: 2025-10-02T12:31:55Z|00330|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:31:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 e242: 3 total, 3 up, 3 in
Oct 02 12:31:55 compute-1 nova_compute[230518]: 2025-10-02 12:31:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:55 compute-1 sudo[262088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:31:55 compute-1 sudo[262088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-1 sudo[262088]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:55 compute-1 sudo[262113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:31:55 compute-1 sudo[262113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:31:55 compute-1 sudo[262113]: pam_unix(sudo:session): session closed for user root
Oct 02 12:31:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:31:56 compute-1 ceph-mon[80926]: pgmap v1657: 305 pgs: 305 active+clean; 203 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 186 op/s
Oct 02 12:31:56 compute-1 ceph-mon[80926]: osdmap e242: 3 total, 3 up, 3 in
Oct 02 12:31:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1214102434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:31:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:56.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:31:56 compute-1 nova_compute[230518]: 2025-10-02 12:31:56.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:56.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:57 compute-1 nova_compute[230518]: 2025-10-02 12:31:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:57.076 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:31:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:31:57.077 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:31:57 compute-1 nova_compute[230518]: 2025-10-02 12:31:57.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:57 compute-1 nova_compute[230518]: 2025-10-02 12:31:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:31:58 compute-1 ceph-mon[80926]: pgmap v1659: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:31:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1408219479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:31:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:31:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:31:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:31:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:31:58.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:31:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:31:58 compute-1 nova_compute[230518]: 2025-10-02 12:31:58.695 2 DEBUG nova.compute.manager [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:31:58 compute-1 nova_compute[230518]: 2025-10-02 12:31:58.695 2 DEBUG nova.compute.manager [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing instance network info cache due to event network-changed-a3bd0009-d256-4937-bdad-606abfd076e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:31:58 compute-1 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:58 compute-1 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:31:58 compute-1 nova_compute[230518]: 2025-10-02 12:31:58.696 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Refreshing network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:31:58 compute-1 podman[262139]: 2025-10-02 12:31:58.82638087 +0000 UTC m=+0.066930827 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:31:58 compute-1 podman[262138]: 2025-10-02 12:31:58.839079209 +0000 UTC m=+0.090568661 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:31:59 compute-1 nova_compute[230518]: 2025-10-02 12:31:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:31:59 compute-1 nova_compute[230518]: 2025-10-02 12:31:59.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:31:59 compute-1 nova_compute[230518]: 2025-10-02 12:31:59.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:31:59 compute-1 nova_compute[230518]: 2025-10-02 12:31:59.408 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:31:59 compute-1 nova_compute[230518]: 2025-10-02 12:31:59.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:00 compute-1 ceph-mon[80926]: pgmap v1660: 305 pgs: 305 active+clean; 167 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 234 op/s
Oct 02 12:32:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.243 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated VIF entry in instance network info cache for port a3bd0009-d256-4937-bdad-606abfd076e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.244 2 DEBUG nova.network.neutron [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.272 2 DEBUG oslo_concurrency.lockutils [req-bd9ecba8-c247-4f62-9af0-63e6e566b0a9 req-24143d35-4447-452c-8199-566d0aa45396 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:32:01 compute-1 nova_compute[230518]: 2025-10-02 12:32:01.273 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:02.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:02.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:02 compute-1 ceph-mon[80926]: pgmap v1661: 305 pgs: 305 active+clean; 192 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 134 op/s
Oct 02 12:32:02 compute-1 nova_compute[230518]: 2025-10-02 12:32:02.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:04.079 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:32:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:32:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:04.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:04 compute-1 nova_compute[230518]: 2025-10-02 12:32:04.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:04 compute-1 ceph-mon[80926]: pgmap v1662: 305 pgs: 305 active+clean; 213 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:32:04 compute-1 ovn_controller[129257]: 2025-10-02T12:32:04Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:32:04 compute-1 ovn_controller[129257]: 2025-10-02T12:32:04Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:32:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:32:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:32:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:32:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/892827367' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:32:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:06.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:06 compute-1 nova_compute[230518]: 2025-10-02 12:32:06.722 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:06 compute-1 nova_compute[230518]: 2025-10-02 12:32:06.768 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:06 compute-1 nova_compute[230518]: 2025-10-02 12:32:06.769 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:32:07 compute-1 ceph-mon[80926]: pgmap v1663: 305 pgs: 305 active+clean; 227 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Oct 02 12:32:07 compute-1 nova_compute[230518]: 2025-10-02 12:32:07.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:07 compute-1 nova_compute[230518]: 2025-10-02 12:32:07.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:09 compute-1 ceph-mon[80926]: pgmap v1664: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 MiB/s wr, 151 op/s
Oct 02 12:32:09 compute-1 nova_compute[230518]: 2025-10-02 12:32:09.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:10.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:10 compute-1 ceph-mon[80926]: pgmap v1665: 305 pgs: 305 active+clean; 242 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 3.9 MiB/s wr, 83 op/s
Oct 02 12:32:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2211173841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3524789732' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:12.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:12.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:12 compute-1 nova_compute[230518]: 2025-10-02 12:32:12.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.040 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.041 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.042 2 INFO nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Rebooting instance
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.058 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.059 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:13 compute-1 nova_compute[230518]: 2025-10-02 12:32:13.060 2 DEBUG nova.network.neutron [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:13 compute-1 ceph-mon[80926]: pgmap v1666: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:32:14 compute-1 nova_compute[230518]: 2025-10-02 12:32:14.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:14.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:14 compute-1 nova_compute[230518]: 2025-10-02 12:32:14.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:15 compute-1 ceph-mon[80926]: pgmap v1667: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.9 MiB/s wr, 84 op/s
Oct 02 12:32:15 compute-1 nova_compute[230518]: 2025-10-02 12:32:15.766 2 DEBUG nova.network.neutron [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:15 compute-1 nova_compute[230518]: 2025-10-02 12:32:15.788 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:15 compute-1 nova_compute[230518]: 2025-10-02 12:32:15.791 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:16 compute-1 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct 02 12:32:16 compute-1 NetworkManager[44960]: <info>  [1759408336.4272] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:16 compute-1 ovn_controller[129257]: 2025-10-02T12:32:16Z|00331|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct 02 12:32:16 compute-1 ovn_controller[129257]: 2025-10-02T12:32:16Z|00332|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct 02 12:32:16 compute-1 ovn_controller[129257]: 2025-10-02T12:32:16Z|00333|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.458 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.461 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.466 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[94655b03-a2ac-4596-b6c9-afd6e96f6c21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.469 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:16 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 12:32:16 compute-1 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004d.scope: Consumed 13.895s CPU time.
Oct 02 12:32:16 compute-1 systemd-machined[188247]: Machine qemu-38-instance-0000004d terminated.
Oct 02 12:32:16 compute-1 ceph-mon[80926]: pgmap v1668: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Oct 02 12:32:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:16.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.546 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.547 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.565 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.566 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.567 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.567 2 DEBUG os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.581 2 INFO os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.589 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.592 2 WARNING nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.597 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.598 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.601 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.601 2 DEBUG nova.virt.libvirt.host [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.603 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.603 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.604 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.604 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.605 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.606 2 DEBUG nova.virt.hardware [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.607 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.626 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:16 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : haproxy version is 2.8.14-c23fe91
Oct 02 12:32:16 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [NOTICE]   (262076) : path to executable is /usr/sbin/haproxy
Oct 02 12:32:16 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [WARNING]  (262076) : Exiting Master process...
Oct 02 12:32:16 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [ALERT]    (262076) : Current worker (262078) exited with code 143 (Terminated)
Oct 02 12:32:16 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262072]: [WARNING]  (262076) : All workers exited. Exiting... (0)
Oct 02 12:32:16 compute-1 systemd[1]: libpod-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope: Deactivated successfully.
Oct 02 12:32:16 compute-1 podman[262213]: 2025-10-02 12:32:16.667641431 +0000 UTC m=+0.065721130 container died 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:16 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:32:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-849311bb380bc31d9f4d64ffecfbd0206e5d4571354752fcb4fe618ec50db739-merged.mount: Deactivated successfully.
Oct 02 12:32:16 compute-1 podman[262213]: 2025-10-02 12:32:16.723157417 +0000 UTC m=+0.121237086 container cleanup 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:32:16 compute-1 systemd[1]: libpod-conmon-3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a.scope: Deactivated successfully.
Oct 02 12:32:16 compute-1 podman[262245]: 2025-10-02 12:32:16.810589188 +0000 UTC m=+0.055941281 container remove 3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.818 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f044c88a-6fc8-451b-914f-a94fd6d8cb42]: (4, ('Thu Oct  2 12:32:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a)\n3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a\nThu Oct  2 12:32:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a)\n3cc6565e258c2ee42e8592ed38cf5554ffe3b3576d6303e27e3704078fd4351a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[28ed2b1e-4ac6-4441-8ddd-901e60d197de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:16 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.860 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6534ad7b-de91-40d4-9766-63df5cd64c0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 nova_compute[230518]: 2025-10-02 12:32:16.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.891 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab9725e-7367-4a46-9893-aed74b8a2b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4775a2-fd77-48a0-9719-bdf292ad4a75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.920 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0b1349-dceb-46f2-a611-b4cfbe64da2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617468, 'reachable_time': 36269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262279, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:16 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.927 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:32:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:16.928 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8896ad-df40-4b49-9198-89a43487bba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2350958497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.099 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.132 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.366 2 DEBUG nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.367 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.368 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.369 2 DEBUG oslo_concurrency.lockutils [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.369 2 DEBUG nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.369 2 WARNING nova.compute.manager [req-679fe4d6-f105-43e1-9589-f45da3a5f3bd req-0ffca14f-58bd-4b4d-977f-7e379310b4b6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:32:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3187127440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.549 2 DEBUG oslo_concurrency.processutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.550 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.551 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.552 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.554 2 DEBUG nova.objects.instance [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.573 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <name>instance-0000004d</name>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:32:16</nova:creationTime>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct 02 12:32:17 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <system>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </system>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <os>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </os>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <features>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </features>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </source>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </source>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:32:17 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:e8:97"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <target dev="tapa3bd0009-d2"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <video>
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </video>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:32:17 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:32:17 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:32:17 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:32:17 compute-1 nova_compute[230518]: </domain>
Oct 02 12:32:17 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.576 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.577 2 DEBUG nova.virt.libvirt.driver [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.579 2 DEBUG nova.virt.libvirt.vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.579 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.581 2 DEBUG nova.network.os_vif_util [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.582 2 DEBUG os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.589 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:17 compute-1 NetworkManager[44960]: <info>  [1759408337.5940] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.600 2 INFO os_vif [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:32:17 compute-1 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct 02 12:32:17 compute-1 NetworkManager[44960]: <info>  [1759408337.7191] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Oct 02 12:32:17 compute-1 ovn_controller[129257]: 2025-10-02T12:32:17Z|00334|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct 02 12:32:17 compute-1 ovn_controller[129257]: 2025-10-02T12:32:17Z|00335|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 systemd-udevd[262184]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:17 compute-1 podman[262325]: 2025-10-02 12:32:17.729692387 +0000 UTC m=+0.082590350 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:17 compute-1 NetworkManager[44960]: <info>  [1759408337.7490] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:17 compute-1 NetworkManager[44960]: <info>  [1759408337.7505] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:17 compute-1 ovn_controller[129257]: 2025-10-02T12:32:17Z|00336|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 systemd-machined[188247]: New machine qemu-39-instance-0000004d.
Oct 02 12:32:17 compute-1 systemd[1]: Started Virtual Machine qemu-39-instance-0000004d.
Oct 02 12:32:17 compute-1 podman[262324]: 2025-10-02 12:32:17.793570677 +0000 UTC m=+0.155468093 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:17 compute-1 ovn_controller[129257]: 2025-10-02T12:32:17Z|00337|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.862 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.864 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.867 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:17 compute-1 nova_compute[230518]: 2025-10-02 12:32:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.885 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26885a82-8604-4682-bdae-e58733218198]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.887 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.890 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.890 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfa53e5-c4dd-4963-b510-23fa41a14ca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.892 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ba245f-d9b5-4285-83db-46212a27df49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.910 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3b6eb1-c11e-4d90-b3d2-6647ab7b0de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2350958497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3187127440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.943 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5081aae8-c5f7-4795-9b3f-e5dbc5e4fa70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:17.991 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1566ad93-26ef-4c64-970f-ff28f121885d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.002 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4da4fd09-d29f-4f32-a1b8-ed22ad372805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 NetworkManager[44960]: <info>  [1759408338.0039] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.057 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3849223-2dcf-43d7-b35f-fb9ad8d18849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.062 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5590dca6-8587-441d-87be-bd964d187ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 NetworkManager[44960]: <info>  [1759408338.0961] device (tapf011efa4-00): carrier: link connected
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.103 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d0b94d-8536-4a0b-95cc-d51bc9280546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.126 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e8e6cc9-45a1-40c2-9e42-55f35412faf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620165, 'reachable_time': 36988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262419, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bf0ef8-2796-4b71-ac13-108eb4c577cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620165, 'tstamp': 620165}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262420, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.183 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f18423e3-1023-47bd-ab49-7a510e368627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 100], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620165, 'reachable_time': 36988, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262429, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f29fa9-bf94-4910-b5d4-a40d9ec975e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f101f1-c5c8-4bbe-8002-18b3dca09512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.325 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.326 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:18 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:32:18 compute-1 nova_compute[230518]: 2025-10-02 12:32:18.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-1 NetworkManager[44960]: <info>  [1759408338.3303] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct 02 12:32:18 compute-1 nova_compute[230518]: 2025-10-02 12:32:18.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.337 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:18 compute-1 nova_compute[230518]: 2025-10-02 12:32:18.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-1 ovn_controller[129257]: 2025-10-02T12:32:18Z|00338|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:32:18 compute-1 nova_compute[230518]: 2025-10-02 12:32:18.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.367 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[36c74f14-fb24-4b89-a9a6-d23c14e5ac6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.370 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:18.371 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:32:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.003999965s ======
Oct 02 12:32:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:18.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003999965s
Oct 02 12:32:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:18.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:18 compute-1 podman[262464]: 2025-10-02 12:32:18.821175768 +0000 UTC m=+0.077054064 container create 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:18 compute-1 systemd[1]: Started libpod-conmon-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope.
Oct 02 12:32:18 compute-1 podman[262464]: 2025-10-02 12:32:18.790426141 +0000 UTC m=+0.046304467 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:32:18 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:32:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9437f9d440afe338a712b53f4be0d5ad2304ed341c56636f129f0dcfa47d05/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:18 compute-1 podman[262464]: 2025-10-02 12:32:18.917174479 +0000 UTC m=+0.173052815 container init 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:32:18 compute-1 podman[262464]: 2025-10-02 12:32:18.927529435 +0000 UTC m=+0.183407731 container start 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:32:18 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : New worker (262509) forked
Oct 02 12:32:18 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : Loading success.
Oct 02 12:32:19 compute-1 ceph-mon[80926]: pgmap v1669: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 87 op/s
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.417 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.418 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408339.4169931, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.418 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.420 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.423 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.423 2 DEBUG nova.compute.manager [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.446 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.449 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.487 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.487 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408339.4182177, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.488 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.493 2 DEBUG oslo_concurrency.lockutils [None req-2d27b10e-12c9-49a4-8431-825f64ba5db4 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.508 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.509 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.510 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.510 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.511 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.511 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.512 2 DEBUG oslo_concurrency.lockutils [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.512 2 DEBUG nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.512 2 WARNING nova.compute.manager [req-70ff90a9-47cd-4fa9-9174-40ec62e88aca req-5df82061-f41b-404e-9c8b-3efea0c5501c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.517 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.520 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:19 compute-1 nova_compute[230518]: 2025-10-02 12:32:19.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:20.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:20 compute-1 nova_compute[230518]: 2025-10-02 12:32:20.657 2 INFO nova.compute.manager [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Get console output
Oct 02 12:32:20 compute-1 nova_compute[230518]: 2025-10-02 12:32:20.662 2 INFO oslo.privsep.daemon [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpq1o9xy_t/privsep.sock']
Oct 02 12:32:20 compute-1 ceph-mon[80926]: pgmap v1670: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 973 KiB/s rd, 37 KiB/s wr, 50 op/s
Oct 02 12:32:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2908620508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:21 compute-1 nova_compute[230518]: 2025-10-02 12:32:21.420 2 INFO oslo.privsep.daemon [None req-0db206ec-f07e-4c6b-bab7-4f0e6de41099 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Spawned new privsep daemon via rootwrap
Oct 02 12:32:21 compute-1 nova_compute[230518]: 2025-10-02 12:32:21.297 13161 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 02 12:32:21 compute-1 nova_compute[230518]: 2025-10-02 12:32:21.300 13161 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 02 12:32:21 compute-1 nova_compute[230518]: 2025-10-02 12:32:21.302 13161 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 02 12:32:21 compute-1 nova_compute[230518]: 2025-10-02 12:32:21.302 13161 INFO oslo.privsep.daemon [-] privsep daemon running as pid 13161
Oct 02 12:32:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:32:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:32:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:32:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:32:22 compute-1 nova_compute[230518]: 2025-10-02 12:32:22.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:22 compute-1 nova_compute[230518]: 2025-10-02 12:32:22.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:23 compute-1 ceph-mon[80926]: pgmap v1671: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 44 KiB/s wr, 107 op/s
Oct 02 12:32:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:24.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:25 compute-1 ceph-mon[80926]: pgmap v1672: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 117 op/s
Oct 02 12:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.929 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.930 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:26 compute-1 ceph-mon[80926]: pgmap v1673: 305 pgs: 305 active+clean; 247 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 36 KiB/s wr, 135 op/s
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.917 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.920 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.921 2 DEBUG nova.objects.instance [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:27 compute-1 nova_compute[230518]: 2025-10-02 12:32:27.947 2 DEBUG nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:32:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:29 compute-1 ceph-mon[80926]: pgmap v1674: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 704 KiB/s wr, 155 op/s
Oct 02 12:32:29 compute-1 podman[262524]: 2025-10-02 12:32:29.81615856 +0000 UTC m=+0.061268578 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:29 compute-1 podman[262525]: 2025-10-02 12:32:29.820210058 +0000 UTC m=+0.064635195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:32:30 compute-1 ceph-mon[80926]: pgmap v1675: 305 pgs: 305 active+clean; 250 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 680 KiB/s wr, 116 op/s
Oct 02 12:32:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:31 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct 02 12:32:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:32 compute-1 nova_compute[230518]: 2025-10-02 12:32:32.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:32 compute-1 ceph-mon[80926]: pgmap v1676: 305 pgs: 305 active+clean; 268 MiB data, 807 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 145 op/s
Oct 02 12:32:32 compute-1 nova_compute[230518]: 2025-10-02 12:32:32.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:33 compute-1 ovn_controller[129257]: 2025-10-02T12:32:33Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:32:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:34 compute-1 ceph-mon[80926]: pgmap v1677: 305 pgs: 305 active+clean; 275 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 12:32:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:37 compute-1 ceph-mon[80926]: pgmap v1678: 305 pgs: 305 active+clean; 276 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:32:37 compute-1 nova_compute[230518]: 2025-10-02 12:32:37.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:37 compute-1 nova_compute[230518]: 2025-10-02 12:32:37.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:37 compute-1 nova_compute[230518]: 2025-10-02 12:32:37.996 2 DEBUG nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:32:38 compute-1 ceph-mon[80926]: pgmap v1679: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct 02 12:32:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695580193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3695580193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:40.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:41 compute-1 nova_compute[230518]: 2025-10-02 12:32:41.013 2 INFO nova.virt.libvirt.driver [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance shutdown successfully after 13 seconds.
Oct 02 12:32:41 compute-1 ceph-mon[80926]: pgmap v1680: 305 pgs: 305 active+clean; 279 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 786 KiB/s rd, 1.5 MiB/s wr, 95 op/s
Oct 02 12:32:41 compute-1 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct 02 12:32:41 compute-1 NetworkManager[44960]: <info>  [1759408361.7659] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:32:41 compute-1 ovn_controller[129257]: 2025-10-02T12:32:41Z|00339|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct 02 12:32:41 compute-1 ovn_controller[129257]: 2025-10-02T12:32:41Z|00340|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct 02 12:32:41 compute-1 ovn_controller[129257]: 2025-10-02T12:32:41Z|00341|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct 02 12:32:41 compute-1 nova_compute[230518]: 2025-10-02 12:32:41.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:41 compute-1 nova_compute[230518]: 2025-10-02 12:32:41.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.780 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.782 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:32:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.784 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:32:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.785 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea120be1-cb79-4e50-9e58-dc63ae951785]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:41.785 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:32:41 compute-1 nova_compute[230518]: 2025-10-02 12:32:41.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:41 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 12:32:41 compute-1 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000004d.scope: Consumed 14.162s CPU time.
Oct 02 12:32:41 compute-1 systemd-machined[188247]: Machine qemu-39-instance-0000004d terminated.
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : haproxy version is 2.8.14-c23fe91
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [NOTICE]   (262507) : path to executable is /usr/sbin/haproxy
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : Exiting Master process...
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : Exiting Master process...
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [ALERT]    (262507) : Current worker (262509) exited with code 143 (Terminated)
Oct 02 12:32:41 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262501]: [WARNING]  (262507) : All workers exited. Exiting... (0)
Oct 02 12:32:41 compute-1 systemd[1]: libpod-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope: Deactivated successfully.
Oct 02 12:32:41 compute-1 podman[262587]: 2025-10-02 12:32:41.959686939 +0000 UTC m=+0.060761272 container died 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:32:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77-userdata-shm.mount: Deactivated successfully.
Oct 02 12:32:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-ba9437f9d440afe338a712b53f4be0d5ad2304ed341c56636f129f0dcfa47d05-merged.mount: Deactivated successfully.
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.065 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.065 2 DEBUG nova.objects.instance [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.079 2 DEBUG nova.compute.manager [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:42 compute-1 podman[262587]: 2025-10-02 12:32:42.11195036 +0000 UTC m=+0.213024713 container cleanup 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:42 compute-1 systemd[1]: libpod-conmon-97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77.scope: Deactivated successfully.
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.129 2 DEBUG oslo_concurrency.lockutils [None req-7671830e-f0ce-49bc-8b66-e3e552476f4a 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:42 compute-1 podman[262630]: 2025-10-02 12:32:42.279781641 +0000 UTC m=+0.131224760 container remove 97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.291 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a1e4e2-62fa-4e79-b327-f6e4d5f83065]: (4, ('Thu Oct  2 12:32:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77)\n97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77\nThu Oct  2 12:32:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77)\n97229871644fc9f3e64b3127405bd54d22beae4bcaa9c83c3eda622877f8ca77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.293 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92b29d48-6b53-4c31-98a1-41c435e89fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.294 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:42 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.330 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3074b181-d6bc-4b87-93de-9538f847702e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef48f6f-0eb7-459c-a564-47ef3c968368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.359 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83c6e85f-b3cf-48a2-96dd-68a1d29040f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e09623b6-f8a6-499a-a5d9-8d10238e86fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620154, 'reachable_time': 36565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262648, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.392 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:32:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:42.392 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4be9ba8d-b67c-4874-bc2a-485d772c46fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 e243: 3 total, 3 up, 3 in
Oct 02 12:32:42 compute-1 ceph-mon[80926]: pgmap v1681: 305 pgs: 305 active+clean; 281 MiB data, 826 MiB used, 20 GiB / 21 GiB avail; 790 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.908 2 DEBUG nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.909 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.909 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.910 2 DEBUG oslo_concurrency.lockutils [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.910 2 DEBUG nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:42 compute-1 nova_compute[230518]: 2025-10-02 12:32:42.911 2 WARNING nova.compute.manager [req-0937c8b6-3c25-4245-ad85-5832ee9360b5 req-335c84e7-f2f9-4d91-a8ab-e4cde20b80ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state stopped and task_state None.
Oct 02 12:32:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:44 compute-1 ceph-mon[80926]: osdmap e243: 3 total, 3 up, 3 in
Oct 02 12:32:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2887869052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.346 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.376 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG nova.network.neutron [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.377 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.997 2 DEBUG nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.997 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.998 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.998 2 DEBUG oslo_concurrency.lockutils [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:44 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.999 2 DEBUG nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:45 compute-1 nova_compute[230518]: 2025-10-02 12:32:44.999 2 WARNING nova.compute.manager [req-e4e32990-2835-4ca5-9a81-e79ca2b607d4 req-de8ec6c9-ad38-4177-b532-3c3822acc7e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state stopped and task_state powering-on.
Oct 02 12:32:45 compute-1 ceph-mon[80926]: pgmap v1683: 305 pgs: 305 active+clean; 296 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 813 KiB/s wr, 78 op/s
Oct 02 12:32:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3705393334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:46 compute-1 ceph-mon[80926]: pgmap v1684: 305 pgs: 305 active+clean; 290 MiB data, 834 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct 02 12:32:46 compute-1 nova_compute[230518]: 2025-10-02 12:32:46.910 2 DEBUG nova.network.neutron [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:32:46 compute-1 nova_compute[230518]: 2025-10-02 12:32:46.942 2 DEBUG oslo_concurrency.lockutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:32:46 compute-1 nova_compute[230518]: 2025-10-02 12:32:46.983 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:32:46 compute-1 nova_compute[230518]: 2025-10-02 12:32:46.984 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.000 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.013 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.014 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.016 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.016 2 DEBUG os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.028 2 INFO os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.036 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.040 2 WARNING nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.045 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.046 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.049 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.049 2 DEBUG nova.virt.libvirt.host [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.050 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.051 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.052 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.virt.hardware [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.053 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.074 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.105 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.105 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.106 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4025293043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/899398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.515 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.556 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.560 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.749 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.750 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4532MB free_disk=20.845951080322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.885 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:32:47 compute-1 nova_compute[230518]: 2025-10-02 12:32:47.925 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:32:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:32:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/586986182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/513984406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/189214268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4025293043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/899398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/554296819' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.098 2 DEBUG oslo_concurrency.processutils [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.102 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.102 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.104 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.108 2 DEBUG nova.objects.instance [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.131 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <name>instance-0000004d</name>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:32:47</nova:creationTime>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct 02 12:32:48 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <system>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </system>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <os>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </os>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <features>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </features>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </source>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </source>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:32:48 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:e8:97"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <target dev="tapa3bd0009-d2"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <video>
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </video>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:32:48 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:32:48 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:32:48 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:32:48 compute-1 nova_compute[230518]: </domain>
Oct 02 12:32:48 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.132 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.132 2 DEBUG nova.virt.libvirt.driver [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.133 2 DEBUG nova.virt.libvirt.vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:32:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.134 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.134 2 DEBUG nova.network.os_vif_util [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.135 2 DEBUG os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.137 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.141 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.142 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.2016] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.206 2 INFO os_vif [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:32:48 compute-1 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.2829] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 ovn_controller[129257]: 2025-10-02T12:32:48Z|00342|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct 02 12:32:48 compute-1 ovn_controller[129257]: 2025-10-02T12:32:48Z|00343|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.294 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '7', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.295 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.297 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e730b237-c36f-49c3-906a-89092c4cd5a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.309 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.317 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f61f78b2-322c-4e4a-a721-5fc627405337]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_controller[129257]: 2025-10-02T12:32:48Z|00344|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct 02 12:32:48 compute-1 ovn_controller[129257]: 2025-10-02T12:32:48Z|00345|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.319 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b632fe3d-1b7e-4a38-97e5-ce47d70c3540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.336 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ad134e1b-4bfc-4c0b-87b8-3a9c1dd77a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 systemd-machined[188247]: New machine qemu-40-instance-0000004d.
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.355 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93600119-0e37-41ab-ba03-fed435e58dd2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 systemd[1]: Started Virtual Machine qemu-40-instance-0000004d.
Oct 02 12:32:48 compute-1 systemd-udevd[262798]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.392 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce96bf55-0518-42cc-9928-d59f49522ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:32:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1878121547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.3985] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.3996] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb2f08e-4a3e-4d75-bbaa-d47eb77ef1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 systemd-udevd[262813]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.4051] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.429 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.431 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[811a20cc-24a0-474e-92e4-e4419d9fdce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.434 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2aa7fd-c127-43db-88ef-1c8abee9bed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 podman[262764]: 2025-10-02 12:32:48.449999058 +0000 UTC m=+0.123319251 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:48 compute-1 podman[262767]: 2025-10-02 12:32:48.451980481 +0000 UTC m=+0.116556719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.452 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.4615] device (tapf011efa4-00): carrier: link connected
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.470 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.475 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[45bbd6dd-e6b8-47c7-b593-648848960029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.494 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.494 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3990e7ca-8360-49d2-9391-9842d58dde60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623201, 'reachable_time': 33175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262846, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.514 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4d54b3a8-6e51-4a58-adde-934a8a64fd73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623201, 'tstamp': 623201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262847, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.532 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd7de60-5813-4dc5-91a8-5f1c6539cac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623201, 'reachable_time': 33175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262848, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33f6e923-2dd2-4f72-ae53-f87c489587f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1efb22-ecf0-4ae8-8297-dee03cccb00b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.628 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.628 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.629 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 NetworkManager[44960]: <info>  [1759408368.6318] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 ovn_controller[129257]: 2025-10-02T12:32:48Z|00346|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.642 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.644 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df09561f-43af-45b2-b0aa-6e1bd92c517c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.645 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:32:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:32:48.646 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:32:48 compute-1 nova_compute[230518]: 2025-10-02 12:32:48.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:49 compute-1 podman[262922]: 2025-10-02 12:32:49.052462884 +0000 UTC m=+0.031410829 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:32:49 compute-1 ceph-mon[80926]: pgmap v1685: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/381182074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/586986182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1878121547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:49 compute-1 podman[262922]: 2025-10-02 12:32:49.22745818 +0000 UTC m=+0.206406105 container create 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:32:49 compute-1 systemd[1]: Started libpod-conmon-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope.
Oct 02 12:32:49 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:32:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bba23b9540567d83dd9a5ca70200357f97da436b9841f2d13c96266b6db2d85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:32:49 compute-1 podman[262922]: 2025-10-02 12:32:49.404330825 +0000 UTC m=+0.383278710 container init 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:32:49 compute-1 podman[262922]: 2025-10-02 12:32:49.416459557 +0000 UTC m=+0.395407482 container start 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:32:49 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : New worker (262943) forked
Oct 02 12:32:49 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : Loading success.
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.487 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.531 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.532 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408369.5309772, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.532 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.534 2 DEBUG nova.compute.manager [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.538 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.538 2 DEBUG nova.compute.manager [None req-04f54221-af73-4c83-ae60-34c0d7085d0d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.560 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.565 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.598 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.599 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408369.5325024, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.599 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.623 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:49 compute-1 nova_compute[230518]: 2025-10-02 12:32:49.627 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2299185624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:50.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.545 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 WARNING nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.546 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.547 2 DEBUG oslo_concurrency.lockutils [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.547 2 DEBUG nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:32:50 compute-1 nova_compute[230518]: 2025-10-02 12:32:50.547 2 WARNING nova.compute.manager [req-7b72b1ce-fd67-460d-9b78-a4bd6039d68a req-f7c2094f-9dea-4985-b858-a057ac3e0d9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:32:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:50.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:51 compute-1 nova_compute[230518]: 2025-10-02 12:32:51.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:51 compute-1 nova_compute[230518]: 2025-10-02 12:32:51.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:32:51 compute-1 ceph-mon[80926]: pgmap v1686: 305 pgs: 305 active+clean; 248 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 12:32:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 e244: 3 total, 3 up, 3 in
Oct 02 12:32:52 compute-1 nova_compute[230518]: 2025-10-02 12:32:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:52 compute-1 ceph-mon[80926]: pgmap v1687: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Oct 02 12:32:52 compute-1 ceph-mon[80926]: osdmap e244: 3 total, 3 up, 3 in
Oct 02 12:32:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2482660617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:52.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:52 compute-1 nova_compute[230518]: 2025-10-02 12:32:52.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:53 compute-1 nova_compute[230518]: 2025-10-02 12:32:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:53 compute-1 nova_compute[230518]: 2025-10-02 12:32:53.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1063843622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.760 2 INFO nova.compute.manager [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Pausing
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.762 2 DEBUG nova.objects.instance [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.797 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408374.79743, 3e490470-5e33-4140-95c1-367805364c73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.799 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Paused (Lifecycle Event)
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.801 2 DEBUG nova.compute.manager [None req-66845643-3409-45b4-8845-b56973f06b48 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:54 compute-1 ceph-mon[80926]: pgmap v1689: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.5 MiB/s wr, 193 op/s
Oct 02 12:32:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1409830474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.839 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.845 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:54 compute-1 nova_compute[230518]: 2025-10-02 12:32:54.872 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 12:32:55 compute-1 sudo[262953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:55 compute-1 sudo[262953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-1 sudo[262953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:55 compute-1 sudo[262978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:32:55 compute-1 sudo[262978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-1 sudo[262978]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:55 compute-1 sudo[263003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:32:55 compute-1 sudo[263003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:55 compute-1 sudo[263003]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-1 sudo[263028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:32:56 compute-1 sudo[263028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:32:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:56.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:56 compute-1 sudo[263028]: pam_unix(sudo:session): session closed for user root
Oct 02 12:32:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:56.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:56 compute-1 ceph-mon[80926]: pgmap v1690: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 100 KiB/s wr, 254 op/s
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:32:56 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.003 2 INFO nova.compute.manager [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Unpausing
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.005 2 DEBUG nova.objects.instance [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'flavor' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.048 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408377.048426, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:32:57 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.049 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.052 2 DEBUG nova.virt.libvirt.guest [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.053 2 DEBUG nova.compute.manager [None req-44648571-a3ed-468e-8274-8b9fa0bd4b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.082 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.085 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.117 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 12:32:57 compute-1 nova_compute[230518]: 2025-10-02 12:32:57.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:58 compute-1 nova_compute[230518]: 2025-10-02 12:32:58.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:32:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:32:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:32:58.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:32:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:32:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:32:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:32:58.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:32:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:32:59 compute-1 ceph-mon[80926]: pgmap v1691: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:32:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3780027703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:00 compute-1 ceph-mon[80926]: pgmap v1692: 305 pgs: 305 active+clean; 248 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 19 KiB/s wr, 252 op/s
Oct 02 12:33:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2511685414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:00 compute-1 podman[263084]: 2025-10-02 12:33:00.862742108 +0000 UTC m=+0.099104570 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 12:33:00 compute-1 podman[263085]: 2025-10-02 12:33:00.866981891 +0000 UTC m=+0.097968744 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct 02 12:33:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 e245: 3 total, 3 up, 3 in
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.254 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.255 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.256 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:33:01 compute-1 nova_compute[230518]: 2025-10-02 12:33:01.256 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:01 compute-1 ceph-mon[80926]: osdmap e245: 3 total, 3 up, 3 in
Oct 02 12:33:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:02.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:02 compute-1 nova_compute[230518]: 2025-10-02 12:33:02.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-1 nova_compute[230518]: 2025-10-02 12:33:03.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-1 ceph-mon[80926]: pgmap v1694: 305 pgs: 305 active+clean; 202 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 927 B/s wr, 191 op/s
Oct 02 12:33:03 compute-1 nova_compute[230518]: 2025-10-02 12:33:03.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:03.391 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:03.393 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:33:03 compute-1 nova_compute[230518]: 2025-10-02 12:33:03.615 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:03 compute-1 nova_compute[230518]: 2025-10-02 12:33:03.636 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:03 compute-1 nova_compute[230518]: 2025-10-02 12:33:03.637 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:33:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:04 compute-1 sudo[263124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:33:04 compute-1 sudo[263124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:04 compute-1 sudo[263124]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:04 compute-1 sudo[263149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:33:04 compute-1 sudo[263149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:33:04 compute-1 sudo[263149]: pam_unix(sudo:session): session closed for user root
Oct 02 12:33:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:04 compute-1 ceph-mon[80926]: pgmap v1695: 305 pgs: 305 active+clean; 195 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 793 KiB/s wr, 179 op/s
Oct 02 12:33:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:33:05 compute-1 ovn_controller[129257]: 2025-10-02T12:33:05Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:33:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:33:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/628293893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:33:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:06.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:07 compute-1 ceph-mon[80926]: pgmap v1696: 305 pgs: 305 active+clean; 254 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 148 op/s
Oct 02 12:33:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3869894575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:07 compute-1 nova_compute[230518]: 2025-10-02 12:33:07.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1683438901' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2923262910' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3684999600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/531094039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:08 compute-1 nova_compute[230518]: 2025-10-02 12:33:08.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:08.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:08.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:09 compute-1 ceph-mon[80926]: pgmap v1697: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:10 compute-1 ceph-mon[80926]: pgmap v1698: 305 pgs: 305 active+clean; 283 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 653 KiB/s rd, 6.6 MiB/s wr, 181 op/s
Oct 02 12:33:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:10.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.367 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.368 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.368 2 INFO nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Rebooting instance
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.381 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.382 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.382 2 DEBUG nova.network.neutron [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:33:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:12.396 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:12 compute-1 ceph-mon[80926]: pgmap v1699: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.5 MiB/s wr, 217 op/s
Oct 02 12:33:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:12.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:12.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:12 compute-1 nova_compute[230518]: 2025-10-02 12:33:12.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-1 nova_compute[230518]: 2025-10-02 12:33:13.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:14.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:14.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:14 compute-1 ceph-mon[80926]: pgmap v1700: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 227 op/s
Oct 02 12:33:15 compute-1 nova_compute[230518]: 2025-10-02 12:33:15.537 2 DEBUG nova.network.neutron [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:33:15 compute-1 nova_compute[230518]: 2025-10-02 12:33:15.575 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:33:15 compute-1 nova_compute[230518]: 2025-10-02 12:33:15.578 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:16.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:16 compute-1 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct 02 12:33:16 compute-1 NetworkManager[44960]: <info>  [1759408396.5816] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:33:16 compute-1 ovn_controller[129257]: 2025-10-02T12:33:16Z|00347|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct 02 12:33:16 compute-1 ovn_controller[129257]: 2025-10-02T12:33:16Z|00348|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 ovn_controller[129257]: 2025-10-02T12:33:16Z|00349|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 12:33:16 compute-1 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000004d.scope: Consumed 14.442s CPU time.
Oct 02 12:33:16 compute-1 systemd-machined[188247]: Machine qemu-40-instance-0000004d terminated.
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.746 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.747 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.948 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.949 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.950 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.951 2 DEBUG os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.954 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.962 2 INFO os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.973 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.978 2 WARNING nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.986 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.988 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.991 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.992 2 DEBUG nova.virt.libvirt.host [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.994 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.994 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.995 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.996 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.997 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.997 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.998 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.998 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:33:16 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.999 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:16.999 2 DEBUG nova.virt.hardware [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.000 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.000 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.002 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:33:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.005 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:33:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.007 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1fffd78c-f85d-4a47-9dd9-c1f5b5580093]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:17.007 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.063 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:17 compute-1 ceph-mon[80926]: pgmap v1701: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.1 MiB/s wr, 268 op/s
Oct 02 12:33:17 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : haproxy version is 2.8.14-c23fe91
Oct 02 12:33:17 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [NOTICE]   (262941) : path to executable is /usr/sbin/haproxy
Oct 02 12:33:17 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [WARNING]  (262941) : Exiting Master process...
Oct 02 12:33:17 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [ALERT]    (262941) : Current worker (262943) exited with code 143 (Terminated)
Oct 02 12:33:17 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[262937]: [WARNING]  (262941) : All workers exited. Exiting... (0)
Oct 02 12:33:17 compute-1 systemd[1]: libpod-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope: Deactivated successfully.
Oct 02 12:33:17 compute-1 podman[263210]: 2025-10-02 12:33:17.478002172 +0000 UTC m=+0.330076136 container died 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:33:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4171719287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.555 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.598 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-7bba23b9540567d83dd9a5ca70200357f97da436b9841f2d13c96266b6db2d85-merged.mount: Deactivated successfully.
Oct 02 12:33:17 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a-userdata-shm.mount: Deactivated successfully.
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.952 2 DEBUG nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.953 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.953 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.954 2 DEBUG oslo_concurrency.lockutils [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.954 2 DEBUG nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:17 compute-1 nova_compute[230518]: 2025-10-02 12:33:17.954 2 WARNING nova.compute.manager [req-84b1376a-1a7d-4bf3-a19c-565b9fd759ba req-3b368269-9845-4e0e-b5bc-6c9c99c2a3c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state reboot_started_hard.
Oct 02 12:33:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:33:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3641656552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.136 2 DEBUG oslo_concurrency.processutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.137 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.138 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.138 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.140 2 DEBUG nova.objects.instance [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.157 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <name>instance-0000004d</name>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:33:16</nova:creationTime>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct 02 12:33:18 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <system>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </system>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <os>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </os>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <features>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </features>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:33:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:e8:97"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <target dev="tapa3bd0009-d2"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <video>
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </video>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:33:18 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:33:18 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:33:18 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:33:18 compute-1 nova_compute[230518]: </domain>
Oct 02 12:33:18 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.159 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.160 2 DEBUG nova.virt.libvirt.driver [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.160 2 DEBUG nova.virt.libvirt.vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:33:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG nova.network.os_vif_util [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.161 2 DEBUG os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.165 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 NetworkManager[44960]: <info>  [1759408398.1684] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.176 2 INFO os_vif [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:33:18 compute-1 NetworkManager[44960]: <info>  [1759408398.3119] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Oct 02 12:33:18 compute-1 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct 02 12:33:18 compute-1 systemd-udevd[263177]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 ovn_controller[129257]: 2025-10-02T12:33:18Z|00350|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct 02 12:33:18 compute-1 ovn_controller[129257]: 2025-10-02T12:33:18Z|00351|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:33:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:18.334 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '9', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:33:18 compute-1 NetworkManager[44960]: <info>  [1759408398.3371] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:33:18 compute-1 NetworkManager[44960]: <info>  [1759408398.3384] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:33:18 compute-1 ovn_controller[129257]: 2025-10-02T12:33:18Z|00352|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct 02 12:33:18 compute-1 ovn_controller[129257]: 2025-10-02T12:33:18Z|00353|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 nova_compute[230518]: 2025-10-02 12:33:18.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:18 compute-1 systemd-machined[188247]: New machine qemu-41-instance-0000004d.
Oct 02 12:33:18 compute-1 systemd[1]: Started Virtual Machine qemu-41-instance-0000004d.
Oct 02 12:33:18 compute-1 podman[263210]: 2025-10-02 12:33:18.399608679 +0000 UTC m=+1.251682693 container cleanup 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:33:18 compute-1 systemd[1]: libpod-conmon-1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a.scope: Deactivated successfully.
Oct 02 12:33:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:18.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:18.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:18 compute-1 ceph-mon[80926]: pgmap v1702: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 267 op/s
Oct 02 12:33:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4171719287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3641656552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:19 compute-1 podman[263318]: 2025-10-02 12:33:19.40903645 +0000 UTC m=+0.978605872 container remove 1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.417 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1831cb-73cb-4501-828d-c6cdf647fbc2]: (4, ('Thu Oct  2 12:33:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a)\n1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a\nThu Oct  2 12:33:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a)\n1c13cf42afcd40c5e3bd84df468d8fa242df77a6dc135baf6fda0e549678222a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.419 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81190a67-22bc-43bb-a2e4-58c4cbd8847a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.421 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.457 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[39ea3800-de0f-4115-9a8a-e98781117429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.477 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f5dfa3-6dfd-49ad-8f00-e4926cdcafd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.478 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f595ac1-30e5-4c5d-a168-040de695a8c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.495 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a587ec51-c488-476a-b4e6-73a1c8223e8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 623194, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263413, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.498 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.498 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[924ea14a-1013-4aeb-9503-1735b16d38b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.499 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.501 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:33:19 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:33:19 compute-1 podman[263338]: 2025-10-02 12:33:19.502889293 +0000 UTC m=+0.738364083 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[430f9acd-3fcf-4304-955c-431ca7fd3900]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.526 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.528 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.528 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4aec8d15-11e9-49ad-aef9-26fe3213f811]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c9cf3-066d-4498-ab50-2598a185d6bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.545 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4515c52b-1bf0-44c3-a2e9-25575b836664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.559 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7c4124e9-95a8-4b43-93da-5ae178506d8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 podman[263337]: 2025-10-02 12:33:19.561375073 +0000 UTC m=+0.808569582 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.591 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[22e02eb1-bf9b-4146-9279-08352b85481e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.602 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7113380-dc7e-4cf9-a39a-5dfbb3ad967d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 NetworkManager[44960]: <info>  [1759408399.6071] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Oct 02 12:33:19 compute-1 systemd-udevd[263432]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.635 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[456c4eb2-911c-4fa2-8e14-09da8886b26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.638 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d79476ae-9ce2-4219-8aa6-4668a8aebce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 NetworkManager[44960]: <info>  [1759408399.6637] device (tapf011efa4-00): carrier: link connected
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.667 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf92330-926e-4793-a283-604f7016d47a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47362246-4cab-431c-84a1-f6e6e4838eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 34055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263451, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.696 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f6c9f-57c3-4cff-ae78-4a7b31cc0949]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626322, 'tstamp': 626322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263452, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.714 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2beaed5-e74d-4c8a-aff6-3ebb3477ceb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 34055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263453, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.746 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60c39a09-53ce-4624-9237-63b5c4a8a385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.811 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a44eaec1-dffd-452c-a72e-d260f6b95dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.813 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:19 compute-1 NetworkManager[44960]: <info>  [1759408399.8161] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct 02 12:33:19 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.823 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-1 ovn_controller[129257]: 2025-10-02T12:33:19Z|00354|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.827 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.828 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408399.8276675, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.828 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.830 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.832 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.834 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance rebooted successfully.
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.834 2 DEBUG nova.compute.manager [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbff876-2342-4ebe-b273-c306be9233c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.835 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:33:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:19.835 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.877 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.883 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.934 2 DEBUG oslo_concurrency.lockutils [None req-d6984e0b-eb4a-41c2-9785-ef6ef26eeaa9 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 7.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.937 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408399.8282464, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.938 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.961 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:33:19 compute-1 nova_compute[230518]: 2025-10-02 12:33:19.965 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.107 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.109 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.109 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.110 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.111 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.111 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.112 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.112 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.113 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.113 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.114 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.115 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.115 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.116 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.116 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.117 2 DEBUG oslo_concurrency.lockutils [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.118 2 DEBUG nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:33:20 compute-1 nova_compute[230518]: 2025-10-02 12:33:20.118 2 WARNING nova.compute.manager [req-c46e59fc-134c-4b1e-95f1-3a5ac7813ac6 req-49d208d4-c7ae-403d-8a8b-1163b554e88c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state None.
Oct 02 12:33:20 compute-1 podman[263484]: 2025-10-02 12:33:20.161450113 +0000 UTC m=+0.021605681 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:33:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:20.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:20.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:20 compute-1 podman[263484]: 2025-10-02 12:33:20.890472501 +0000 UTC m=+0.750628049 container create 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:33:21 compute-1 ceph-mon[80926]: pgmap v1703: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 190 op/s
Oct 02 12:33:21 compute-1 systemd[1]: Started libpod-conmon-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope.
Oct 02 12:33:21 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:33:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed56eda4117edd4a73c69ba3e2c0e2d327a36dd0522d2b795d433383050ab310/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:33:21 compute-1 podman[263484]: 2025-10-02 12:33:21.480182936 +0000 UTC m=+1.340338564 container init 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:33:21 compute-1 podman[263484]: 2025-10-02 12:33:21.490416358 +0000 UTC m=+1.350571946 container start 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:21 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : New worker (263505) forked
Oct 02 12:33:21 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : Loading success.
Oct 02 12:33:22 compute-1 ceph-mon[80926]: pgmap v1704: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 217 KiB/s wr, 203 op/s
Oct 02 12:33:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:22.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:22.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:22 compute-1 nova_compute[230518]: 2025-10-02 12:33:22.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:23 compute-1 nova_compute[230518]: 2025-10-02 12:33:23.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3725152310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:24.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:24.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:24 compute-1 ceph-mon[80926]: pgmap v1705: 305 pgs: 305 active+clean; 295 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 48 KiB/s wr, 172 op/s
Oct 02 12:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.930 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:33:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:26.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:27 compute-1 ceph-mon[80926]: pgmap v1706: 305 pgs: 305 active+clean; 313 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 181 op/s
Oct 02 12:33:27 compute-1 nova_compute[230518]: 2025-10-02 12:33:27.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-1 nova_compute[230518]: 2025-10-02 12:33:28.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4036348665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1222925000' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:29 compute-1 ceph-mon[80926]: pgmap v1707: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Oct 02 12:33:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2914263883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:30.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:30 compute-1 ceph-mon[80926]: pgmap v1708: 305 pgs: 305 active+clean; 317 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Oct 02 12:33:31 compute-1 ovn_controller[129257]: 2025-10-02T12:33:31Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:33:31 compute-1 podman[263514]: 2025-10-02 12:33:31.848102807 +0000 UTC m=+0.086600726 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:33:31 compute-1 podman[263515]: 2025-10-02 12:33:31.848109237 +0000 UTC m=+0.082588220 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:33:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:32 compute-1 nova_compute[230518]: 2025-10-02 12:33:32.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:32 compute-1 ceph-mon[80926]: pgmap v1709: 305 pgs: 305 active+clean; 326 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 191 op/s
Oct 02 12:33:33 compute-1 nova_compute[230518]: 2025-10-02 12:33:33.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:34 compute-1 ceph-mon[80926]: pgmap v1710: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct 02 12:33:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:37 compute-1 ceph-mon[80926]: pgmap v1711: 305 pgs: 305 active+clean; 328 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Oct 02 12:33:37 compute-1 ovn_controller[129257]: 2025-10-02T12:33:37Z|00355|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:33:37 compute-1 nova_compute[230518]: 2025-10-02 12:33:37.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:37 compute-1 nova_compute[230518]: 2025-10-02 12:33:37.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:38 compute-1 nova_compute[230518]: 2025-10-02 12:33:38.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:39 compute-1 ceph-mon[80926]: pgmap v1712: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 231 op/s
Oct 02 12:33:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4230650317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:33:40 compute-1 ceph-mon[80926]: pgmap v1713: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 344 KiB/s wr, 142 op/s
Oct 02 12:33:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:40.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:42 compute-1 ceph-mon[80926]: pgmap v1714: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 348 KiB/s wr, 143 op/s
Oct 02 12:33:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:42.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:42.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:42 compute-1 nova_compute[230518]: 2025-10-02 12:33:42.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:43 compute-1 nova_compute[230518]: 2025-10-02 12:33:43.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:44.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:44.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:44 compute-1 ceph-mon[80926]: pgmap v1715: 305 pgs: 305 active+clean; 328 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 54 KiB/s wr, 122 op/s
Oct 02 12:33:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:46 compute-1 ceph-mon[80926]: pgmap v1716: 305 pgs: 305 active+clean; 340 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.0 MiB/s wr, 113 op/s
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.097 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/84496681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.535 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.853 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.854 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4352MB free_disk=20.818740844726562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.929 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.930 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:33:47 compute-1 nova_compute[230518]: 2025-10-02 12:33:47.983 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/84496681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:33:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2927055532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.466 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.475 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.501 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.552 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:33:48 compute-1 nova_compute[230518]: 2025-10-02 12:33:48.553 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:33:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:48.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:48.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:49 compute-1 ceph-mon[80926]: pgmap v1717: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 12:33:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2807161124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2927055532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2501477229' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:49 compute-1 podman[263600]: 2025-10-02 12:33:49.850596796 +0000 UTC m=+0.091781689 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:33:49 compute-1 podman[263599]: 2025-10-02 12:33:49.872234106 +0000 UTC m=+0.117366624 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:33:50 compute-1 ceph-mon[80926]: pgmap v1718: 305 pgs: 305 active+clean; 353 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Oct 02 12:33:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:50.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:50.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:51 compute-1 nova_compute[230518]: 2025-10-02 12:33:51.550 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:51 compute-1 nova_compute[230518]: 2025-10-02 12:33:51.550 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:51 compute-1 nova_compute[230518]: 2025-10-02 12:33:51.551 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:51 compute-1 nova_compute[230518]: 2025-10-02 12:33:51.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:52 compute-1 nova_compute[230518]: 2025-10-02 12:33:52.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:52 compute-1 nova_compute[230518]: 2025-10-02 12:33:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:33:52 compute-1 ceph-mon[80926]: pgmap v1719: 305 pgs: 305 active+clean; 361 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Oct 02 12:33:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:52.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:52 compute-1 nova_compute[230518]: 2025-10-02 12:33:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:53 compute-1 nova_compute[230518]: 2025-10-02 12:33:53.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/99379198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:54 compute-1 nova_compute[230518]: 2025-10-02 12:33:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:54 compute-1 nova_compute[230518]: 2025-10-02 12:33:54.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:54.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:55 compute-1 ceph-mon[80926]: pgmap v1720: 305 pgs: 305 active+clean; 343 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 420 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 12:33:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2996107139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1696718552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:56.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:33:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:33:57 compute-1 nova_compute[230518]: 2025-10-02 12:33:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:57 compute-1 ceph-mon[80926]: pgmap v1721: 305 pgs: 305 active+clean; 294 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Oct 02 12:33:57 compute-1 nova_compute[230518]: 2025-10-02 12:33:57.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:58 compute-1 nova_compute[230518]: 2025-10-02 12:33:58.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:33:58 compute-1 nova_compute[230518]: 2025-10-02 12:33:58.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:33:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:33:58.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:33:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:33:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:33:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:33:59 compute-1 ceph-mon[80926]: pgmap v1722: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 1.2 MiB/s wr, 94 op/s
Oct 02 12:33:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2500834686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:33:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.657 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.657 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.694 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.805 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.806 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.818 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.818 2 INFO nova.compute.claims [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:33:59 compute-1 nova_compute[230518]: 2025-10-02 12:33:59.981 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:34:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:34:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1544477566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.417 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.424 2 DEBUG nova.compute.provider_tree [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.444 2 DEBUG nova.scheduler.client.report [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.472 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.473 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.530 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.531 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.560 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.603 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:00.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:00.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.720 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.721 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.722 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating image(s)
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.747 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.777 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.807 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.812 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.893 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.894 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.895 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.895 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.924 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:00 compute-1 nova_compute[230518]: 2025-10-02 12:34:00.929 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:01 compute-1 ceph-mon[80926]: pgmap v1723: 305 pgs: 305 active+clean; 281 MiB data, 831 MiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 55 KiB/s wr, 52 op/s
Oct 02 12:34:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1077859049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1544477566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.563 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.591 2 DEBUG nova.policy [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '71d69bc37f274fad8a0b06c0b96f2a64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.625 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.836 2 DEBUG nova.objects.instance [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.857 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.858 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ensure instance console log exists: /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.858 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.859 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:01 compute-1 nova_compute[230518]: 2025-10-02 12:34:01.859 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:02.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:02.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:02 compute-1 podman[263832]: 2025-10-02 12:34:02.833399101 +0000 UTC m=+0.074727362 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:34:02 compute-1 podman[263831]: 2025-10-02 12:34:02.833768672 +0000 UTC m=+0.075190296 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 12:34:02 compute-1 nova_compute[230518]: 2025-10-02 12:34:02.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.076 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Successfully created port: 21510dd4-b155-46ed-bdb2-dc17a9149353 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:03 compute-1 ceph-mon[80926]: pgmap v1724: 305 pgs: 305 active+clean; 315 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 79 op/s
Oct 02 12:34:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/565392405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2227295627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.225 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.226 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:03 compute-1 nova_compute[230518]: 2025-10-02 12:34:03.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:04 compute-1 sudo[263870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:04 compute-1 sudo[263870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-1 sudo[263870]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-1 sudo[263895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:34:04 compute-1 sudo[263895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-1 sudo[263895]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-1 sudo[263920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:04 compute-1 sudo[263920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-1 sudo[263920]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:04 compute-1 sudo[263945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:34:04 compute-1 sudo[263945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:04.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:04.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.019 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Successfully updated port: 21510dd4-b155-46ed-bdb2-dc17a9149353 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.035 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.036 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.036 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:05 compute-1 sudo[263945]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:05.064 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:05.065 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:34:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:34:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:34:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.149 2 DEBUG nova.compute.manager [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-changed-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.149 2 DEBUG nova.compute.manager [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Refreshing instance network info cache due to event network-changed-21510dd4-b155-46ed-bdb2-dc17a9149353. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.150 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.253 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.342 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:05 compute-1 ceph-mon[80926]: pgmap v1725: 305 pgs: 305 active+clean; 345 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Oct 02 12:34:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3517646832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.380 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:05 compute-1 nova_compute[230518]: 2025-10-02 12:34:05.381 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.589 2 DEBUG nova.network.neutron [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.615 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.616 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance network_info: |[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.616 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.617 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Refreshing network info cache for port 21510dd4-b155-46ed-bdb2-dc17a9149353 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.621 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start _get_guest_xml network_info=[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.626 2 WARNING nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.631 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.632 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.638 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.639 2 DEBUG nova.virt.libvirt.host [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.640 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.641 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.641 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.642 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.643 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.644 2 DEBUG nova.virt.hardware [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:06.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:06 compute-1 nova_compute[230518]: 2025-10-02 12:34:06.648 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:06.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/364482634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.059 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.096 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.100 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:07 compute-1 ovn_controller[129257]: 2025-10-02T12:34:07Z|00356|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 ceph-mon[80926]: pgmap v1726: 305 pgs: 305 active+clean; 349 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 2.5 MiB/s wr, 87 op/s
Oct 02 12:34:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/364482634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:34:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2879726199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.778 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.780 2 DEBUG nova.virt.libvirt.vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-tempest.common.compute-instance-456092929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:00Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.782 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.784 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.786 2 DEBUG nova.objects.instance [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.807 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <uuid>418e3157-f0a7-42ec-812b-2a4a2ad00991</uuid>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <name>instance-00000054</name>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:name>tempest-tempest.common.compute-instance-456092929</nova:name>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:34:06</nova:creationTime>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <nova:port uuid="21510dd4-b155-46ed-bdb2-dc17a9149353">
Oct 02 12:34:07 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <system>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="serial">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="uuid">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </system>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <os>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </os>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <features>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </features>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk">
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config">
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:44:7a:ec"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <target dev="tap21510dd4-b1"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log" append="off"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <video>
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </video>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:34:07 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:34:07 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:34:07 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:34:07 compute-1 nova_compute[230518]: </domain>
Oct 02 12:34:07 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.808 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Preparing to wait for external event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.808 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.809 2 DEBUG nova.virt.libvirt.vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-tempest.common.compute-instance-456092929',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:00Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG nova.network.os_vif_util [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.810 2 DEBUG os_vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.811 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21510dd4-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.818 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21510dd4-b1, col_values=(('external_ids', {'iface-id': '21510dd4-b155-46ed-bdb2-dc17a9149353', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:7a:ec', 'vm-uuid': '418e3157-f0a7-42ec-812b-2a4a2ad00991'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 NetworkManager[44960]: <info>  [1759408447.8210] manager: (tap21510dd4-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.833 2 INFO os_vif [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.899 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.900 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.900 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:44:7a:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.900 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Using config drive
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.937 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.947 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updated VIF entry in instance network info cache for port 21510dd4-b155-46ed-bdb2-dc17a9149353. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.948 2 DEBUG nova.network.neutron [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:07 compute-1 nova_compute[230518]: 2025-10-02 12:34:07.965 2 DEBUG oslo_concurrency.lockutils [req-dcd8a741-6b0a-4db8-bd27-6156a15b064d req-ddd7836a-2471-4a2b-a3f6-6bba3b0949e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-418e3157-f0a7-42ec-812b-2a4a2ad00991" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:08.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:08.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:08 compute-1 nova_compute[230518]: 2025-10-02 12:34:08.727 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating config drive at /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config
Oct 02 12:34:08 compute-1 nova_compute[230518]: 2025-10-02 12:34:08.737 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fqho6z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:34:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:34:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:34:08 compute-1 ceph-mon[80926]: pgmap v1727: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 444 KiB/s rd, 3.6 MiB/s wr, 94 op/s
Oct 02 12:34:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2879726199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:08 compute-1 nova_compute[230518]: 2025-10-02 12:34:08.885 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_fqho6z" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:08 compute-1 nova_compute[230518]: 2025-10-02 12:34:08.914 2 DEBUG nova.storage.rbd_utils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:08 compute-1 nova_compute[230518]: 2025-10-02 12:34:08.918 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:09.067 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.000 2 DEBUG oslo_concurrency.processutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.001 2 INFO nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting local config drive /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config because it was imported into RBD.
Oct 02 12:34:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4090946625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:10 compute-1 kernel: tap21510dd4-b1: entered promiscuous mode
Oct 02 12:34:10 compute-1 NetworkManager[44960]: <info>  [1759408450.0440] manager: (tap21510dd4-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Oct 02 12:34:10 compute-1 ovn_controller[129257]: 2025-10-02T12:34:10Z|00357|binding|INFO|Claiming lport 21510dd4-b155-46ed-bdb2-dc17a9149353 for this chassis.
Oct 02 12:34:10 compute-1 ovn_controller[129257]: 2025-10-02T12:34:10Z|00358|binding|INFO|21510dd4-b155-46ed-bdb2-dc17a9149353: Claiming fa:16:3e:44:7a:ec 10.100.0.11
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.052 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.054 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:34:10 compute-1 ovn_controller[129257]: 2025-10-02T12:34:10Z|00359|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 ovn-installed in OVS
Oct 02 12:34:10 compute-1 ovn_controller[129257]: 2025-10-02T12:34:10Z|00360|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 up in Southbound
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12108672-a353-4604-83e1-a2aea4290b6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 systemd-machined[188247]: New machine qemu-42-instance-00000054.
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.101 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea04dd3-e87e-4146-ba21-2b4bc85eff6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 systemd[1]: Started Virtual Machine qemu-42-instance-00000054.
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.104 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6438a0a-0228-4095-b54a-344f70299e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 systemd-udevd[264141]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.129 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[23a29938-6912-4807-8310-cf40766c3bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 NetworkManager[44960]: <info>  [1759408450.1337] device (tap21510dd4-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:10 compute-1 NetworkManager[44960]: <info>  [1759408450.1350] device (tap21510dd4-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a03a64e6-39d4-4bc8-bfbf-47dc20769ac5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264144, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.168 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e156c51a-fc82-4627-9ceb-202698c6a49f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264150, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264150, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.169 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:10 compute-1 nova_compute[230518]: 2025-10-02 12:34:10.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.175 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:10.176 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:10.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:10.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:11 compute-1 ceph-mon[80926]: pgmap v1728: 305 pgs: 305 active+clean; 374 MiB data, 873 MiB used, 20 GiB / 21 GiB avail; 440 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.097 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.09674, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Started (Lifecycle Event)
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.122 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.126 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.096857, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.127 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Paused (Lifecycle Event)
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.141 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.144 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.163 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.830 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG oslo_concurrency.lockutils [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.831 2 DEBUG nova.compute.manager [req-86493620-dff8-4e51-8e6f-b27fe85f1060 req-626d6099-1454-46c5-8e22-ff933430c677 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Processing event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.832 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.835 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408451.8355336, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.836 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Resumed (Lifecycle Event)
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.838 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.840 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance spawned successfully.
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.841 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.861 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.865 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.873 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.873 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.874 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.874 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.875 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.875 2 DEBUG nova.virt.libvirt.driver [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.898 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.936 2 INFO nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 11.22 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:11 compute-1 nova_compute[230518]: 2025-10-02 12:34:11.937 2 DEBUG nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:12 compute-1 nova_compute[230518]: 2025-10-02 12:34:12.035 2 INFO nova.compute.manager [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 12.27 seconds to build instance.
Oct 02 12:34:12 compute-1 nova_compute[230518]: 2025-10-02 12:34:12.057 2 DEBUG oslo_concurrency.lockutils [None req-fb93e5b3-e81f-4b75-bfa7-0f0f3597d527 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:12.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:12.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:12 compute-1 nova_compute[230518]: 2025-10-02 12:34:12.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:12 compute-1 nova_compute[230518]: 2025-10-02 12:34:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:13 compute-1 ceph-mon[80926]: pgmap v1729: 305 pgs: 305 active+clean; 341 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.936 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.937 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.937 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.938 2 DEBUG oslo_concurrency.lockutils [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.938 2 DEBUG nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:13 compute-1 nova_compute[230518]: 2025-10-02 12:34:13.939 2 WARNING nova.compute.manager [req-3783e561-c635-43b6-b635-f8677445728c req-90e360a7-e392-4f56-b727-28ef69745a1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state None.
Oct 02 12:34:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:15 compute-1 ceph-mon[80926]: pgmap v1730: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 147 op/s
Oct 02 12:34:15 compute-1 ovn_controller[129257]: 2025-10-02T12:34:15Z|00361|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:34:15 compute-1 nova_compute[230518]: 2025-10-02 12:34:15.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-1 nova_compute[230518]: 2025-10-02 12:34:15.735 2 INFO nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Rebuilding instance
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.805634) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455805667, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2400, "num_deletes": 253, "total_data_size": 5666634, "memory_usage": 5747856, "flush_reason": "Manual Compaction"}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455823238, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 3718074, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39082, "largest_seqno": 41477, "table_properties": {"data_size": 3708324, "index_size": 6116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20779, "raw_average_key_size": 20, "raw_value_size": 3688661, "raw_average_value_size": 3670, "num_data_blocks": 266, "num_entries": 1005, "num_filter_entries": 1005, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408249, "oldest_key_time": 1759408249, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 17742 microseconds, and 6678 cpu microseconds.
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.823375) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 3718074 bytes OK
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.823417) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828014) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828039) EVENT_LOG_v1 {"time_micros": 1759408455828033, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.828057) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 5655917, prev total WAL file size 5655917, number of live WAL files 2.
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.829477) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(3630KB)], [75(9949KB)]
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455829502, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 13906052, "oldest_snapshot_seqno": -1}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6664 keys, 11949846 bytes, temperature: kUnknown
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455888435, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 11949846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11902638, "index_size": 29432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 170777, "raw_average_key_size": 25, "raw_value_size": 11780671, "raw_average_value_size": 1767, "num_data_blocks": 1177, "num_entries": 6664, "num_filter_entries": 6664, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.888666) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11949846 bytes
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.892410) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.6 rd, 202.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.7 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7188, records dropped: 524 output_compression: NoCompression
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.892426) EVENT_LOG_v1 {"time_micros": 1759408455892418, "job": 46, "event": "compaction_finished", "compaction_time_micros": 59020, "compaction_time_cpu_micros": 25072, "output_level": 6, "num_output_files": 1, "total_output_size": 11949846, "num_input_records": 7188, "num_output_records": 6664, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455893118, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408455894860, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.829414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:34:15.894980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:34:15 compute-1 sudo[264194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:34:15 compute-1 ovn_controller[129257]: 2025-10-02T12:34:15Z|00362|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:34:15 compute-1 sudo[264194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:15 compute-1 sudo[264194]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:15 compute-1 nova_compute[230518]: 2025-10-02 12:34:15.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:15 compute-1 sudo[264219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:34:15 compute-1 sudo[264219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:34:16 compute-1 sudo[264219]: pam_unix(sudo:session): session closed for user root
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.058 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.072 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.117 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.133 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.143 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.154 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.164 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:34:16 compute-1 nova_compute[230518]: 2025-10-02 12:34:16.167 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:34:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:16.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:16 compute-1 ceph-mon[80926]: pgmap v1731: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 125 op/s
Oct 02 12:34:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:34:17 compute-1 nova_compute[230518]: 2025-10-02 12:34:17.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:17 compute-1 nova_compute[230518]: 2025-10-02 12:34:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:18.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:18.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:18 compute-1 ceph-mon[80926]: pgmap v1732: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 173 op/s
Oct 02 12:34:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4284340140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:34:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:34:20 compute-1 podman[264245]: 2025-10-02 12:34:20.820603455 +0000 UTC m=+0.064198661 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct 02 12:34:20 compute-1 ceph-mon[80926]: pgmap v1733: 305 pgs: 305 active+clean; 328 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct 02 12:34:20 compute-1 podman[264244]: 2025-10-02 12:34:20.850858607 +0000 UTC m=+0.097588191 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:34:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:22.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:22 compute-1 nova_compute[230518]: 2025-10-02 12:34:22.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:22 compute-1 ceph-mon[80926]: pgmap v1734: 305 pgs: 305 active+clean; 274 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 179 op/s
Oct 02 12:34:22 compute-1 nova_compute[230518]: 2025-10-02 12:34:22.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/186885800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:23 compute-1 nova_compute[230518]: 2025-10-02 12:34:23.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:24 compute-1 ovn_controller[129257]: 2025-10-02T12:34:24Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:7a:ec 10.100.0.11
Oct 02 12:34:24 compute-1 ovn_controller[129257]: 2025-10-02T12:34:24Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:7a:ec 10.100.0.11
Oct 02 12:34:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:24 compute-1 ceph-mon[80926]: pgmap v1735: 305 pgs: 305 active+clean; 233 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 118 op/s
Oct 02 12:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:26 compute-1 nova_compute[230518]: 2025-10-02 12:34:26.209 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:34:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:26.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:27 compute-1 ceph-mon[80926]: pgmap v1736: 305 pgs: 305 active+clean; 201 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 390 KiB/s wr, 127 op/s
Oct 02 12:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3798561716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:27 compute-1 nova_compute[230518]: 2025-10-02 12:34:27.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-1 nova_compute[230518]: 2025-10-02 12:34:27.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:27 compute-1 nova_compute[230518]: 2025-10-02 12:34:27.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 kernel: tap21510dd4-b1 (unregistering): left promiscuous mode
Oct 02 12:34:28 compute-1 NetworkManager[44960]: <info>  [1759408468.5789] device (tap21510dd4-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:28 compute-1 ovn_controller[129257]: 2025-10-02T12:34:28Z|00363|binding|INFO|Releasing lport 21510dd4-b155-46ed-bdb2-dc17a9149353 from this chassis (sb_readonly=0)
Oct 02 12:34:28 compute-1 nova_compute[230518]: 2025-10-02 12:34:28.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 ovn_controller[129257]: 2025-10-02T12:34:28Z|00364|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 down in Southbound
Oct 02 12:34:28 compute-1 ovn_controller[129257]: 2025-10-02T12:34:28Z|00365|binding|INFO|Removing iface tap21510dd4-b1 ovn-installed in OVS
Oct 02 12:34:28 compute-1 nova_compute[230518]: 2025-10-02 12:34:28.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.615 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '4', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.616 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:34:28 compute-1 nova_compute[230518]: 2025-10-02 12:34:28.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.618 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:34:28 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 02 12:34:28 compute-1 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000054.scope: Consumed 13.477s CPU time.
Oct 02 12:34:28 compute-1 systemd-machined[188247]: Machine qemu-42-instance-00000054 terminated.
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.647 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefaa27-ff6c-4f03-a3db-ae1b34164888]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:28.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.687 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[03810d03-c3a3-4ed1-8fc4-07935f2fdd0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.690 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb049a7-599f-4c7b-880a-1951d50cf2e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:28.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.729 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb28950-5779-4243-bd41-487c653ccf8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a25bdc7a-982f-4cd5-abc8-33e8f8647604]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264298, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8cccc2-0880-4163-9429-cc65974c4a64]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264299, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264299, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.783 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:28 compute-1 nova_compute[230518]: 2025-10-02 12:34:28.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 nova_compute[230518]: 2025-10-02 12:34:28.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:28.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:29 compute-1 ovn_controller[129257]: 2025-10-02T12:34:29Z|00366|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.224 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance shutdown successfully after 13 seconds.
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.231 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.243 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.244 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-mem
ber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:15Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.244 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.245 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.245 2 DEBUG os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21510dd4-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:29 compute-1 ceph-mon[80926]: pgmap v1737: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 182 op/s
Oct 02 12:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/629050486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.280 2 INFO os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')
Oct 02 12:34:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:29 compute-1 nova_compute[230518]: 2025-10-02 12:34:29.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:30 compute-1 ceph-mon[80926]: pgmap v1738: 305 pgs: 305 active+clean; 202 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 02 12:34:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:30.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.381 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting instance files /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.382 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deletion of /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del complete
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.565 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.566 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating image(s)
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.597 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.629 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.658 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.663 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.734 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.735 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.737 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.737 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.773 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.777 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4231160067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.924 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:31 compute-1 nova_compute[230518]: 2025-10-02 12:34:31.925 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.035 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.102 2 DEBUG nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.103 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.103 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.104 2 DEBUG oslo_concurrency.lockutils [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.104 2 DEBUG nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.104 2 WARNING nova.compute.manager [req-846d2b66-c4be-45b9-ac3d-f6d91c2b3048 req-84527f8f-69b9-416c-af2b-5594546486e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.141 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.141 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.148 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.149 2 INFO nova.compute.claims [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.255 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.318 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] resizing rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.344 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.432 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.432 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Ensure instance console log exists: /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.433 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.435 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start _get_guest_xml network_info=[{"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.443 2 WARNING nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.450 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.450 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.454 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.454 2 DEBUG nova.virt.libvirt.host [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.455 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.455 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.456 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.457 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.virt.hardware [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.458 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.476 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.005999946s ======
Oct 02 12:34:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005999946s
Oct 02 12:34:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:32.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.796 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.804 2 DEBUG nova.compute.provider_tree [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.843 2 DEBUG nova.scheduler.client.report [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.910 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.910 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.938 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.966 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:32 compute-1 nova_compute[230518]: 2025-10-02 12:34:32.971 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:32 compute-1 ceph-mon[80926]: pgmap v1739: 305 pgs: 305 active+clean; 190 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.7 MiB/s wr, 148 op/s
Oct 02 12:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1778796044' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1785355022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2658987936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/605827056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1293371938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.009 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.009 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.040 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.059 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.155 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.158 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.158 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating image(s)
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.203 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.253 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.288 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.294 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.344 2 DEBUG nova.policy [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '34a9da53e0cc446593d0cea2f498c53e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:34:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4020740269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.392 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.393 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.394 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.395 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.431 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.437 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.475 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.479 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempe
st-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.480 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.481 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.484 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <uuid>418e3157-f0a7-42ec-812b-2a4a2ad00991</uuid>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <name>instance-00000054</name>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1986633563</nova:name>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:34:32</nova:creationTime>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <nova:port uuid="21510dd4-b155-46ed-bdb2-dc17a9149353">
Oct 02 12:34:33 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <system>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="serial">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="uuid">418e3157-f0a7-42ec-812b-2a4a2ad00991</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </system>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <os>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </os>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <features>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </features>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk">
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config">
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:44:7a:ec"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <target dev="tap21510dd4-b1"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/console.log" append="off"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <video>
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </video>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:34:33 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:34:33 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:34:33 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:34:33 compute-1 nova_compute[230518]: </domain>
Oct 02 12:34:33 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.486 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Preparing to wait for external event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.486 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.487 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.487 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.488 2 DEBUG nova.virt.libvirt.vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:31Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.488 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.489 2 DEBUG nova.network.os_vif_util [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.489 2 DEBUG os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.491 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.492 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21510dd4-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21510dd4-b1, col_values=(('external_ids', {'iface-id': '21510dd4-b155-46ed-bdb2-dc17a9149353', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:7a:ec', 'vm-uuid': '418e3157-f0a7-42ec-812b-2a4a2ad00991'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-1 NetworkManager[44960]: <info>  [1759408473.4998] manager: (tap21510dd4-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.508 2 INFO os_vif [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')
Oct 02 12:34:33 compute-1 podman[264676]: 2025-10-02 12:34:33.630359247 +0000 UTC m=+0.066918836 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:34:33 compute-1 podman[264675]: 2025-10-02 12:34:33.630334696 +0000 UTC m=+0.075405853 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.798 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.798 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.799 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:44:7a:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.799 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Using config drive
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.834 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.852 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:33 compute-1 nova_compute[230518]: 2025-10-02 12:34:33.883 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'keypairs' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4020740269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.636 2 DEBUG nova.compute.manager [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.637 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.638 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.638 2 DEBUG oslo_concurrency.lockutils [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.639 2 DEBUG nova.compute.manager [req-0b705de1-1c60-473e-a7b1-c0a9c594673d req-8ad5110b-9afc-4ed3-954c-59c0f48c56a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Processing event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:34.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.770 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Creating config drive at /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.780 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi742ghzt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.941 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi742ghzt" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.979 2 DEBUG nova.storage.rbd_utils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] rbd image 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:34 compute-1 nova_compute[230518]: 2025-10-02 12:34:34.985 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.028 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.112 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] resizing rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:34:35 compute-1 ceph-mon[80926]: pgmap v1740: 305 pgs: 305 active+clean; 169 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 148 op/s
Oct 02 12:34:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3308431859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.597 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Successfully created port: bace8310-2635-4b16-b54b-7b961bf7c42a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.756 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'migration_context' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.773 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.774 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Ensure instance console log exists: /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.775 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.776 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:35 compute-1 nova_compute[230518]: 2025-10-02 12:34:35.776 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.068 2 DEBUG oslo_concurrency.processutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config 418e3157-f0a7-42ec-812b-2a4a2ad00991_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.069 2 INFO nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting local config drive /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991/disk.config because it was imported into RBD.
Oct 02 12:34:36 compute-1 kernel: tap21510dd4-b1: entered promiscuous mode
Oct 02 12:34:36 compute-1 NetworkManager[44960]: <info>  [1759408476.1324] manager: (tap21510dd4-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/173)
Oct 02 12:34:36 compute-1 ovn_controller[129257]: 2025-10-02T12:34:36Z|00367|binding|INFO|Claiming lport 21510dd4-b155-46ed-bdb2-dc17a9149353 for this chassis.
Oct 02 12:34:36 compute-1 ovn_controller[129257]: 2025-10-02T12:34:36Z|00368|binding|INFO|21510dd4-b155-46ed-bdb2-dc17a9149353: Claiming fa:16:3e:44:7a:ec 10.100.0.11
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-1 ovn_controller[129257]: 2025-10-02T12:34:36Z|00369|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 ovn-installed in OVS
Oct 02 12:34:36 compute-1 ovn_controller[129257]: 2025-10-02T12:34:36Z|00370|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 up in Southbound
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.152 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.155 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.158 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:34:36 compute-1 systemd-udevd[264859]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:36 compute-1 systemd-machined[188247]: New machine qemu-43-instance-00000054.
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.178 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bff68d40-a01a-4d8a-9aa5-a20fe141f327]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 NetworkManager[44960]: <info>  [1759408476.1829] device (tap21510dd4-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:36 compute-1 NetworkManager[44960]: <info>  [1759408476.1835] device (tap21510dd4-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:36 compute-1 systemd[1]: Started Virtual Machine qemu-43-instance-00000054.
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.215 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fe335a14-a8ca-4318-8ec4-4d8f5b5ee4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.221 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[87949f7f-8f16-4fb5-b9b2-580389e5d909]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.248 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3567e327-f5c3-4600-9959-169ff82ad91c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.273 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3924ee01-e979-4614-a642-7d576bc8989d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264870, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.292 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d7dbcc-8d59-478d-b30e-fe15c4361bfc]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264873, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264873, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.293 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.297 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.297 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:36.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:36 compute-1 ceph-mon[80926]: pgmap v1741: 305 pgs: 305 active+clean; 247 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.2 MiB/s wr, 184 op/s
Oct 02 12:34:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3443077683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:36.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.924 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.925 2 DEBUG oslo_concurrency.lockutils [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.925 2 DEBUG nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:36 compute-1 nova_compute[230518]: 2025-10-02 12:34:36.925 2 WARNING nova.compute.manager [req-4ffbc810-eeda-41df-8195-39bf8adb552c req-499ffcba-60c9-4812-a294-e7804fcdb2c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.150 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 418e3157-f0a7-42ec-812b-2a4a2ad00991 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.150 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1500473, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.151 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Started (Lifecycle Event)
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.152 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.154 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.157 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance spawned successfully.
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.158 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.179 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.184 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.203 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.204 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.205 2 DEBUG nova.virt.libvirt.driver [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.211 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1508596, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.211 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Paused (Lifecycle Event)
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.272 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.275 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408477.1542885, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.275 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Resumed (Lifecycle Event)
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.337 2 DEBUG nova.compute.manager [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.342 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.350 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.396 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.458 2 DEBUG nova.objects.instance [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.575 2 DEBUG oslo_concurrency.lockutils [None req-2ed4f882-1a2a-412f-bc6e-355fd4480e7d 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3481265486' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:37 compute-1 nova_compute[230518]: 2025-10-02 12:34:37.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-1 nova_compute[230518]: 2025-10-02 12:34:38.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:38.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:38.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:38 compute-1 ceph-mon[80926]: pgmap v1742: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 MiB/s wr, 259 op/s
Oct 02 12:34:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3299077736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1600688275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.160 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Successfully updated port: bace8310-2635-4b16-b54b-7b961bf7c42a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquired lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.206 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.302 2 DEBUG nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.303 2 DEBUG oslo_concurrency.lockutils [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.304 2 DEBUG nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.304 2 WARNING nova.compute.manager [req-4002de06-8371-4a14-a143-84310beb6ec1 req-a4501520-dfad-426d-8d25-e41ce8727d38 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state active and task_state None.
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.423 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.718 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-changed-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.719 2 DEBUG nova.compute.manager [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Refreshing instance network info cache due to event network-changed-bace8310-2635-4b16-b54b-7b961bf7c42a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:34:39 compute-1 nova_compute[230518]: 2025-10-02 12:34:39.719 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1102977573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:40.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:40.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:41 compute-1 ceph-mon[80926]: pgmap v1743: 305 pgs: 305 active+clean; 381 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.8 MiB/s wr, 200 op/s
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.123 2 DEBUG nova.network.neutron [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.176 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Releasing lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.177 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance network_info: |[{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.177 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.178 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Refreshing network info cache for port bace8310-2635-4b16-b54b-7b961bf7c42a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.181 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start _get_guest_xml network_info=[{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.184 2 WARNING nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.189 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.190 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.193 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.193 2 DEBUG nova.virt.libvirt.host [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.195 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.195 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.196 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.197 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.198 2 DEBUG nova.virt.hardware [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.201 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.515 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.517 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.517 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.518 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.518 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.520 2 INFO nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Terminating instance
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.521 2 DEBUG nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:34:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1368115747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.697 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:42.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.723 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.727 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:42 compute-1 kernel: tap21510dd4-b1 (unregistering): left promiscuous mode
Oct 02 12:34:42 compute-1 NetworkManager[44960]: <info>  [1759408482.7486] device (tap21510dd4-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:42 compute-1 ovn_controller[129257]: 2025-10-02T12:34:42Z|00371|binding|INFO|Releasing lport 21510dd4-b155-46ed-bdb2-dc17a9149353 from this chassis (sb_readonly=0)
Oct 02 12:34:42 compute-1 ovn_controller[129257]: 2025-10-02T12:34:42Z|00372|binding|INFO|Setting lport 21510dd4-b155-46ed-bdb2-dc17a9149353 down in Southbound
Oct 02 12:34:42 compute-1 ovn_controller[129257]: 2025-10-02T12:34:42Z|00373|binding|INFO|Removing iface tap21510dd4-b1 ovn-installed in OVS
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.776 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:7a:ec 10.100.0.11'], port_security=['fa:16:3e:44:7a:ec 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '418e3157-f0a7-42ec-812b-2a4a2ad00991', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '08293210-e9a4-4bb5-8fa1-174de1dd6444', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=21510dd4-b155-46ed-bdb2-dc17a9149353) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.777 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 21510dd4-b155-46ed-bdb2-dc17a9149353 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.779 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.800 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f077f6ea-15fc-4bbc-b3d7-33019f5ec09f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000054.scope: Deactivated successfully.
Oct 02 12:34:42 compute-1 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000054.scope: Consumed 6.314s CPU time.
Oct 02 12:34:42 compute-1 systemd-machined[188247]: Machine qemu-43-instance-00000054 terminated.
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.826 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d3a08a-62be-4e31-b30a-fdf1fa1eafb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.831 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4627f6c4-6546-4bd9-9f80-8782e48cbf44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.860 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f931e8c-77e1-4fa3-9037-5155f7dd4d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.876 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[78508f67-b00b-4266-aafb-fc2079b96f5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626322, 'reachable_time': 30206, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264988, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.897 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8ac19f-cdc6-4548-9edc-147e9e02edd5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626333, 'tstamp': 626333}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264989, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf011efa4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626336, 'tstamp': 626336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264989, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.899 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:42.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.952 2 INFO nova.virt.libvirt.driver [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Instance destroyed successfully.
Oct 02 12:34:42 compute-1 nova_compute[230518]: 2025-10-02 12:34:42.952 2 DEBUG nova.objects.instance [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 418e3157-f0a7-42ec-812b-2a4a2ad00991 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.003 2 DEBUG nova.virt.libvirt.vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:33:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-456092929',display_name='tempest-ServerActionsTestJSON-server-1986633563',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-456092929',id=84,image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:34:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-y84teqni',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:37Z,user_data=None,user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=418e3157-f0a7-42ec-812b-2a4a2ad00991,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.003 2 DEBUG nova.network.os_vif_util [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "21510dd4-b155-46ed-bdb2-dc17a9149353", "address": "fa:16:3e:44:7a:ec", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21510dd4-b1", "ovs_interfaceid": "21510dd4-b155-46ed-bdb2-dc17a9149353", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.004 2 DEBUG nova.network.os_vif_util [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.004 2 DEBUG os_vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.005 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21510dd4-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.010 2 INFO os_vif [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:7a:ec,bridge_name='br-int',has_traffic_filtering=True,id=21510dd4-b155-46ed-bdb2-dc17a9149353,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21510dd4-b1')
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.196 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.197 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:33Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.198 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.198 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.200 2 DEBUG nova.objects.instance [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.236 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <uuid>51a18f7c-ed1b-4500-9d74-fb924f62b6d9</uuid>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <name>instance-00000059</name>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:name>tempest-tempest.common.compute-instance-1446553806-2</nova:name>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:34:42</nova:creationTime>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:user uuid="34a9da53e0cc446593d0cea2f498c53e">tempest-MultipleCreateTestJSON-1074010337-project-member</nova:user>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:project uuid="ed58e2bfccb04353b29ae652cfed3546">tempest-MultipleCreateTestJSON-1074010337</nova:project>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <nova:port uuid="bace8310-2635-4b16-b54b-7b961bf7c42a">
Oct 02 12:34:43 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <system>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="serial">51a18f7c-ed1b-4500-9d74-fb924f62b6d9</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="uuid">51a18f7c-ed1b-4500-9d74-fb924f62b6d9</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </system>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <os>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </os>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <features>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </features>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk">
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config">
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:34:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:9d:e5:bf"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <target dev="tapbace8310-26"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/console.log" append="off"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <video>
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </video>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:34:43 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:34:43 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:34:43 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:34:43 compute-1 nova_compute[230518]: </domain>
Oct 02 12:34:43 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Preparing to wait for external event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.237 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.238 2 DEBUG nova.virt.libvirt.vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-Mul
tipleCreateTestJSON-1074010337-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:34:33Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.238 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG nova.network.os_vif_util [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbace8310-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbace8310-26, col_values=(('external_ids', {'iface-id': 'bace8310-2635-4b16-b54b-7b961bf7c42a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e5:bf', 'vm-uuid': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:43 compute-1 NetworkManager[44960]: <info>  [1759408483.2476] manager: (tapbace8310-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.252 2 INFO os_vif [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26')
Oct 02 12:34:43 compute-1 ceph-mon[80926]: pgmap v1744: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 11 MiB/s wr, 332 op/s
Oct 02 12:34:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1368115747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.526 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.526 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.527 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] No VIF found with MAC fa:16:3e:9d:e5:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.527 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Using config drive
Oct 02 12:34:43 compute-1 nova_compute[230518]: 2025-10-02 12:34:43.624 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.150 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Creating config drive at /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.162 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwgj0p1f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.325 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfwgj0p1f" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.367 2 DEBUG nova.storage.rbd_utils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] rbd image 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.373 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.417 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updated VIF entry in instance network info cache for port bace8310-2635-4b16-b54b-7b961bf7c42a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.418 2 DEBUG nova.network.neutron [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [{"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/810285189' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:34:44 compute-1 ceph-mon[80926]: pgmap v1745: 305 pgs: 305 active+clean; 401 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.1 MiB/s wr, 359 op/s
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.500 2 DEBUG oslo_concurrency.lockutils [req-7a5c5209-759d-41cd-ba1f-e6a1ab9d5b83 req-7a32f489-5be5-4bef-9d3a-e0c5b50de89a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-51a18f7c-ed1b-4500-9d74-fb924f62b6d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.620 2 DEBUG oslo_concurrency.processutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config 51a18f7c-ed1b-4500-9d74-fb924f62b6d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.620 2 INFO nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deleting local config drive /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9/disk.config because it was imported into RBD.
Oct 02 12:34:44 compute-1 NetworkManager[44960]: <info>  [1759408484.6659] manager: (tapbace8310-26): new Tun device (/org/freedesktop/NetworkManager/Devices/175)
Oct 02 12:34:44 compute-1 kernel: tapbace8310-26: entered promiscuous mode
Oct 02 12:34:44 compute-1 systemd-udevd[264961]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:34:44 compute-1 ovn_controller[129257]: 2025-10-02T12:34:44Z|00374|binding|INFO|Claiming lport bace8310-2635-4b16-b54b-7b961bf7c42a for this chassis.
Oct 02 12:34:44 compute-1 ovn_controller[129257]: 2025-10-02T12:34:44Z|00375|binding|INFO|bace8310-2635-4b16-b54b-7b961bf7c42a: Claiming fa:16:3e:9d:e5:bf 10.100.0.6
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:44 compute-1 NetworkManager[44960]: <info>  [1759408484.6792] device (tapbace8310-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:34:44 compute-1 NetworkManager[44960]: <info>  [1759408484.6826] device (tapbace8310-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:34:44 compute-1 ovn_controller[129257]: 2025-10-02T12:34:44Z|00376|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a ovn-installed in OVS
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:44 compute-1 nova_compute[230518]: 2025-10-02 12:34:44.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:44 compute-1 systemd-machined[188247]: New machine qemu-44-instance-00000059.
Oct 02 12:34:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:44.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.711 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e5:bf 10.100.0.6'], port_security=['fa:16:3e:9d:e5:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bace8310-2635-4b16-b54b-7b961bf7c42a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:44 compute-1 ovn_controller[129257]: 2025-10-02T12:34:44Z|00377|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a up in Southbound
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.712 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bace8310-2635-4b16-b54b-7b961bf7c42a in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 bound to our chassis
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.713 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:34:44 compute-1 systemd[1]: Started Virtual Machine qemu-44-instance-00000059.
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.726 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[69e41897-cfea-4f27-bf79-ab5d990d3530]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.727 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap885ece2c-b1 in ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:34:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:34:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:44.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.729 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap885ece2c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.729 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8542feff-5cfe-42b5-af95-14b634f87067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.730 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2daf04-20c4-4699-a98f-089e8d3c5254]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.741 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[76c8b135-08e1-4fcd-8ba1-183be84a96db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e54524c7-ae34-44e2-b93e-a33e00093281]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.786 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c34762-00d6-461f-9000-7143e54d9364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 NetworkManager[44960]: <info>  [1759408484.7934] manager: (tap885ece2c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/176)
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7df47bec-a8c7-4977-8b0b-7f679a05ed81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.827 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc6736e-517c-4905-af1b-db40d5aed0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.829 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[de99e60d-6584-4954-afa2-6109166aefb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 NetworkManager[44960]: <info>  [1759408484.8560] device (tap885ece2c-b0): carrier: link connected
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.861 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5516d72a-7247-45df-b580-bc9370e95806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.882 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da2e829a-86dd-4417-8d88-c91690f6181f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634841, 'reachable_time': 32773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265125, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.899 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84d31e4b-6a66-437e-8498-cd156fe0f6d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:5893'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634841, 'tstamp': 634841}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265126, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.925 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9b9bbe8b-b256-44b1-9628-1c0ff7b857fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap885ece2c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:58:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634841, 'reachable_time': 32773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265127, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:44.968 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3db6baaf-e7e2-4913-beb0-075eb430f4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.042 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7c9420-7809-47e0-b7d5-21c3fce496e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.044 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap885ece2c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:45 compute-1 NetworkManager[44960]: <info>  [1759408485.0468] manager: (tap885ece2c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Oct 02 12:34:45 compute-1 kernel: tap885ece2c-b0: entered promiscuous mode
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.053 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap885ece2c-b0, col_values=(('external_ids', {'iface-id': '24355553-27f6-4ebd-99c0-4f861ce0339d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:45 compute-1 ovn_controller[129257]: 2025-10-02T12:34:45Z|00378|binding|INFO|Releasing lport 24355553-27f6-4ebd-99c0-4f861ce0339d from this chassis (sb_readonly=0)
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.070 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92494d14-e85e-44eb-b42e-33e21e0919c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.071 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.pid.haproxy
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:34:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:45.072 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'env', 'PROCESS_TAG=haproxy-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/885ece2c-b1ca-4d5a-9ddf-20d1baf155c7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:34:45 compute-1 podman[265159]: 2025-10-02 12:34:45.482561002 +0000 UTC m=+0.059661159 container create 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:34:45 compute-1 systemd[1]: Started libpod-conmon-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope.
Oct 02 12:34:45 compute-1 podman[265159]: 2025-10-02 12:34:45.453852739 +0000 UTC m=+0.030952926 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:34:45 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883c67a361fe0247ad395efa4ec52619f249067861dcbb2d49f5ac611aaf3b3d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:34:45 compute-1 podman[265159]: 2025-10-02 12:34:45.594493133 +0000 UTC m=+0.171593370 container init 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:34:45 compute-1 podman[265159]: 2025-10-02 12:34:45.600340547 +0000 UTC m=+0.177440704 container start 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 12:34:45 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : New worker (265221) forked
Oct 02 12:34:45 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : Loading success.
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.649 2 INFO nova.virt.libvirt.driver [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deleting instance files /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.650 2 INFO nova.virt.libvirt.driver [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deletion of /var/lib/nova/instances/418e3157-f0a7-42ec-812b-2a4a2ad00991_del complete
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.701 2 INFO nova.compute.manager [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 3.18 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.702 2 DEBUG oslo.service.loopingcall [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.703 2 DEBUG nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:45 compute-1 nova_compute[230518]: 2025-10-02 12:34:45.703 2 DEBUG nova.network.neutron [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.090 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.0905228, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.091 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Started (Lifecycle Event)
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.101 2 DEBUG nova.compute.manager [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.101 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.102 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.102 2 DEBUG oslo_concurrency.lockutils [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.103 2 DEBUG nova.compute.manager [req-3bc6a35e-9458-441b-af0f-7cff06a28659 req-d0471a5d-f8e2-4a30-8bd8-c048124c1383 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Processing event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.103 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.107 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.113 2 INFO nova.virt.libvirt.driver [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance spawned successfully.
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.113 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.126 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.130 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.236 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.236 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.0906394, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.237 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Paused (Lifecycle Event)
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.242 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.242 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.243 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.244 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.244 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.245 2 DEBUG nova.virt.libvirt.driver [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.315 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.319 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408486.1075497, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.319 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Resumed (Lifecycle Event)
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.376 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.381 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.444 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.457 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 13.30 seconds to spawn the instance on the hypervisor.
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.458 2 DEBUG nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.590 2 INFO nova.compute.manager [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 14.48 seconds to build instance.
Oct 02 12:34:46 compute-1 ceph-mon[80926]: pgmap v1746: 305 pgs: 305 active+clean; 376 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.9 MiB/s wr, 415 op/s
Oct 02 12:34:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:46.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.707 2 DEBUG oslo_concurrency.lockutils [None req-e6abbdce-03b2-4ea4-ba5c-e8a30c88fb23 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:46.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.940 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.941 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.942 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.942 2 DEBUG oslo_concurrency.lockutils [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.943 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:46 compute-1 nova_compute[230518]: 2025-10-02 12:34:46.944 2 DEBUG nova.compute.manager [req-b432a614-3577-440b-bfb7-36a4e21a8dc7 req-8b6bafdf-97f0-408d-9ecc-57833e6c9d90 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-unplugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.074 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.137 2 DEBUG nova.network.neutron [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.170 2 INFO nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Took 1.47 seconds to deallocate network for instance.
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.222 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.223 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.229 2 DEBUG nova.compute.manager [req-cac75106-6c89-4be0-ad61-54b55e7ac879 req-bdd8c23c-4270-4714-b345-e481ae837f25 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-deleted-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.297 2 DEBUG oslo_concurrency.processutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2709834565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.696 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2943096415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.744 2 DEBUG oslo_concurrency.processutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.751 2 DEBUG nova.compute.provider_tree [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.769 2 DEBUG nova.scheduler.client.report [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.780 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.780 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.783 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.784 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.796 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2709834565' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.872 2 INFO nova.scheduler.client.report [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance 418e3157-f0a7-42ec-812b-2a4a2ad00991
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.954 2 DEBUG oslo_concurrency.lockutils [None req-6f5d1e8b-89b1-4995-b4a0-51c483836b52 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.974 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4192MB free_disk=20.828514099121094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.976 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:47 compute-1 nova_compute[230518]: 2025-10-02 12:34:47.976 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.054 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3e490470-5e33-4140-95c1-367805364c73 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.054 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.055 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.055 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.213 2 DEBUG nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.214 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG oslo_concurrency.lockutils [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.215 2 DEBUG nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.216 2 WARNING nova.compute.manager [req-bb097057-a10f-49ec-831b-a8d17392ddba req-d00181c3-db0c-4d1c-b64d-4fd7ae298b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received unexpected event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with vm_state active and task_state None.
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981752576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.564 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.568 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.585 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.628 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:48.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:48.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.930 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.930 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.931 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.931 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.932 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.934 2 INFO nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Terminating instance
Oct 02 12:34:48 compute-1 nova_compute[230518]: 2025-10-02 12:34:48.936 2 DEBUG nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.087 2 DEBUG nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.088 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.089 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.089 2 DEBUG oslo_concurrency.lockutils [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "418e3157-f0a7-42ec-812b-2a4a2ad00991-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.090 2 DEBUG nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] No waiting events found dispatching network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.090 2 WARNING nova.compute.manager [req-4800f1c2-1aa1-4db3-b27b-430c7bb0697b req-686680dc-e62d-46a7-9283-c08ed2da522c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Received unexpected event network-vif-plugged-21510dd4-b155-46ed-bdb2-dc17a9149353 for instance with vm_state deleted and task_state None.
Oct 02 12:34:49 compute-1 ceph-mon[80926]: pgmap v1747: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 5.6 MiB/s wr, 476 op/s
Oct 02 12:34:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2943096415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1981752576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:49 compute-1 kernel: tapbace8310-26 (unregistering): left promiscuous mode
Oct 02 12:34:49 compute-1 NetworkManager[44960]: <info>  [1759408489.1349] device (tapbace8310-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:49 compute-1 ovn_controller[129257]: 2025-10-02T12:34:49Z|00379|binding|INFO|Releasing lport bace8310-2635-4b16-b54b-7b961bf7c42a from this chassis (sb_readonly=0)
Oct 02 12:34:49 compute-1 ovn_controller[129257]: 2025-10-02T12:34:49Z|00380|binding|INFO|Setting lport bace8310-2635-4b16-b54b-7b961bf7c42a down in Southbound
Oct 02 12:34:49 compute-1 ovn_controller[129257]: 2025-10-02T12:34:49Z|00381|binding|INFO|Removing iface tapbace8310-26 ovn-installed in OVS
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.219 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e5:bf 10.100.0.6'], port_security=['fa:16:3e:9d:e5:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '51a18f7c-ed1b-4500-9d74-fb924f62b6d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ed58e2bfccb04353b29ae652cfed3546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8afb1137-daba-41cb-976b-5cc3e880408c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1c0c736-25fb-4965-98a7-04a85ae45126, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bace8310-2635-4b16-b54b-7b961bf7c42a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:34:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.222 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bace8310-2635-4b16-b54b-7b961bf7c42a in datapath 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 unbound from our chassis
Oct 02 12:34:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.225 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:34:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b035525b-dd78-4cca-866a-8d457ccb117b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:49.227 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 namespace which is not needed anymore
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:49 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct 02 12:34:49 compute-1 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000059.scope: Consumed 4.231s CPU time.
Oct 02 12:34:49 compute-1 systemd-machined[188247]: Machine qemu-44-instance-00000059 terminated.
Oct 02 12:34:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.372 2 INFO nova.virt.libvirt.driver [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Instance destroyed successfully.
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.373 2 DEBUG nova.objects.instance [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lazy-loading 'resources' on Instance uuid 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.393 2 DEBUG nova.virt.libvirt.vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:34:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1446553806',display_name='tempest-tempest.common.compute-instance-1446553806-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1446553806-2',id=89,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-10-02T12:34:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ed58e2bfccb04353b29ae652cfed3546',ramdisk_id='',reservation_id='r-onpz3ikv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1074010337',owner_user_name='tempest-MultipleCreateTestJSON-1074010337-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:46Z,user_data=None,user_id='34a9da53e0cc446593d0cea2f498c53e',uuid=51a18f7c-ed1b-4500-9d74-fb924f62b6d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.394 2 DEBUG nova.network.os_vif_util [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converting VIF {"id": "bace8310-2635-4b16-b54b-7b961bf7c42a", "address": "fa:16:3e:9d:e5:bf", "network": {"id": "885ece2c-b1ca-4d5a-9ddf-20d1baf155c7", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-1099079828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ed58e2bfccb04353b29ae652cfed3546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbace8310-26", "ovs_interfaceid": "bace8310-2635-4b16-b54b-7b961bf7c42a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.395 2 DEBUG nova.network.os_vif_util [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.395 2 DEBUG os_vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbace8310-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:34:49 compute-1 nova_compute[230518]: 2025-10-02 12:34:49.402 2 INFO os_vif [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e5:bf,bridge_name='br-int',has_traffic_filtering=True,id=bace8310-2635-4b16-b54b-7b961bf7c42a,network=Network(885ece2c-b1ca-4d5a-9ddf-20d1baf155c7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbace8310-26')
Oct 02 12:34:49 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : haproxy version is 2.8.14-c23fe91
Oct 02 12:34:49 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [NOTICE]   (265216) : path to executable is /usr/sbin/haproxy
Oct 02 12:34:49 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [WARNING]  (265216) : Exiting Master process...
Oct 02 12:34:49 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [ALERT]    (265216) : Current worker (265221) exited with code 143 (Terminated)
Oct 02 12:34:49 compute-1 neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7[265195]: [WARNING]  (265216) : All workers exited. Exiting... (0)
Oct 02 12:34:49 compute-1 systemd[1]: libpod-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope: Deactivated successfully.
Oct 02 12:34:49 compute-1 podman[265323]: 2025-10-02 12:34:49.446250884 +0000 UTC m=+0.094588247 container died 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6-userdata-shm.mount: Deactivated successfully.
Oct 02 12:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-883c67a361fe0247ad395efa4ec52619f249067861dcbb2d49f5ac611aaf3b3d-merged.mount: Deactivated successfully.
Oct 02 12:34:49 compute-1 podman[265323]: 2025-10-02 12:34:49.829258015 +0000 UTC m=+0.477595388 container cleanup 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:34:49 compute-1 systemd[1]: libpod-conmon-534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6.scope: Deactivated successfully.
Oct 02 12:34:50 compute-1 podman[265383]: 2025-10-02 12:34:50.048554645 +0000 UTC m=+0.195951397 container remove 534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.054 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[23bf6b5a-bc0f-4832-b40a-f460b3efe918]: (4, ('Thu Oct  2 12:34:49 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6)\n534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6\nThu Oct  2 12:34:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 (534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6)\n534aac96cc328c9aacca5e6df3a1e2ef958b02b9d51bbec77b3e9273b64ba5c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.056 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04c77fad-282e-49e7-ace1-4210f47580f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.058 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap885ece2c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:50 compute-1 kernel: tap885ece2c-b0: left promiscuous mode
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.085 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d8c6ff-9beb-4a62-a4f2-20e54305a720]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.115 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2743a5c8-7cbd-4d24-a5ba-e328729be266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2315ebd1-1814-4694-847c-e92113470ba5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.131 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a3baa375-4a93-4d51-a350-54b7ee3d674c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634834, 'reachable_time': 31859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265398, 'error': None, 'target': 'ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 systemd[1]: run-netns-ovnmeta\x2d885ece2c\x2db1ca\x2d4d5a\x2d9ddf\x2d20d1baf155c7.mount: Deactivated successfully.
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.133 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-885ece2c-b1ca-4d5a-9ddf-20d1baf155c7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:34:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:34:50.133 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6325e9-4fe3-4c3a-a947-85946185905f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.287 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.288 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-unplugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.289 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG oslo_concurrency.lockutils [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.290 2 DEBUG nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] No waiting events found dispatching network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:34:50 compute-1 nova_compute[230518]: 2025-10-02 12:34:50.290 2 WARNING nova.compute.manager [req-c12e5c28-1155-442a-abc0-36fdae4baf52 req-bdf4ed84-6b30-4959-ae50-2ec5b7df4342 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received unexpected event network-vif-plugged-bace8310-2635-4b16-b54b-7b961bf7c42a for instance with vm_state active and task_state deleting.
Oct 02 12:34:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:50.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:50.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:51 compute-1 ceph-mon[80926]: pgmap v1748: 305 pgs: 305 active+clean; 355 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 905 KiB/s wr, 380 op/s
Oct 02 12:34:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1478600791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3763843049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:51 compute-1 podman[265401]: 2025-10-02 12:34:51.814459967 +0000 UTC m=+0.061930719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:34:51 compute-1 podman[265400]: 2025-10-02 12:34:51.878202923 +0000 UTC m=+0.121954148 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:34:52 compute-1 nova_compute[230518]: 2025-10-02 12:34:52.624 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:52 compute-1 nova_compute[230518]: 2025-10-02 12:34:52.625 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:52 compute-1 nova_compute[230518]: 2025-10-02 12:34:52.626 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:52.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:52 compute-1 nova_compute[230518]: 2025-10-02 12:34:52.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:53 compute-1 ceph-mon[80926]: pgmap v1749: 305 pgs: 305 active+clean; 358 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 1.3 MiB/s wr, 457 op/s
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.754 2 INFO nova.virt.libvirt.driver [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deleting instance files /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_del
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.755 2 INFO nova.virt.libvirt.driver [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deletion of /var/lib/nova/instances/51a18f7c-ed1b-4500-9d74-fb924f62b6d9_del complete
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.888 2 INFO nova.compute.manager [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 4.95 seconds to destroy the instance on the hypervisor.
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.889 2 DEBUG oslo.service.loopingcall [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.890 2 DEBUG nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:34:53 compute-1 nova_compute[230518]: 2025-10-02 12:34:53.891 2 DEBUG nova.network.neutron [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.109 2 DEBUG nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.248 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.249 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.364 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.382 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.382 2 INFO nova.compute.claims [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.383 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.450 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.644 2 INFO nova.compute.resource_tracker [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating resource usage from migration cc0c0504-9cb4-4e3c-94ee-f1413511b3ed
Oct 02 12:34:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:54.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.718 2 DEBUG nova.network.neutron [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.726 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:54.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.852 2 DEBUG nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Received event network-vif-deleted-bace8310-2635-4b16-b54b-7b961bf7c42a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.853 2 INFO nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Neutron deleted interface bace8310-2635-4b16-b54b-7b961bf7c42a; detaching it from the instance and deleting it from the info cache
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.854 2 DEBUG nova.network.neutron [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.858 2 INFO nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Took 0.97 seconds to deallocate network for instance.
Oct 02 12:34:54 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.956 2 DEBUG nova.compute.manager [req-c8ab32fc-7a49-4bee-83eb-35fd8a5468a0 req-1cdce109-754a-40ab-b254-4537a6c56dc1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Detach interface failed, port_id=bace8310-2635-4b16-b54b-7b961bf7c42a, reason: Instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:54.999 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320649666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.265 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.274 2 DEBUG nova.compute.provider_tree [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:55 compute-1 ceph-mon[80926]: pgmap v1750: 305 pgs: 305 active+clean; 324 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 1.0 MiB/s wr, 377 op/s
Oct 02 12:34:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/143947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.448 2 DEBUG nova.scheduler.client.report [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.470 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.471 2 INFO nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Migrating
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.477 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.518 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.519 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.519 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:34:55 compute-1 nova_compute[230518]: 2025-10-02 12:34:55.578 2 DEBUG oslo_concurrency.processutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:34:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:34:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2693978985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.057 2 DEBUG oslo_concurrency.processutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.065 2 DEBUG nova.compute.provider_tree [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.084 2 DEBUG nova.scheduler.client.report [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.110 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.146 2 INFO nova.scheduler.client.report [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Deleted allocations for instance 51a18f7c-ed1b-4500-9d74-fb924f62b6d9
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.217 2 DEBUG oslo_concurrency.lockutils [None req-9cca26f8-a249-4c3d-a5ab-3798b450bc87 34a9da53e0cc446593d0cea2f498c53e ed58e2bfccb04353b29ae652cfed3546 - - default default] Lock "51a18f7c-ed1b-4500-9d74-fb924f62b6d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:34:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:56.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:56.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.839 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.857 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:34:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1320649666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-1 ceph-mon[80926]: pgmap v1751: 305 pgs: 305 active+clean; 315 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.6 MiB/s wr, 350 op/s
Oct 02 12:34:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/43636201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2693978985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3757785379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:34:56 compute-1 nova_compute[230518]: 2025-10-02 12:34:56.993 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:34:57 compute-1 nova_compute[230518]: 2025-10-02 12:34:57.000 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:34:57 compute-1 nova_compute[230518]: 2025-10-02 12:34:57.949 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408482.9472244, 418e3157-f0a7-42ec-812b-2a4a2ad00991 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:34:57 compute-1 nova_compute[230518]: 2025-10-02 12:34:57.949 2 INFO nova.compute.manager [-] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] VM Stopped (Lifecycle Event)
Oct 02 12:34:57 compute-1 nova_compute[230518]: 2025-10-02 12:34:57.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:57 compute-1 nova_compute[230518]: 2025-10-02 12:34:57.971 2 DEBUG nova.compute.manager [None req-074743f8-fd30-4ac1-a1e3-1d917cc03dc0 - - - - - -] [instance: 418e3157-f0a7-42ec-812b-2a4a2ad00991] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:34:58 compute-1 nova_compute[230518]: 2025-10-02 12:34:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:34:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:34:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:34:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:34:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:34:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:34:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:34:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:34:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:34:59 compute-1 nova_compute[230518]: 2025-10-02 12:34:59.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:34:59 compute-1 ceph-mon[80926]: pgmap v1752: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.020 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance shutdown successfully after 3 seconds.
Oct 02 12:35:00 compute-1 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct 02 12:35:00 compute-1 NetworkManager[44960]: <info>  [1759408500.1983] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:00 compute-1 ovn_controller[129257]: 2025-10-02T12:35:00Z|00382|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct 02 12:35:00 compute-1 ovn_controller[129257]: 2025-10-02T12:35:00Z|00383|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 ovn_controller[129257]: 2025-10-02T12:35:00Z|00384|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.214 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '10', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.217 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.218 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b08fa33-0855-4860-8372-55f99d23c3b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.218 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 12:35:00 compute-1 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000004d.scope: Consumed 18.005s CPU time.
Oct 02 12:35:00 compute-1 systemd-machined[188247]: Machine qemu-41-instance-0000004d terminated.
Oct 02 12:35:00 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:00 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [NOTICE]   (263503) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:00 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [WARNING]  (263503) : Exiting Master process...
Oct 02 12:35:00 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [ALERT]    (263503) : Current worker (263505) exited with code 143 (Terminated)
Oct 02 12:35:00 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[263499]: [WARNING]  (263503) : All workers exited. Exiting... (0)
Oct 02 12:35:00 compute-1 systemd[1]: libpod-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope: Deactivated successfully.
Oct 02 12:35:00 compute-1 podman[265515]: 2025-10-02 12:35:00.390631305 +0000 UTC m=+0.083483218 container died 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.453 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.454 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:34:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.455 2 DEBUG os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.463 2 INFO os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.466 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.466 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-ed56eda4117edd4a73c69ba3e2c0e2d327a36dd0522d2b795d433383050ab310-merged.mount: Deactivated successfully.
Oct 02 12:35:00 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:00 compute-1 podman[265515]: 2025-10-02 12:35:00.645109462 +0000 UTC m=+0.337961375 container cleanup 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.653 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Port a3bd0009-d256-4937-bdad-606abfd076e0 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Oct 02 12:35:00 compute-1 podman[265554]: 2025-10-02 12:35:00.711535912 +0000 UTC m=+0.044401468 container remove 9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.717 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[621cb94f-6149-4e20-a3e6-8f513f5e3709]: (4, ('Thu Oct  2 12:35:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515)\n9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515\nThu Oct  2 12:35:00 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515)\n9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.719 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0c9c0d-ee5d-4a0d-a1a0-08713d2eba3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.721 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:00.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.731 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df910375-45fa-42f7-9a90-bed845a63ad5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:00 compute-1 systemd[1]: libpod-conmon-9ad2cacdd271349a1f5ecf03d9126c3a66b63465d3994562a79af85ca8178515.scope: Deactivated successfully.
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e94f5cd5-e3b4-43b1-a1fe-90276c855d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.753 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1d8ca8-35b3-4bd4-8279-dfd1a0ff17ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.769 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d96460fe-a496-48c8-b33f-c7c6ab1045a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626314, 'reachable_time': 29344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265569, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.772 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:00.772 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a34c99-1cbf-4ad7-8b0e-4ae0253ca4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.844 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.845 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.845 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.911 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.912 2 DEBUG oslo_concurrency.lockutils [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.912 2 DEBUG nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:00 compute-1 nova_compute[230518]: 2025-10-02 12:35:00.912 2 WARNING nova.compute.manager [req-6fd3735c-9043-434e-9e43-76a66c4aff50 req-c69b6a69-b9ee-4ade-9d34-152da8dc5fd5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:35:00 compute-1 ceph-mon[80926]: pgmap v1753: 305 pgs: 305 active+clean; 323 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.8 MiB/s wr, 256 op/s
Oct 02 12:35:01 compute-1 nova_compute[230518]: 2025-10-02 12:35:01.490 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:01 compute-1 nova_compute[230518]: 2025-10-02 12:35:01.490 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:01 compute-1 nova_compute[230518]: 2025-10-02 12:35:01.491 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:02.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:02 compute-1 nova_compute[230518]: 2025-10-02 12:35:02.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.091 2 DEBUG nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.092 2 DEBUG oslo_concurrency.lockutils [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.093 2 DEBUG nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:03 compute-1 nova_compute[230518]: 2025-10-02 12:35:03.094 2 WARNING nova.compute.manager [req-3b226728-c636-490a-a068-763916f696b3 req-b51416fc-2a0a-41ef-a0db-08c469f50fda 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:35:03 compute-1 ceph-mon[80926]: pgmap v1754: 305 pgs: 305 active+clean; 345 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 303 op/s
Oct 02 12:35:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3510125751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1318539499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:03 compute-1 podman[265571]: 2025-10-02 12:35:03.837887617 +0000 UTC m=+0.086622796 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:35:03 compute-1 podman[265570]: 2025-10-02 12:35:03.846905731 +0000 UTC m=+0.096057552 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:35:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.370 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408489.3698776, 51a18f7c-ed1b-4500-9d74-fb924f62b6d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.371 2 INFO nova.compute.manager [-] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] VM Stopped (Lifecycle Event)
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.469 2 DEBUG nova.compute.manager [None req-4ce3332c-2917-4f21-b7f9-6ee8b50e5f49 - - - - - -] [instance: 51a18f7c-ed1b-4500-9d74-fb924f62b6d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.593 2 DEBUG nova.network.neutron [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.617 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.625 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.626 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.626 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:04.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.741 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.743 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.744 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Creating image(s)
Oct 02 12:35:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:04.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:04 compute-1 ceph-mon[80926]: pgmap v1755: 305 pgs: 305 active+clean; 353 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 245 op/s
Oct 02 12:35:04 compute-1 nova_compute[230518]: 2025-10-02 12:35:04.853 2 DEBUG nova.storage.rbd_utils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] creating snapshot(nova-resize) on rbd image(3e490470-5e33-4140-95c1-367805364c73_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:35:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:05.298 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:05 compute-1 nova_compute[230518]: 2025-10-02 12:35:05.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:05.300 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:35:05 compute-1 nova_compute[230518]: 2025-10-02 12:35:05.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1426181501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 e246: 3 total, 3 up, 3 in
Oct 02 12:35:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3183312011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:06.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:06.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:06 compute-1 nova_compute[230518]: 2025-10-02 12:35:06.826 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:06 compute-1 nova_compute[230518]: 2025-10-02 12:35:06.869 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:06 compute-1 nova_compute[230518]: 2025-10-02 12:35:06.869 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:35:07 compute-1 ceph-mon[80926]: pgmap v1756: 305 pgs: 305 active+clean; 356 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 204 op/s
Oct 02 12:35:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1426181501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-1 ceph-mon[80926]: osdmap e246: 3 total, 3 up, 3 in
Oct 02 12:35:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1751850354' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:07 compute-1 nova_compute[230518]: 2025-10-02 12:35:07.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.196 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.370 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.371 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Ensure instance console log exists: /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.371 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.372 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.372 2 DEBUG oslo_concurrency.lockutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.374 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start _get_guest_xml network_info=[{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.379 2 WARNING nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.384 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.384 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.387 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.388 2 DEBUG nova.virt.libvirt.host [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.389 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.390 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.virt.hardware [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.391 2 DEBUG nova.objects.instance [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:08 compute-1 nova_compute[230518]: 2025-10-02 12:35:08.445 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:08.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:09 compute-1 ceph-mon[80926]: pgmap v1758: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3868317450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.216 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.771s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.275 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223741338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.722 2 DEBUG oslo_concurrency.processutils [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.724 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.725 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.726 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.730 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <uuid>3e490470-5e33-4140-95c1-367805364c73</uuid>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <name>instance-0000004d</name>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <memory>196608</memory>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1270476772</nova:name>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:35:08</nova:creationTime>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:flavor name="m1.micro">
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:memory>192</nova:memory>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <nova:port uuid="a3bd0009-d256-4937-bdad-606abfd076e0">
Oct 02 12:35:09 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <system>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="serial">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="uuid">3e490470-5e33-4140-95c1-367805364c73</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </system>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <os>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </os>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <features>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </features>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk">
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </source>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3e490470-5e33-4140-95c1-367805364c73_disk.config">
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </source>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:35:09 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:7b:e8:97"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <target dev="tapa3bd0009-d2"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73/console.log" append="off"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <video>
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </video>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:35:09 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:35:09 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:35:09 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:35:09 compute-1 nova_compute[230518]: </domain>
Oct 02 12:35:09 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.731 2 DEBUG nova.virt.libvirt.vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:31:51Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.731 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:7b:e8:97"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.732 2 DEBUG nova.network.os_vif_util [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.732 2 DEBUG os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.736 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3bd0009-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa3bd0009-d2, col_values=(('external_ids', {'iface-id': 'a3bd0009-d256-4937-bdad-606abfd076e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:e8:97', 'vm-uuid': '3e490470-5e33-4140-95c1-367805364c73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-1 NetworkManager[44960]: <info>  [1759408509.7396] manager: (tapa3bd0009-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.745 2 INFO os_vif [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.872 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:7b:e8:97, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.873 2 INFO nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Using config drive
Oct 02 12:35:09 compute-1 kernel: tapa3bd0009-d2: entered promiscuous mode
Oct 02 12:35:09 compute-1 NetworkManager[44960]: <info>  [1759408509.9827] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Oct 02 12:35:09 compute-1 nova_compute[230518]: 2025-10-02 12:35:09.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:09 compute-1 ovn_controller[129257]: 2025-10-02T12:35:09Z|00385|binding|INFO|Claiming lport a3bd0009-d256-4937-bdad-606abfd076e0 for this chassis.
Oct 02 12:35:09 compute-1 ovn_controller[129257]: 2025-10-02T12:35:09Z|00386|binding|INFO|a3bd0009-d256-4937-bdad-606abfd076e0: Claiming fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:35:10 compute-1 ovn_controller[129257]: 2025-10-02T12:35:10Z|00387|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 ovn-installed in OVS
Oct 02 12:35:10 compute-1 systemd-udevd[265777]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-1 systemd-machined[188247]: New machine qemu-45-instance-0000004d.
Oct 02 12:35:10 compute-1 NetworkManager[44960]: <info>  [1759408510.0401] device (tapa3bd0009-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:10 compute-1 NetworkManager[44960]: <info>  [1759408510.0409] device (tapa3bd0009-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:10 compute-1 systemd[1]: Started Virtual Machine qemu-45-instance-0000004d.
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.044 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '11', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.045 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:35:10 compute-1 ovn_controller[129257]: 2025-10-02T12:35:10Z|00388|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 up in Southbound
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.047 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.064 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4371a471-4b52-401e-841c-dda9a4bbeaf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.065 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.068 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[74d879e1-9153-448b-8bc5-5bdd40ed4ce7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e13bfb93-3abc-4b18-a4db-f4c837d4db8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.082 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6494f3bd-013b-4617-b579-fbd7aef12f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72414d36-cc54-45df-bf9c-3b6d7a4e859c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.141 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a453441f-9266-4381-86b4-b41848844e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 NetworkManager[44960]: <info>  [1759408510.1516] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/180)
Oct 02 12:35:10 compute-1 systemd-udevd[265780]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.154 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c33861-871f-40e2-8487-026f71ddb550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.191 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6346a180-b3d8-4119-8814-e1bfea782477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.194 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d242825c-a630-44dd-8657-92f4d3c34c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 NetworkManager[44960]: <info>  [1759408510.2167] device (tapf011efa4-00): carrier: link connected
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.221 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1ad628-34d0-48df-bb1c-148ff5f62b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.242 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[77e0bc4e-cd1f-4070-8142-3aad6472a0cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637377, 'reachable_time': 40349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265811, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.263 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e34fe8e4-a812-42e5-9ca2-ef69ad7eadfa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637377, 'tstamp': 637377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265812, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b896ea7-2ad1-41d3-a701-8258bfbcec99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637377, 'reachable_time': 40349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265813, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.332 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da5e4823-c292-46ca-bbec-af3e8136500d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3932958621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3868317450' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/947683889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1976850205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4223741338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3718680792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.427 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3832b9-7fff-4e59-83d5-3ef8c779ab3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.430 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.431 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:10 compute-1 NetworkManager[44960]: <info>  [1759408510.4800] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct 02 12:35:10 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.496 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-1 ovn_controller[129257]: 2025-10-02T12:35:10Z|00389|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.530 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.531 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4917124b-4c73-41e2-abe1-de453650e2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.532 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:35:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:10.533 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:35:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:10.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.957 2 DEBUG nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.959 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG oslo_concurrency.lockutils [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.960 2 DEBUG nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:10 compute-1 nova_compute[230518]: 2025-10-02 12:35:10.961 2 WARNING nova.compute.manager [req-6ac78bac-12a9-4470-a180-c7cf5eca0337 req-9a09fc61-7694-4a25-8255-f951b283d5e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state resize_finish.
Oct 02 12:35:11 compute-1 podman[265844]: 2025-10-02 12:35:10.969582666 +0000 UTC m=+0.043978765 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:35:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:11.303 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.656 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 3e490470-5e33-4140-95c1-367805364c73 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.656 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408511.6556685, 3e490470-5e33-4140-95c1-367805364c73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.657 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Resumed (Lifecycle Event)
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.659 2 DEBUG nova.compute.manager [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.663 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance running successfully.
Oct 02 12:35:11 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.665 2 DEBUG nova.virt.libvirt.guest [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.665 2 DEBUG nova.virt.libvirt.driver [None req-be47cdc5-912e-46ae-a07c-000b56f37baf 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.790 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.795 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.856 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.857 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408511.6597936, 3e490470-5e33-4140-95c1-367805364c73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:11 compute-1 nova_compute[230518]: 2025-10-02 12:35:11.857 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Started (Lifecycle Event)
Oct 02 12:35:11 compute-1 ceph-mon[80926]: pgmap v1759: 305 pgs: 305 active+clean; 393 MiB data, 898 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Oct 02 12:35:12 compute-1 nova_compute[230518]: 2025-10-02 12:35:12.096 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:12 compute-1 nova_compute[230518]: 2025-10-02 12:35:12.100 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:12 compute-1 podman[265844]: 2025-10-02 12:35:12.407628532 +0000 UTC m=+1.482024551 container create 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:35:12 compute-1 systemd[1]: Started libpod-conmon-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope.
Oct 02 12:35:12 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:35:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd4a7dfe49440492c7b1837cfdf74d652207fb9a54ea584d72d946905023818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:12.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:12 compute-1 podman[265844]: 2025-10-02 12:35:12.959016961 +0000 UTC m=+2.033413050 container init 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:35:12 compute-1 nova_compute[230518]: 2025-10-02 12:35:12.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:12 compute-1 podman[265844]: 2025-10-02 12:35:12.969958775 +0000 UTC m=+2.044354814 container start 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:35:12 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : New worker (265908) forked
Oct 02 12:35:12 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : Loading success.
Oct 02 12:35:13 compute-1 ceph-mon[80926]: pgmap v1760: 305 pgs: 305 active+clean; 473 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.2 MiB/s wr, 199 op/s
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.288 2 DEBUG nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.289 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.289 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.290 2 DEBUG oslo_concurrency.lockutils [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.290 2 DEBUG nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.291 2 WARNING nova.compute.manager [req-3ca425ca-8da6-43fa-a38e-ef12233066da req-eb3bb704-1e3a-4df8-99f3-9b6fb926a595 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state resized and task_state None.
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.395 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.395 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.430 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.572 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.573 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.580 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.580 2 INFO nova.compute.claims [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:35:13 compute-1 nova_compute[230518]: 2025-10-02 12:35:13.907 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2851248602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.345 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.352 2 DEBUG nova.compute.provider_tree [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:14 compute-1 ceph-mon[80926]: pgmap v1761: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.5 MiB/s wr, 197 op/s
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.523 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.524 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.524 2 DEBUG nova.compute.manager [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Going to confirm migration 13 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.541 2 DEBUG nova.scheduler.client.report [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.702 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.702 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:14.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:14.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.913 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:35:14 compute-1 nova_compute[230518]: 2025-10-02 12:35:14.913 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.115 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.566 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.733 2 INFO nova.virt.block_device [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Booting with volume 9283cdcd-9233-4064-a250-5d9e278af430 at /dev/vda
Oct 02 12:35:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2851248602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.886 2 DEBUG os_brick.utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.887 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.900 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.901 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[b2273a70-19b7-45a3-8381-a089d73d9f27]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.902 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.912 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.913 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d65bc848-8a63-4750-8acd-8d1c76e4af31]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.914 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.924 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.925 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5f5a455e-b623-4b75-b1d5-ca2ad6bffb11]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.926 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d664f54e-5c72-4b08-a2d7-19e417999e81]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.927 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.968 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.971 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.971 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.972 2 DEBUG os_brick.initiator.connectors.lightos [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.972 2 DEBUG os_brick.utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:35:15 compute-1 nova_compute[230518]: 2025-10-02 12:35:15.973 2 DEBUG nova.virt.block_device [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating existing volume attachment record: 748e3c4e-64ff-474b-9b2d-f528e6710211 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:35:16 compute-1 sudo[265946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:16 compute-1 sudo[265946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-1 sudo[265946]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-1 sudo[265971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:35:16 compute-1 sudo[265971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-1 sudo[265971]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-1 sudo[265996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:16 compute-1 sudo[265996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-1 sudo[265996]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:16 compute-1 sudo[266021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:35:16 compute-1 sudo[266021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:16 compute-1 nova_compute[230518]: 2025-10-02 12:35:16.674 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:16 compute-1 nova_compute[230518]: 2025-10-02 12:35:16.675 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:16 compute-1 nova_compute[230518]: 2025-10-02 12:35:16.675 2 DEBUG nova.network.neutron [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:16 compute-1 nova_compute[230518]: 2025-10-02 12:35:16.676 2 DEBUG nova.objects.instance [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'info_cache' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:16 compute-1 nova_compute[230518]: 2025-10-02 12:35:16.703 2 DEBUG nova.policy [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '32f902c540fc464cb232c0a6942a5d22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c7832aaed82459e908e73712013728c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:35:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:16.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/265947530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:16.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:17 compute-1 sudo[266021]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:17 compute-1 ceph-mon[80926]: pgmap v1762: 305 pgs: 305 active+clean; 499 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 270 op/s
Oct 02 12:35:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/265947530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:17 compute-1 nova_compute[230518]: 2025-10-02 12:35:17.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.094 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.096 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.097 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating image(s)
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.097 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.098 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Ensure instance console log exists: /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.098 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.099 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.099 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.228 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Successfully created port: be086260-f45b-41eb-ae9f-f401c64b9b22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:35:18 compute-1 ceph-mon[80926]: pgmap v1763: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.0 MiB/s wr, 402 op/s
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:35:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:35:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:18.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.876 2 DEBUG nova.network.neutron [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [{"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.909 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-3e490470-5e33-4140-95c1-367805364c73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:18 compute-1 nova_compute[230518]: 2025-10-02 12:35:18.910 2 DEBUG nova.objects.instance [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.277 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Successfully updated port: be086260-f45b-41eb-ae9f-f401c64b9b22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.279 2 DEBUG nova.compute.manager [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG nova.compute.manager [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing instance network info cache due to event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.280 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.282 2 DEBUG nova.storage.rbd_utils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] removing snapshot(nova-resize) on rbd image(3e490470-5e33-4140-95c1-367805364c73_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.307 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.471 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:35:19 compute-1 nova_compute[230518]: 2025-10-02 12:35:19.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 e247: 3 total, 3 up, 3 in
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.435 2 DEBUG nova.network.neutron [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.450 2 DEBUG oslo_concurrency.lockutils [req-c8583232-40d9-4acd-9efe-808f2194d9a7 req-6fa40bbd-822e-4c74-801c-02ca8b30817a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.451 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.452 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.637 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:35:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:20.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.797 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.798 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:20 compute-1 nova_compute[230518]: 2025-10-02 12:35:20.957 2 DEBUG oslo_concurrency.processutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:20 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Oct 02 12:35:20 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:20.983776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:35:20 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Oct 02 12:35:20 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408520983848, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 931, "num_deletes": 250, "total_data_size": 1754211, "memory_usage": 1783960, "flush_reason": "Manual Compaction"}
Oct 02 12:35:20 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521149884, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 743955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41482, "largest_seqno": 42408, "table_properties": {"data_size": 740342, "index_size": 1329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10119, "raw_average_key_size": 21, "raw_value_size": 732439, "raw_average_value_size": 1525, "num_data_blocks": 59, "num_entries": 480, "num_filter_entries": 480, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408456, "oldest_key_time": 1759408456, "file_creation_time": 1759408520, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 166140 microseconds, and 3409 cpu microseconds.
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.149926) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 743955 bytes OK
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.149944) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172144) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172204) EVENT_LOG_v1 {"time_micros": 1759408521172178, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1749424, prev total WAL file size 1749424, number of live WAL files 2.
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172967) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(726KB)], [78(11MB)]
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521173000, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 12693801, "oldest_snapshot_seqno": -1}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 6657 keys, 9261034 bytes, temperature: kUnknown
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521387706, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 9261034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9217804, "index_size": 25454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 171032, "raw_average_key_size": 25, "raw_value_size": 9099889, "raw_average_value_size": 1366, "num_data_blocks": 1012, "num_entries": 6657, "num_filter_entries": 6657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408521, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.387987) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9261034 bytes
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.406504) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.1 rd, 43.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(29.5) write-amplify(12.4) OK, records in: 7144, records dropped: 487 output_compression: NoCompression
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.406543) EVENT_LOG_v1 {"time_micros": 1759408521406529, "job": 48, "event": "compaction_finished", "compaction_time_micros": 214791, "compaction_time_cpu_micros": 23170, "output_level": 6, "num_output_files": 1, "total_output_size": 9261034, "num_input_records": 7144, "num_output_records": 6657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521406934, "job": 48, "event": "table_file_deletion", "file_number": 80}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408521409546, "job": 48, "event": "table_file_deletion", "file_number": 78}
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.172903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:35:21.409605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:35:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/112017634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.469 2 DEBUG oslo_concurrency.processutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.474 2 DEBUG nova.compute.provider_tree [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.502 2 DEBUG nova.scheduler.client.report [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.554 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.620 2 DEBUG nova.network.neutron [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.638 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.638 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance network_info: |[{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.644 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start _get_guest_xml network_info=[{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9283cdcd-9233-4064-a250-5d9e278af430', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9283cdcd-9233-4064-a250-5d9e278af430', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'attached_at': '', 'detached_at': '', 'volume_id': '9283cdcd-9233-4064-a250-5d9e278af430', 'serial': '9283cdcd-9233-4064-a250-5d9e278af430'}, 'boot_index': 0, 'attachment_id': '748e3c4e-64ff-474b-9b2d-f528e6710211', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.650 2 WARNING nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.656 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.658 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.663 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.664 2 DEBUG nova.virt.libvirt.host [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.666 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.666 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.667 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.667 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.668 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.669 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.669 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.670 2 DEBUG nova.virt.hardware [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:35:21 compute-1 ceph-mon[80926]: pgmap v1764: 305 pgs: 305 active+clean; 500 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.8 MiB/s wr, 330 op/s
Oct 02 12:35:21 compute-1 ceph-mon[80926]: osdmap e247: 3 total, 3 up, 3 in
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.725 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.729 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.753 2 INFO nova.scheduler.client.report [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocation for migration cc0c0504-9cb4-4e3c-94ee-f1413511b3ed
Oct 02 12:35:21 compute-1 nova_compute[230518]: 2025-10-02 12:35:21.834 2 DEBUG oslo_concurrency.lockutils [None req-310b0d12-ee80-426c-9a42-542b27c0f9f2 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:35:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3810058863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.231 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.255 2 DEBUG nova.virt.libvirt.vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.256 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.259 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.261 2 DEBUG nova.objects.instance [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lazy-loading 'pci_devices' on Instance uuid 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <uuid>00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</uuid>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <name>instance-0000005c</name>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersTestBootFromVolume-server-816873398</nova:name>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:35:21</nova:creationTime>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:user uuid="32f902c540fc464cb232c0a6942a5d22">tempest-ServersTestBootFromVolume-1573186401-project-member</nova:user>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:project uuid="5c7832aaed82459e908e73712013728c">tempest-ServersTestBootFromVolume-1573186401</nova:project>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <nova:port uuid="be086260-f45b-41eb-ae9f-f401c64b9b22">
Oct 02 12:35:22 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <system>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="serial">00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="uuid">00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </system>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <os>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </os>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <features>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </features>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config">
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </source>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-9283cdcd-9233-4064-a250-5d9e278af430">
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </source>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:35:22 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <serial>9283cdcd-9233-4064-a250-5d9e278af430</serial>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:57:aa:f7"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <target dev="tapbe086260-f4"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/console.log" append="off"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <video>
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </video>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:35:22 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:35:22 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:35:22 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:35:22 compute-1 nova_compute[230518]: </domain>
Oct 02 12:35:22 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Preparing to wait for external event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.277 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.278 2 DEBUG nova.virt.libvirt.vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:35:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG nova.network.os_vif_util [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.279 2 DEBUG os_vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe086260-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe086260-f4, col_values=(('external_ids', {'iface-id': 'be086260-f45b-41eb-ae9f-f401c64b9b22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:aa:f7', 'vm-uuid': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:22 compute-1 NetworkManager[44960]: <info>  [1759408522.2877] manager: (tapbe086260-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.295 2 INFO os_vif [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4')
Oct 02 12:35:22 compute-1 podman[266177]: 2025-10-02 12:35:22.408072792 +0000 UTC m=+0.074160025 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:35:22 compute-1 podman[266175]: 2025-10-02 12:35:22.443742015 +0000 UTC m=+0.110861719 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.466 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.467 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.467 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] No VIF found with MAC fa:16:3e:57:aa:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.468 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Using config drive
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.497 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:22.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:22.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:22 compute-1 ceph-mon[80926]: pgmap v1766: 305 pgs: 305 active+clean; 493 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 1.4 MiB/s wr, 328 op/s
Oct 02 12:35:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/112017634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3810058863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:22 compute-1 nova_compute[230518]: 2025-10-02 12:35:22.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:23 compute-1 nova_compute[230518]: 2025-10-02 12:35:23.691 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Creating config drive at /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config
Oct 02 12:35:23 compute-1 nova_compute[230518]: 2025-10-02 12:35:23.697 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprxj1m0j_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:23 compute-1 nova_compute[230518]: 2025-10-02 12:35:23.838 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprxj1m0j_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:23 compute-1 nova_compute[230518]: 2025-10-02 12:35:23.867 2 DEBUG nova.storage.rbd_utils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] rbd image 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:35:23 compute-1 nova_compute[230518]: 2025-10-02 12:35:23.871 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:24.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:24.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:e8:97 10.100.0.6
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.404 2 DEBUG oslo_concurrency.processutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.405 2 INFO nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting local config drive /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f/disk.config because it was imported into RBD.
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.4632] manager: (tapbe086260-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/183)
Oct 02 12:35:25 compute-1 kernel: tapbe086260-f4: entered promiscuous mode
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00390|binding|INFO|Claiming lport be086260-f45b-41eb-ae9f-f401c64b9b22 for this chassis.
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00391|binding|INFO|be086260-f45b-41eb-ae9f-f401c64b9b22: Claiming fa:16:3e:57:aa:f7 10.100.0.7
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.479 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:aa:f7 10.100.0.7'], port_security=['fa:16:3e:57:aa:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-520fbe64-8b09-43ab-a51d-71daa1f67084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c7832aaed82459e908e73712013728c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'daeb3977-3f78-41e9-ac0e-5cb1a13474c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30b0a67c-101b-4cc3-b721-1d431685a667, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=be086260-f45b-41eb-ae9f-f401c64b9b22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.484 138374 INFO neutron.agent.ovn.metadata.agent [-] Port be086260-f45b-41eb-ae9f-f401c64b9b22 in datapath 520fbe64-8b09-43ab-a51d-71daa1f67084 bound to our chassis
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.487 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 520fbe64-8b09-43ab-a51d-71daa1f67084
Oct 02 12:35:25 compute-1 systemd-udevd[266292]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00392|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 ovn-installed in OVS
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00393|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 up in Southbound
Oct 02 12:35:25 compute-1 systemd-machined[188247]: New machine qemu-46-instance-0000005c.
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91e4e20c-9b62-4b7a-b17d-0c4000befa99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.499 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap520fbe64-81 in ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.502 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap520fbe64-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.503 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9846fc50-cf3e-4205-af7f-bab6f4554bdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5cd543-1484-4b17-8892-7daf77cfdf21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.5115] device (tapbe086260-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:35:25 compute-1 systemd[1]: Started Virtual Machine qemu-46-instance-0000005c.
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.5140] device (tapbe086260-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.531 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f7090f-b3e0-45d7-8189-b529c4c9fe3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7a019e-4d39-4876-b693-5f94d61e5afe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.590 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[81978bf7-b3de-4283-a96e-83ee605d1242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1f3c15-2c2d-46f3-b710-81efe93228fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 systemd-udevd[266295]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.5983] manager: (tap520fbe64-80): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Oct 02 12:35:25 compute-1 ceph-mon[80926]: pgmap v1767: 305 pgs: 305 active+clean; 478 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 65 KiB/s wr, 329 op/s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.633 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[545bc8d9-558d-4c21-9b12-73f9a0a7b28c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.636 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[32f6df5f-58f5-423e-b94c-1b4ffd00a35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.6617] device (tap520fbe64-80): carrier: link connected
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.668 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[df50f8ba-f664-4be5-82e9-7fdf58068a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.692 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[005b7d2b-6493-4348-91cf-3e3bfa8490a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap520fbe64-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:a1:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638921, 'reachable_time': 19450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266324, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.708 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f945342-db62-43aa-8545-a1a472b66e7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:a1ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638921, 'tstamp': 638921}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266325, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.729 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0f8b7e-5e35-4969-9e9a-bb5d631e853d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap520fbe64-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:a1:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638921, 'reachable_time': 19450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266326, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.759 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c89cede1-8806-4c3b-a5cd-510993417bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.816 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcfcaed-4635-4898-9ae1-10eb8a22e326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap520fbe64-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:35:25 compute-1 kernel: tap520fbe64-80: entered promiscuous mode
Oct 02 12:35:25 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.817 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap520fbe64-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:25 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 NetworkManager[44960]: <info>  [1759408525.8201] manager: (tap520fbe64-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.822 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap520fbe64-80, col_values=(('external_ids', {'iface-id': '16d9692f-0ff0-4286-9c67-9ab6ce457f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 ovn_controller[129257]: 2025-10-02T12:35:25Z|00394|binding|INFO|Releasing lport 16d9692f-0ff0-4286-9c67-9ab6ce457f57 from this chassis (sb_readonly=0)
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.843 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.847 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca291df-8149-4eb7-9735-976abd4aa47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.848 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-520fbe64-8b09-43ab-a51d-71daa1f67084
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/520fbe64-8b09-43ab-a51d-71daa1f67084.pid.haproxy
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 520fbe64-8b09-43ab-a51d-71daa1f67084
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.848 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'env', 'PROCESS_TAG=haproxy-520fbe64-8b09-43ab-a51d-71daa1f67084', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/520fbe64-8b09-43ab-a51d-71daa1f67084.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.878 2 DEBUG nova.compute.manager [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG oslo_concurrency.lockutils [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:25 compute-1 nova_compute[230518]: 2025-10-02 12:35:25.879 2 DEBUG nova.compute.manager [req-996855da-40df-4a59-b446-e34022571a5e req-07939ae0-cb28-45fe-83d6-9ab9d025bc9a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Processing event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.931 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:26 compute-1 podman[266360]: 2025-10-02 12:35:26.221245128 +0000 UTC m=+0.039372670 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:35:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:26.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:26.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:26 compute-1 ceph-mon[80926]: pgmap v1768: 305 pgs: 305 active+clean; 396 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 51 KiB/s wr, 297 op/s
Oct 02 12:35:26 compute-1 podman[266360]: 2025-10-02 12:35:26.975326915 +0000 UTC m=+0.793454367 container create 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:35:27 compute-1 systemd[1]: Started libpod-conmon-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope.
Oct 02 12:35:27 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:35:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d831a68f21466534269406494e441ba9768beaf4f9233af088e058997fb9c0d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:35:27 compute-1 podman[266360]: 2025-10-02 12:35:27.129577158 +0000 UTC m=+0.947704620 container init 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:35:27 compute-1 podman[266360]: 2025-10-02 12:35:27.13569123 +0000 UTC m=+0.953818682 container start 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : New worker (266414) forked
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : Loading success.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.265 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.266 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.267 2 INFO nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Terminating instance
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.268 2 DEBUG nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 kernel: tapa3bd0009-d2 (unregistering): left promiscuous mode
Oct 02 12:35:27 compute-1 NetworkManager[44960]: <info>  [1759408527.5167] device (tapa3bd0009-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:27 compute-1 ovn_controller[129257]: 2025-10-02T12:35:27Z|00395|binding|INFO|Releasing lport a3bd0009-d256-4937-bdad-606abfd076e0 from this chassis (sb_readonly=0)
Oct 02 12:35:27 compute-1 ovn_controller[129257]: 2025-10-02T12:35:27Z|00396|binding|INFO|Setting lport a3bd0009-d256-4937-bdad-606abfd076e0 down in Southbound
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 ovn_controller[129257]: 2025-10-02T12:35:27Z|00397|binding|INFO|Removing iface tapa3bd0009-d2 ovn-installed in OVS
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.550 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:e8:97 10.100.0.6'], port_security=['fa:16:3e:7b:e8:97 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '3e490470-5e33-4140-95c1-367805364c73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '12', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a3bd0009-d256-4937-bdad-606abfd076e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.551 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a3bd0009-d256-4937-bdad-606abfd076e0 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.553 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64b656b0-f1bc-4b1b-9e0e-dd9f8be65287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.554 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Oct 02 12:35:27 compute-1 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d0000004d.scope: Consumed 13.762s CPU time.
Oct 02 12:35:27 compute-1 systemd-machined[188247]: Machine qemu-45-instance-0000004d terminated.
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [NOTICE]   (265906) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [WARNING]  (265906) : Exiting Master process...
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [ALERT]    (265906) : Current worker (265908) exited with code 143 (Terminated)
Oct 02 12:35:27 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[265902]: [WARNING]  (265906) : All workers exited. Exiting... (0)
Oct 02 12:35:27 compute-1 NetworkManager[44960]: <info>  [1759408527.6888] manager: (tapa3bd0009-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Oct 02 12:35:27 compute-1 systemd[1]: libpod-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope: Deactivated successfully.
Oct 02 12:35:27 compute-1 podman[266453]: 2025-10-02 12:35:27.695097481 +0000 UTC m=+0.042303551 container died 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.710 2 INFO nova.virt.libvirt.driver [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Instance destroyed successfully.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.710 2 DEBUG nova.objects.instance [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid 3e490470-5e33-4140-95c1-367805364c73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:27 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-6cd4a7dfe49440492c7b1837cfdf74d652207fb9a54ea584d72d946905023818-merged.mount: Deactivated successfully.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.736 2 DEBUG nova.virt.libvirt.vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:31:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1270476772',display_name='tempest-ServerActionsTestJSON-server-1270476772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1270476772',id=77,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-t095cvs5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=3e490470-5e33-4140-95c1-367805364c73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.737 2 DEBUG nova.network.os_vif_util [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "a3bd0009-d256-4937-bdad-606abfd076e0", "address": "fa:16:3e:7b:e8:97", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa3bd0009-d2", "ovs_interfaceid": "a3bd0009-d256-4937-bdad-606abfd076e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.738 2 DEBUG nova.network.os_vif_util [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.738 2 DEBUG os_vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.740 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3bd0009-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:27 compute-1 podman[266453]: 2025-10-02 12:35:27.741568553 +0000 UTC m=+0.088774583 container cleanup 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.745 2 INFO os_vif [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:e8:97,bridge_name='br-int',has_traffic_filtering=True,id=a3bd0009-d256-4937-bdad-606abfd076e0,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa3bd0009-d2')
Oct 02 12:35:27 compute-1 systemd[1]: libpod-conmon-895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d.scope: Deactivated successfully.
Oct 02 12:35:27 compute-1 podman[266496]: 2025-10-02 12:35:27.80279259 +0000 UTC m=+0.037041397 container remove 895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.808 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6ebba1de-e204-4ca7-bfd5-748d9c8886ee]: (4, ('Thu Oct  2 12:35:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d)\n895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d\nThu Oct  2 12:35:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d)\n895452d17454984001d9fc4cf065c756f2e286540795cd6d4c1403a56cd4f31d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9e4deb-ab95-4460-bc56-95271d3a64f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.811 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:27 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.815 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.8150587, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.815 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Started (Lifecycle Event)
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.817 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.822 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.825 2 INFO nova.virt.libvirt.driver [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance spawned successfully.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.826 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44cb17a2-0a1a-4ecd-8565-3e71dfa5f43c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ed24d0f4-1b41-472c-b1c8-b9abf8002c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.851 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da1b9b40-1e73-4969-8b5a-6e99d70e770d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e62fd57-447a-4c2e-ab83-ecddf62e9d9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 637369, 'reachable_time': 22776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266520, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.865 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:27.865 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d7857945-67ae-4eef-a061-31f1fb790f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:27 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.898 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.901 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.911 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.911 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.912 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.912 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.913 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.913 2 DEBUG nova.virt.libvirt.driver [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.952 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG oslo_concurrency.lockutils [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.953 2 DEBUG nova.compute.manager [req-02e4a288-567d-4e39-a44a-61f88755bca0 req-a005264f-e67e-4480-907d-08a7112b329f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-unplugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.956 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.957 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.8152015, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.957 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Paused (Lifecycle Event)
Oct 02 12:35:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3724517341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:27 compute-1 nova_compute[230518]: 2025-10-02 12:35:27.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.004 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.007 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408527.818995, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.007 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Resumed (Lifecycle Event)
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.023 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.026 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.029 2 INFO nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 9.93 seconds to spawn the instance on the hypervisor.
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.030 2 DEBUG nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.052 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.093 2 INFO nova.compute.manager [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 14.56 seconds to build instance.
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.117 2 DEBUG oslo_concurrency.lockutils [None req-564b48f2-cc94-4cde-af7e-83516df86769 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.228 2 DEBUG nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.228 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG oslo_concurrency.lockutils [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.229 2 DEBUG nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.229 2 WARNING nova.compute.manager [req-12674de4-bcb8-420e-b4bf-2b6e94da39f2 req-38559d64-711c-4274-ada9-a38f52cd0318 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received unexpected event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with vm_state active and task_state None.
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.720 2 INFO nova.virt.libvirt.driver [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deleting instance files /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73_del
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.721 2 INFO nova.virt.libvirt.driver [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deletion of /var/lib/nova/instances/3e490470-5e33-4140-95c1-367805364c73_del complete
Oct 02 12:35:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:28 compute-1 sudo[266522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:35:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:28.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:28 compute-1 sudo[266522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:28 compute-1 sudo[266522]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.823 2 INFO nova.compute.manager [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 1.55 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.824 2 DEBUG oslo.service.loopingcall [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.825 2 DEBUG nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:28 compute-1 nova_compute[230518]: 2025-10-02 12:35:28.825 2 DEBUG nova.network.neutron [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:28 compute-1 sudo[266547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:35:28 compute-1 sudo[266547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:35:28 compute-1 sudo[266547]: pam_unix(sudo:session): session closed for user root
Oct 02 12:35:29 compute-1 ceph-mon[80926]: pgmap v1769: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/882280986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:35:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.129 2 DEBUG nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.130 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3e490470-5e33-4140-95c1-367805364c73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.130 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.131 2 DEBUG oslo_concurrency.lockutils [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.131 2 DEBUG nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] No waiting events found dispatching network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.131 2 WARNING nova.compute.manager [req-9632fb4a-9452-473d-a8b1-0fc68d335894 req-921d2f4c-6d2b-4e90-baec-731cf8e8ff70 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received unexpected event network-vif-plugged-a3bd0009-d256-4937-bdad-606abfd076e0 for instance with vm_state active and task_state deleting.
Oct 02 12:35:30 compute-1 ceph-mon[80926]: pgmap v1770: 305 pgs: 305 active+clean; 328 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct 02 12:35:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/947376582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:30.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:30.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.901 2 DEBUG nova.network.neutron [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:30 compute-1 nova_compute[230518]: 2025-10-02 12:35:30.940 2 INFO nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] Took 2.12 seconds to deallocate network for instance.
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.065 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.066 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.071 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.096 2 DEBUG nova.compute.manager [req-32b1b632-6e4a-4ab6-9963-3b152aa52604 req-504434ab-b065-40b8-a2f1-9bd058262ddb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3e490470-5e33-4140-95c1-367805364c73] Received event network-vif-deleted-a3bd0009-d256-4937-bdad-606abfd076e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.144 2 INFO nova.scheduler.client.report [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Deleted allocations for instance 3e490470-5e33-4140-95c1-367805364c73
Oct 02 12:35:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 e248: 3 total, 3 up, 3 in
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.331 2 DEBUG oslo_concurrency.lockutils [None req-943d017d-7d9f-4323-99be-a2d09ba150a1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "3e490470-5e33-4140-95c1-367805364c73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.864 2 DEBUG nova.compute.manager [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG nova.compute.manager [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing instance network info cache due to event network-changed-be086260-f45b-41eb-ae9f-f401c64b9b22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.865 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:35:31 compute-1 nova_compute[230518]: 2025-10-02 12:35:31.866 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Refreshing network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:35:32 compute-1 ceph-mon[80926]: osdmap e248: 3 total, 3 up, 3 in
Oct 02 12:35:32 compute-1 nova_compute[230518]: 2025-10-02 12:35:32.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:32.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:32 compute-1 nova_compute[230518]: 2025-10-02 12:35:32.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:33 compute-1 ceph-mon[80926]: pgmap v1772: 305 pgs: 305 active+clean; 268 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 67 KiB/s wr, 262 op/s
Oct 02 12:35:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:34 compute-1 ceph-mon[80926]: pgmap v1773: 305 pgs: 305 active+clean; 248 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 51 KiB/s wr, 301 op/s
Oct 02 12:35:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:35:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:35:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:34.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:34 compute-1 podman[266572]: 2025-10-02 12:35:34.813239184 +0000 UTC m=+0.065783951 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 02 12:35:34 compute-1 podman[266573]: 2025-10-02 12:35:34.813200383 +0000 UTC m=+0.064446539 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:35:35 compute-1 nova_compute[230518]: 2025-10-02 12:35:35.874 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updated VIF entry in instance network info cache for port be086260-f45b-41eb-ae9f-f401c64b9b22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:35:35 compute-1 nova_compute[230518]: 2025-10-02 12:35:35.874 2 DEBUG nova.network.neutron [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [{"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:35 compute-1 nova_compute[230518]: 2025-10-02 12:35:35.915 2 DEBUG oslo_concurrency.lockutils [req-f570bc4e-62e6-4746-9eb9-2f8efcf961f8 req-071070e1-188c-472c-8f85-afaa06f594b1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:35:36 compute-1 ovn_controller[129257]: 2025-10-02T12:35:36Z|00398|binding|INFO|Releasing lport 16d9692f-0ff0-4286-9c67-9ab6ce457f57 from this chassis (sb_readonly=0)
Oct 02 12:35:36 compute-1 nova_compute[230518]: 2025-10-02 12:35:36.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:36.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:36 compute-1 ceph-mon[80926]: pgmap v1774: 305 pgs: 305 active+clean; 222 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 48 KiB/s wr, 247 op/s
Oct 02 12:35:37 compute-1 nova_compute[230518]: 2025-10-02 12:35:37.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:37 compute-1 nova_compute[230518]: 2025-10-02 12:35:37.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:35:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 30K writes, 117K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.86 writes per sync, written: 0.11 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9003 writes, 34K keys, 9003 commit groups, 1.0 writes per commit group, ingest: 37.27 MB, 0.06 MB/s
                                           Interval WAL: 9003 writes, 3403 syncs, 2.65 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:35:39 compute-1 ceph-mon[80926]: pgmap v1775: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1625734000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:35:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:35:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:41 compute-1 ceph-mon[80926]: pgmap v1776: 305 pgs: 305 active+clean; 169 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 181 op/s
Oct 02 12:35:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/9764884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:42 compute-1 ceph-mon[80926]: pgmap v1777: 305 pgs: 305 active+clean; 172 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 667 KiB/s wr, 105 op/s
Oct 02 12:35:42 compute-1 nova_compute[230518]: 2025-10-02 12:35:42.703 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408527.7015674, 3e490470-5e33-4140-95c1-367805364c73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:35:42 compute-1 nova_compute[230518]: 2025-10-02 12:35:42.703 2 INFO nova.compute.manager [-] [instance: 3e490470-5e33-4140-95c1-367805364c73] VM Stopped (Lifecycle Event)
Oct 02 12:35:42 compute-1 nova_compute[230518]: 2025-10-02 12:35:42.726 2 DEBUG nova.compute.manager [None req-447bcf87-76a3-4b45-ada7-5669e7df94ba - - - - - -] [instance: 3e490470-5e33-4140-95c1-367805364c73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:35:42 compute-1 nova_compute[230518]: 2025-10-02 12:35:42.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:35:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:42.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:35:42 compute-1 nova_compute[230518]: 2025-10-02 12:35:42.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:43 compute-1 ovn_controller[129257]: 2025-10-02T12:35:43Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:aa:f7 10.100.0.7
Oct 02 12:35:43 compute-1 ovn_controller[129257]: 2025-10-02T12:35:43Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:aa:f7 10.100.0.7
Oct 02 12:35:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:44 compute-1 nova_compute[230518]: 2025-10-02 12:35:44.720 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:45 compute-1 nova_compute[230518]: 2025-10-02 12:35:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:45 compute-1 ceph-mon[80926]: pgmap v1778: 305 pgs: 305 active+clean; 189 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 110 op/s
Oct 02 12:35:46 compute-1 ceph-mon[80926]: pgmap v1779: 305 pgs: 305 active+clean; 176 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 154 KiB/s rd, 2.6 MiB/s wr, 96 op/s
Oct 02 12:35:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3582682749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/169662621' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4057833807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.671 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.817 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.818 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4317MB free_disk=20.95226287841797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.819 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.819 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.905 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:35:47 compute-1 nova_compute[230518]: 2025-10-02 12:35:47.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.113 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.215 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.215 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.238 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.283 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.323 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2847610097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.755 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.761 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.784 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:35:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.840 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:35:48 compute-1 nova_compute[230518]: 2025-10-02 12:35:48.840 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:48 compute-1 ceph-mon[80926]: pgmap v1780: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:35:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3582682749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/888421837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.811 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.811 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.812 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.813 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Terminating instance
Oct 02 12:35:49 compute-1 nova_compute[230518]: 2025-10-02 12:35:49.814 2 DEBUG nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:35:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2847610097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.136 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.140 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:35:50 compute-1 kernel: tapbe086260-f4 (unregistering): left promiscuous mode
Oct 02 12:35:50 compute-1 NetworkManager[44960]: <info>  [1759408550.1534] device (tapbe086260-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 ovn_controller[129257]: 2025-10-02T12:35:50Z|00399|binding|INFO|Releasing lport be086260-f45b-41eb-ae9f-f401c64b9b22 from this chassis (sb_readonly=0)
Oct 02 12:35:50 compute-1 ovn_controller[129257]: 2025-10-02T12:35:50Z|00400|binding|INFO|Setting lport be086260-f45b-41eb-ae9f-f401c64b9b22 down in Southbound
Oct 02 12:35:50 compute-1 ovn_controller[129257]: 2025-10-02T12:35:50Z|00401|binding|INFO|Removing iface tapbe086260-f4 ovn-installed in OVS
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.183 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:aa:f7 10.100.0.7'], port_security=['fa:16:3e:57:aa:f7 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-520fbe64-8b09-43ab-a51d-71daa1f67084', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c7832aaed82459e908e73712013728c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'daeb3977-3f78-41e9-ac0e-5cb1a13474c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30b0a67c-101b-4cc3-b721-1d431685a667, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=be086260-f45b-41eb-ae9f-f401c64b9b22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.185 138374 INFO neutron.agent.ovn.metadata.agent [-] Port be086260-f45b-41eb-ae9f-f401c64b9b22 in datapath 520fbe64-8b09-43ab-a51d-71daa1f67084 unbound from our chassis
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.187 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 520fbe64-8b09-43ab-a51d-71daa1f67084, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.188 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff1ee42-efe9-475d-a46e-c63287a06f81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:50.189 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 namespace which is not needed anymore
Oct 02 12:35:50 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct 02 12:35:50 compute-1 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000005c.scope: Consumed 14.495s CPU time.
Oct 02 12:35:50 compute-1 systemd-machined[188247]: Machine qemu-46-instance-0000005c terminated.
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.439 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG oslo_concurrency.lockutils [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.440 2 DEBUG nova.compute.manager [req-4312bdc8-82b2-4934-9b2c-b3c673ca5605 req-df8ddab3-785f-4dee-86c5-d2eddef9758c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-unplugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : haproxy version is 2.8.14-c23fe91
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [NOTICE]   (266397) : path to executable is /usr/sbin/haproxy
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [WARNING]  (266397) : Exiting Master process...
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [WARNING]  (266397) : Exiting Master process...
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [ALERT]    (266397) : Current worker (266414) exited with code 143 (Terminated)
Oct 02 12:35:50 compute-1 neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084[266393]: [WARNING]  (266397) : All workers exited. Exiting... (0)
Oct 02 12:35:50 compute-1 systemd[1]: libpod-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope: Deactivated successfully.
Oct 02 12:35:50 compute-1 podman[266683]: 2025-10-02 12:35:50.466817913 +0000 UTC m=+0.176728291 container died 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.468 2 INFO nova.virt.libvirt.driver [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Instance destroyed successfully.
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.469 2 DEBUG nova.objects.instance [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lazy-loading 'resources' on Instance uuid 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.513 2 DEBUG nova.virt.libvirt.vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-816873398',display_name='tempest-ServersTestBootFromVolume-server-816873398',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-816873398',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCExruwC5OJykxsUr85pmn2haRmMagZ7/mdNoJJYNggBWSlgN4YogOa/CvEZn0TcoJzbhFnO92dhNEzaWy4hNrf2HiyGznIRspY3dORZWHStfDh0jHNyWV7YXYrY8DMDPw==',key_name='tempest-keypair-581118509',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c7832aaed82459e908e73712013728c',ramdisk_id='',reservation_id='r-a0kjnjfe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1573186401',owner_user_name='tempest-ServersTestBootFromVolume-1573186401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:35:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='32f902c540fc464cb232c0a6942a5d22',uuid=00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.514 2 DEBUG nova.network.os_vif_util [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converting VIF {"id": "be086260-f45b-41eb-ae9f-f401c64b9b22", "address": "fa:16:3e:57:aa:f7", "network": {"id": "520fbe64-8b09-43ab-a51d-71daa1f67084", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-718613550-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c7832aaed82459e908e73712013728c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe086260-f4", "ovs_interfaceid": "be086260-f45b-41eb-ae9f-f401c64b9b22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.514 2 DEBUG nova.network.os_vif_util [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.515 2 DEBUG os_vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe086260-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:50 compute-1 nova_compute[230518]: 2025-10-02 12:35:50.521 2 INFO os_vif [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:aa:f7,bridge_name='br-int',has_traffic_filtering=True,id=be086260-f45b-41eb-ae9f-f401c64b9b22,network=Network(520fbe64-8b09-43ab-a51d-71daa1f67084),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe086260-f4')
Oct 02 12:35:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-d831a68f21466534269406494e441ba9768beaf4f9233af088e058997fb9c0d1-merged.mount: Deactivated successfully.
Oct 02 12:35:50 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054-userdata-shm.mount: Deactivated successfully.
Oct 02 12:35:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:51 compute-1 podman[266683]: 2025-10-02 12:35:51.044055665 +0000 UTC m=+0.753966053 container cleanup 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:35:51 compute-1 systemd[1]: libpod-conmon-34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054.scope: Deactivated successfully.
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.142 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:51 compute-1 ceph-mon[80926]: pgmap v1781: 305 pgs: 305 active+clean; 165 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Oct 02 12:35:51 compute-1 podman[266740]: 2025-10-02 12:35:51.27112339 +0000 UTC m=+0.201595995 container remove 34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[640427fc-0684-41d8-accb-ce5395cb135c]: (4, ('Thu Oct  2 12:35:50 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 (34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054)\n34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054\nThu Oct  2 12:35:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 (34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054)\n34ffedd16241bcc44614d0faae272f2a4f859da4ca73ad2293822905fcaf4054\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.283 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[662ce29b-e101-4a9e-9691-74e61f9936b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.284 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap520fbe64-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:35:51 compute-1 kernel: tap520fbe64-80: left promiscuous mode
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b198d322-941f-472c-9075-ab308ca42bf2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[61b824c5-1129-457d-a504-7d484d17fc0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.337 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd81386c-0075-46d6-9b07-75ac89ca9da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.355 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90590b6f-dd26-4b33-aa68-fe4bad76f152]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638914, 'reachable_time': 16215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266755, 'error': None, 'target': 'ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.358 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-520fbe64-8b09-43ab-a51d-71daa1f67084 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:35:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:35:51.358 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fd3472-7582-4599-b7da-2cff8300a3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:35:51 compute-1 systemd[1]: run-netns-ovnmeta\x2d520fbe64\x2d8b09\x2d43ab\x2da51d\x2d71daa1f67084.mount: Deactivated successfully.
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.828 2 INFO nova.virt.libvirt.driver [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting instance files /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_del
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.829 2 INFO nova.virt.libvirt.driver [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deletion of /var/lib/nova/instances/00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f_del complete
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.918 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 2.10 seconds to destroy the instance on the hypervisor.
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.918 2 DEBUG oslo.service.loopingcall [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.919 2 DEBUG nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:35:51 compute-1 nova_compute[230518]: 2025-10-02 12:35:51.919 2 DEBUG nova.network.neutron [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:35:52 compute-1 ceph-mon[80926]: pgmap v1782: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG oslo_concurrency.lockutils [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.697 2 DEBUG nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] No waiting events found dispatching network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.698 2 WARNING nova.compute.manager [req-b4ccd50a-473f-4b72-971d-fd1d6063d719 req-71b3c712-a50e-4301-ac98-6726b65f9a9b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received unexpected event network-vif-plugged-be086260-f45b-41eb-ae9f-f401c64b9b22 for instance with vm_state active and task_state deleting.
Oct 02 12:35:52 compute-1 podman[266758]: 2025-10-02 12:35:52.805981542 +0000 UTC m=+0.048388453 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:35:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:52.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:52 compute-1 podman[266757]: 2025-10-02 12:35:52.831125763 +0000 UTC m=+0.076626042 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:35:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:52 compute-1 nova_compute[230518]: 2025-10-02 12:35:52.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.309 2 DEBUG nova.network.neutron [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.342 2 INFO nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 1.42 seconds to deallocate network for instance.
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.473 2 DEBUG nova.compute.manager [req-ee2bc6a2-6c2f-49d0-a183-9a2198058600 req-878b718a-2cff-445b-b831-2df277a15276 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Received event network-vif-deleted-be086260-f45b-41eb-ae9f-f401c64b9b22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.736 2 INFO nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Took 0.39 seconds to detach 1 volumes for instance.
Oct 02 12:35:53 compute-1 nova_compute[230518]: 2025-10-02 12:35:53.738 2 DEBUG nova.compute.manager [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Deleting volume: 9283cdcd-9233-4064-a250-5d9e278af430 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.444 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.445 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:35:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.522 2 DEBUG oslo_concurrency.processutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:35:54 compute-1 ceph-mon[80926]: pgmap v1783: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 595 KiB/s rd, 3.4 MiB/s wr, 126 op/s
Oct 02 12:35:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:35:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2667916916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.935 2 DEBUG oslo_concurrency.processutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.942 2 DEBUG nova.compute.provider_tree [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:35:54 compute-1 nova_compute[230518]: 2025-10-02 12:35:54.968 2 DEBUG nova.scheduler.client.report [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.001 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.038 2 INFO nova.scheduler.client.report [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Deleted allocations for instance 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.118 2 DEBUG oslo_concurrency.lockutils [None req-6b18fa13-09e2-4f4c-95f2-bc2c1ffe48bf 32f902c540fc464cb232c0a6942a5d22 5c7832aaed82459e908e73712013728c - - default default] Lock "00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:35:55 compute-1 nova_compute[230518]: 2025-10-02 12:35:55.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3276537174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2667916916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3979436471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:35:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:35:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:56 compute-1 nova_compute[230518]: 2025-10-02 12:35:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:56 compute-1 nova_compute[230518]: 2025-10-02 12:35:56.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:57 compute-1 ceph-mon[80926]: pgmap v1784: 305 pgs: 305 active+clean; 167 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Oct 02 12:35:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2632050585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:35:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1054373965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:35:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/177129438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:35:57 compute-1 nova_compute[230518]: 2025-10-02 12:35:57.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:35:58 compute-1 nova_compute[230518]: 2025-10-02 12:35:58.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:35:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:35:58.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:35:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:35:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:35:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:35:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:35:59 compute-1 nova_compute[230518]: 2025-10-02 12:35:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:35:59 compute-1 ceph-mon[80926]: pgmap v1785: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 157 op/s
Oct 02 12:35:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:35:59 compute-1 nova_compute[230518]: 2025-10-02 12:35:59.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:00 compute-1 nova_compute[230518]: 2025-10-02 12:36:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1172926211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:00 compute-1 nova_compute[230518]: 2025-10-02 12:36:00.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:00.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:01 compute-1 ceph-mon[80926]: pgmap v1786: 305 pgs: 305 active+clean; 99 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 113 op/s
Oct 02 12:36:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:02 compute-1 ceph-mon[80926]: pgmap v1787: 305 pgs: 305 active+clean; 88 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 116 op/s
Oct 02 12:36:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:02.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:02 compute-1 nova_compute[230518]: 2025-10-02 12:36:02.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:04 compute-1 ceph-mon[80926]: pgmap v1788: 305 pgs: 305 active+clean; 104 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 456 KiB/s wr, 103 op/s
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.069 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.466 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408550.4650056, 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.466 2 INFO nova.compute.manager [-] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] VM Stopped (Lifecycle Event)
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.485 2 DEBUG nova.compute.manager [None req-bf824c5c-990d-4c9e-9199-4d73ef704120 - - - - - -] [instance: 00fe5b9d-00be-45d2-9eb8-6ffb8dc0c42f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:05 compute-1 nova_compute[230518]: 2025-10-02 12:36:05.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:05 compute-1 podman[266826]: 2025-10-02 12:36:05.799096006 +0000 UTC m=+0.052357979 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:36:05 compute-1 podman[266825]: 2025-10-02 12:36:05.806891641 +0000 UTC m=+0.059619047 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 12:36:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:36:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/102703640' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:36:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:07 compute-1 ceph-mon[80926]: pgmap v1789: 305 pgs: 305 active+clean; 118 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Oct 02 12:36:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1421537231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2301724489' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:07 compute-1 nova_compute[230518]: 2025-10-02 12:36:07.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:08.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:09 compute-1 ceph-mon[80926]: pgmap v1790: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 126 op/s
Oct 02 12:36:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:36:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8306 writes, 42K keys, 8306 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 8306 writes, 8306 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1587 writes, 8023 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 16.18 MB, 0.03 MB/s
                                           Interval WAL: 1587 writes, 1587 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.4      0.71              0.14        24    0.030       0      0       0.0       0.0
                                             L6      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   4.0    136.3    113.0      1.84              0.56        23    0.080    126K    12K       0.0       0.0
                                            Sum      1/0    8.83 MB   0.0      0.2     0.1      0.2       0.3      0.1       0.0   5.0     98.2    101.7      2.56              0.71        47    0.054    126K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     78.6     77.3      0.90              0.20        12    0.075     40K   3025       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.1      0.2       0.2      0.0       0.0   0.0    136.3    113.0      1.84              0.56        23    0.080    126K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.6      0.71              0.14        23    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.25 GB write, 0.09 MB/s write, 0.25 GB read, 0.08 MB/s read, 2.6 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 27.71 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000179 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1611,26.74 MB,8.79489%) FilterBlock(47,363.17 KB,0.116664%) IndexBlock(47,636.23 KB,0.204382%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:36:10 compute-1 nova_compute[230518]: 2025-10-02 12:36:10.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:10.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:10.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:11 compute-1 ceph-mon[80926]: pgmap v1791: 305 pgs: 305 active+clean; 166 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 3.9 MiB/s wr, 85 op/s
Oct 02 12:36:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:12.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:12 compute-1 ceph-mon[80926]: pgmap v1792: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:36:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:12 compute-1 nova_compute[230518]: 2025-10-02 12:36:12.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:14.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:14.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:15 compute-1 ceph-mon[80926]: pgmap v1793: 305 pgs: 305 active+clean; 167 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.414 2 DEBUG nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.729 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.729 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.770 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.817 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.818 2 INFO nova.compute.claims [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.818 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.847 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.912 2 INFO nova.compute.resource_tracker [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating resource usage from migration 375a3f6c-42dd-46ed-8db6-645863f744f6
Oct 02 12:36:15 compute-1 nova_compute[230518]: 2025-10-02 12:36:15.912 2 DEBUG nova.compute.resource_tracker [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting to track incoming migration 375a3f6c-42dd-46ed-8db6-645863f744f6 with flavor 475e3257-fad6-494a-9174-56c6af5e0ac9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.037 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3738831801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.542 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.547 2 DEBUG nova.compute.provider_tree [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:16 compute-1 ceph-mon[80926]: pgmap v1794: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 642 KiB/s rd, 3.5 MiB/s wr, 102 op/s
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.586 2 DEBUG nova.scheduler.client.report [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.608 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:16 compute-1 nova_compute[230518]: 2025-10-02 12:36:16.609 2 INFO nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Migrating
Oct 02 12:36:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:16.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3738831801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:17 compute-1 nova_compute[230518]: 2025-10-02 12:36:17.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:18.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:18.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:18 compute-1 ceph-mon[80926]: pgmap v1795: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 142 op/s
Oct 02 12:36:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:20 compute-1 sshd-session[266887]: Accepted publickey for nova from 192.168.122.102 port 47204 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:36:20 compute-1 systemd[1]: Created slice User Slice of UID 42436.
Oct 02 12:36:20 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42436...
Oct 02 12:36:20 compute-1 systemd-logind[795]: New session 59 of user nova.
Oct 02 12:36:20 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42436.
Oct 02 12:36:20 compute-1 systemd[1]: Starting User Manager for UID 42436...
Oct 02 12:36:20 compute-1 systemd[266891]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:36:20 compute-1 systemd[266891]: Queued start job for default target Main User Target.
Oct 02 12:36:20 compute-1 nova_compute[230518]: 2025-10-02 12:36:20.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:20 compute-1 systemd[266891]: Created slice User Application Slice.
Oct 02 12:36:20 compute-1 systemd[266891]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:36:20 compute-1 systemd[266891]: Started Daily Cleanup of User's Temporary Directories.
Oct 02 12:36:20 compute-1 systemd[266891]: Reached target Paths.
Oct 02 12:36:20 compute-1 systemd[266891]: Reached target Timers.
Oct 02 12:36:20 compute-1 systemd[266891]: Starting D-Bus User Message Bus Socket...
Oct 02 12:36:20 compute-1 systemd[266891]: Starting Create User's Volatile Files and Directories...
Oct 02 12:36:20 compute-1 systemd[266891]: Listening on D-Bus User Message Bus Socket.
Oct 02 12:36:20 compute-1 systemd[266891]: Reached target Sockets.
Oct 02 12:36:20 compute-1 systemd[266891]: Finished Create User's Volatile Files and Directories.
Oct 02 12:36:20 compute-1 systemd[266891]: Reached target Basic System.
Oct 02 12:36:20 compute-1 systemd[266891]: Reached target Main User Target.
Oct 02 12:36:20 compute-1 systemd[266891]: Startup finished in 173ms.
Oct 02 12:36:20 compute-1 systemd[1]: Started User Manager for UID 42436.
Oct 02 12:36:20 compute-1 systemd[1]: Started Session 59 of User nova.
Oct 02 12:36:20 compute-1 sshd-session[266887]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:36:20 compute-1 sshd-session[266906]: Received disconnect from 192.168.122.102 port 47204:11: disconnected by user
Oct 02 12:36:20 compute-1 sshd-session[266906]: Disconnected from user nova 192.168.122.102 port 47204
Oct 02 12:36:20 compute-1 sshd-session[266887]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:36:20 compute-1 systemd[1]: session-59.scope: Deactivated successfully.
Oct 02 12:36:20 compute-1 systemd-logind[795]: Session 59 logged out. Waiting for processes to exit.
Oct 02 12:36:20 compute-1 systemd-logind[795]: Removed session 59.
Oct 02 12:36:20 compute-1 sshd-session[266908]: Accepted publickey for nova from 192.168.122.102 port 47210 ssh2: ECDSA SHA256:Vro/IzzyOA86z5RBI5lBF+NKUNzyxTh79RUgVc2XKwY
Oct 02 12:36:20 compute-1 systemd-logind[795]: New session 61 of user nova.
Oct 02 12:36:20 compute-1 systemd[1]: Started Session 61 of User nova.
Oct 02 12:36:20 compute-1 sshd-session[266908]: pam_unix(sshd:session): session opened for user nova(uid=42436) by nova(uid=0)
Oct 02 12:36:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:20 compute-1 sshd-session[266911]: Received disconnect from 192.168.122.102 port 47210:11: disconnected by user
Oct 02 12:36:20 compute-1 sshd-session[266911]: Disconnected from user nova 192.168.122.102 port 47210
Oct 02 12:36:20 compute-1 sshd-session[266908]: pam_unix(sshd:session): session closed for user nova
Oct 02 12:36:20 compute-1 systemd[1]: session-61.scope: Deactivated successfully.
Oct 02 12:36:20 compute-1 systemd-logind[795]: Session 61 logged out. Waiting for processes to exit.
Oct 02 12:36:20 compute-1 systemd-logind[795]: Removed session 61.
Oct 02 12:36:21 compute-1 ceph-mon[80926]: pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 45 KiB/s wr, 76 op/s
Oct 02 12:36:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/995817035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1685352000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:23 compute-1 nova_compute[230518]: 2025-10-02 12:36:23.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:23 compute-1 ceph-mon[80926]: pgmap v1797: 305 pgs: 305 active+clean; 137 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 50 KiB/s wr, 100 op/s
Oct 02 12:36:23 compute-1 podman[266914]: 2025-10-02 12:36:23.817864265 +0000 UTC m=+0.063499080 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:36:23 compute-1 podman[266913]: 2025-10-02 12:36:23.864027007 +0000 UTC m=+0.109856788 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.075 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.300 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG oslo_concurrency.lockutils [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.301 2 DEBUG nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:24 compute-1 nova_compute[230518]: 2025-10-02 12:36:24.302 2 WARNING nova.compute.manager [req-2ad5a787-c261-4b76-9105-be1358d774f2 req-85c9a14e-3cdc-44cc-8564-07af94705be4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:36:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:24 compute-1 ceph-mon[80926]: pgmap v1798: 305 pgs: 305 active+clean; 129 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 320 KiB/s wr, 107 op/s
Oct 02 12:36:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:24.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:24.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:25 compute-1 nova_compute[230518]: 2025-10-02 12:36:25.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:25 compute-1 nova_compute[230518]: 2025-10-02 12:36:25.741 2 INFO nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating port b31bc9d2-5589-460c-9a78-a1d800087345 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.932 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:26.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:27 compute-1 ceph-mon[80926]: pgmap v1799: 305 pgs: 305 active+clean; 159 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.886 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.886 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG oslo_concurrency.lockutils [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.887 2 DEBUG nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:27 compute-1 nova_compute[230518]: 2025-10-02 12:36:27.887 2 WARNING nova.compute.manager [req-c18e6244-f25b-4b3f-9ab3-649505a1591a req-a9e4394c-991a-44cb-91a4-13350fee4c1f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:36:28 compute-1 nova_compute[230518]: 2025-10-02 12:36:28.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:28 compute-1 ceph-mon[80926]: pgmap v1800: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 12:36:28 compute-1 nova_compute[230518]: 2025-10-02 12:36:28.834 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:28 compute-1 nova_compute[230518]: 2025-10-02 12:36:28.835 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:28 compute-1 nova_compute[230518]: 2025-10-02 12:36:28.835 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:36:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:28.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:29 compute-1 sudo[266958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:29 compute-1 sudo[266958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-1 sudo[266958]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-1 sudo[266983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:36:29 compute-1 sudo[266983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-1 sudo[266983]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-1 sudo[267008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:29 compute-1 sudo[267008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-1 sudo[267008]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:29 compute-1 sudo[267033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:36:29 compute-1 sudo[267033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3622962585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2996427117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:29 compute-1 sudo[267033]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:30 compute-1 nova_compute[230518]: 2025-10-02 12:36:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:30 compute-1 ceph-mon[80926]: pgmap v1801: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/795182459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:36:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:36:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:30.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:30 compute-1 systemd[1]: Stopping User Manager for UID 42436...
Oct 02 12:36:30 compute-1 systemd[266891]: Activating special unit Exit the Session...
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped target Main User Target.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped target Basic System.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped target Paths.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped target Sockets.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped target Timers.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 02 12:36:30 compute-1 systemd[266891]: Closed D-Bus User Message Bus Socket.
Oct 02 12:36:30 compute-1 systemd[266891]: Stopped Create User's Volatile Files and Directories.
Oct 02 12:36:30 compute-1 systemd[266891]: Removed slice User Application Slice.
Oct 02 12:36:30 compute-1 systemd[266891]: Reached target Shutdown.
Oct 02 12:36:30 compute-1 systemd[266891]: Finished Exit the Session.
Oct 02 12:36:30 compute-1 systemd[266891]: Reached target Exit the Session.
Oct 02 12:36:30 compute-1 systemd[1]: user@42436.service: Deactivated successfully.
Oct 02 12:36:30 compute-1 systemd[1]: Stopped User Manager for UID 42436.
Oct 02 12:36:31 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Oct 02 12:36:31 compute-1 systemd[1]: run-user-42436.mount: Deactivated successfully.
Oct 02 12:36:31 compute-1 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Oct 02 12:36:31 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Oct 02 12:36:31 compute-1 systemd[1]: Removed slice User Slice of UID 42436.
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.197 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.198 2 DEBUG nova.compute.manager [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.198 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.676 2 DEBUG nova.network.neutron [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.849 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.856 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:31 compute-1 nova_compute[230518]: 2025-10-02 12:36:31.856 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:32 compute-1 nova_compute[230518]: 2025-10-02 12:36:32.143 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Oct 02 12:36:32 compute-1 nova_compute[230518]: 2025-10-02 12:36:32.146 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:36:32 compute-1 nova_compute[230518]: 2025-10-02 12:36:32.146 2 INFO nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Creating image(s)
Oct 02 12:36:32 compute-1 nova_compute[230518]: 2025-10-02 12:36:32.191 2 DEBUG nova.storage.rbd_utils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] creating snapshot(nova-resize) on rbd image(a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:36:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:32.287 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:32 compute-1 nova_compute[230518]: 2025-10-02 12:36:32.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:32.289 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:36:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:32 compute-1 ceph-mon[80926]: pgmap v1802: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 12:36:33 compute-1 nova_compute[230518]: 2025-10-02 12:36:33.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 e249: 3 total, 3 up, 3 in
Oct 02 12:36:33 compute-1 nova_compute[230518]: 2025-10-02 12:36:33.225 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:34 compute-1 ceph-mon[80926]: osdmap e249: 3 total, 3 up, 3 in
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.407 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.408 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Ensure instance console log exists: /var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.408 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.409 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.409 2 DEBUG oslo_concurrency.lockutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.411 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Start _get_guest_xml network_info=[{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.415 2 WARNING nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.425 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.426 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.430 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.431 2 DEBUG nova.virt.libvirt.host [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:44Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='475e3257-fad6-494a-9174-56c6af5e0ac9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.432 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.433 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.virt.hardware [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.434 2 DEBUG nova.objects.instance [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.487 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:34.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:34.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4108097679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.939 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:34 compute-1 nova_compute[230518]: 2025-10-02 12:36:34.976 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:35 compute-1 ceph-mon[80926]: pgmap v1804: 305 pgs: 305 active+clean; 192 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 3.2 MiB/s wr, 25 op/s
Oct 02 12:36:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4108097679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:36:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3385465174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.398 2 DEBUG oslo_concurrency.processutils [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.402 2 DEBUG nova.virt.libvirt.vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.402 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.404 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.409 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <uuid>a7ee799a-27f6-41a6-86dc-694c480fc3a1</uuid>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <name>instance-0000005d</name>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <memory>196608</memory>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestJSON-server-1748262975</nova:name>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:36:34</nova:creationTime>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:flavor name="m1.micro">
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:memory>192</nova:memory>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:user uuid="71d69bc37f274fad8a0b06c0b96f2a64">tempest-ServerActionsTestJSON-226762235-project-member</nova:user>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:project uuid="3b295760a6d74c82bd0f9ee4154d7d10">tempest-ServerActionsTestJSON-226762235</nova:project>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <nova:port uuid="b31bc9d2-5589-460c-9a78-a1d800087345">
Oct 02 12:36:35 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <system>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="serial">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="uuid">a7ee799a-27f6-41a6-86dc-694c480fc3a1</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </system>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <os>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </os>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <features>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </features>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk">
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </source>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a7ee799a-27f6-41a6-86dc-694c480fc3a1_disk.config">
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </source>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:36:35 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:46:e0:75"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <target dev="tapb31bc9d2-55"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/a7ee799a-27f6-41a6-86dc-694c480fc3a1/console.log" append="off"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <video>
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </video>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:36:35 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:36:35 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:36:35 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:36:35 compute-1 nova_compute[230518]: </domain>
Oct 02 12:36:35 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.411 2 DEBUG nova.virt.libvirt.vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:35:50Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:36:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.412 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1480512928-network", "vif_mac": "fa:16:3e:46:e0:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.413 2 DEBUG nova.network.os_vif_util [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.414 2 DEBUG os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.416 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb31bc9d2-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb31bc9d2-55, col_values=(('external_ids', {'iface-id': 'b31bc9d2-5589-460c-9a78-a1d800087345', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:e0:75', 'vm-uuid': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.4261] manager: (tapb31bc9d2-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.434 2 INFO os_vif [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.519 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.519 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.520 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] No VIF found with MAC fa:16:3e:46:e0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.521 2 INFO nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Using config drive
Oct 02 12:36:35 compute-1 kernel: tapb31bc9d2-55: entered promiscuous mode
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.6329] manager: (tapb31bc9d2-55): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct 02 12:36:35 compute-1 ovn_controller[129257]: 2025-10-02T12:36:35Z|00402|binding|INFO|Claiming lport b31bc9d2-5589-460c-9a78-a1d800087345 for this chassis.
Oct 02 12:36:35 compute-1 ovn_controller[129257]: 2025-10-02T12:36:35Z|00403|binding|INFO|b31bc9d2-5589-460c-9a78-a1d800087345: Claiming fa:16:3e:46:e0:75 10.100.0.12
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.6484] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.6490] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct 02 12:36:35 compute-1 systemd-udevd[267257]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.664 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.665 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff bound to our chassis
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.667 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:36:35 compute-1 systemd-machined[188247]: New machine qemu-47-instance-0000005d.
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.677 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc3d769-f53c-4ccb-b2c7-15d2509d41f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.679 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf011efa4-01 in ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.680 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf011efa4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbea2b9-c4d0-4254-9bf7-f54cb5a9d390]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d474fcfb-df02-46e4-8a6e-b1bce15ab6d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.6872] device (tapb31bc9d2-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.6879] device (tapb31bc9d2-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.695 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[23147e73-f273-45dd-81c4-71b05f927c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.722 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05ced32e-c8dd-43bb-9839-434d5dc62ff0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 systemd[1]: Started Virtual Machine qemu-47-instance-0000005d.
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.749 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2006e9b5-b96f-435a-8e2d-813b2f7b509a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.755 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c50a0c-c9e6-4eae-870b-5e342ba97b2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.7562] manager: (tapf011efa4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/191)
Oct 02 12:36:35 compute-1 systemd-udevd[267262]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.785 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcc86a6-ed9d-48a9-b3cb-e20296ffa57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.787 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[59a651d7-067d-494f-860c-c62e1d37f29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.8062] device (tapf011efa4-00): carrier: link connected
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.811 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0383039b-e35f-4af9-9fc5-94ed07d988cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.828 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cfab19ba-5c9c-4353-be45-a4eab07149a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645936, 'reachable_time': 44253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267291, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.841 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1374be-3291-4552-a988-8f32eadfe9cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feed:1a7a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645936, 'tstamp': 645936}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267292, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[943c78dc-ae88-4e11-902e-2f69e1fb48eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf011efa4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ed:1a:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645936, 'reachable_time': 44253, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267293, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_controller[129257]: 2025-10-02T12:36:35Z|00404|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 ovn-installed in OVS
Oct 02 12:36:35 compute-1 ovn_controller[129257]: 2025-10-02T12:36:35Z|00405|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 up in Southbound
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c958e2-e2ee-4871-a6b3-ad4eab2ced2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.948 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1be787-f722-4342-b378-9fba7b726183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.950 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.950 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.951 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf011efa4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 NetworkManager[44960]: <info>  [1759408595.9795] manager: (tapf011efa4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Oct 02 12:36:35 compute-1 kernel: tapf011efa4-00: entered promiscuous mode
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:35.985 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf011efa4-00, col_values=(('external_ids', {'iface-id': '678ebd13-2235-4191-a2a2-1f6e29399ca6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:35 compute-1 nova_compute[230518]: 2025-10-02 12:36:35.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:35 compute-1 ovn_controller[129257]: 2025-10-02T12:36:35Z|00406|binding|INFO|Releasing lport 678ebd13-2235-4191-a2a2-1f6e29399ca6 from this chassis (sb_readonly=0)
Oct 02 12:36:36 compute-1 nova_compute[230518]: 2025-10-02 12:36:36.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.017 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.018 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49556ded-19ae-4b07-be48-658def98cbd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.019 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f011efa4-0132-405c-bb45-09d0a9352eff.pid.haproxy
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f011efa4-0132-405c-bb45-09d0a9352eff
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:36:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:36.020 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'env', 'PROCESS_TAG=haproxy-f011efa4-0132-405c-bb45-09d0a9352eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f011efa4-0132-405c-bb45-09d0a9352eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:36:36 compute-1 podman[267325]: 2025-10-02 12:36:36.382500154 +0000 UTC m=+0.062541488 container create 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:36 compute-1 podman[267325]: 2025-10-02 12:36:36.343963651 +0000 UTC m=+0.024004985 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:36:36 compute-1 systemd[1]: Started libpod-conmon-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope.
Oct 02 12:36:36 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:36:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df5e31392e82218d0f29695cbcb785ef01003b7693d75e9e22f28e44e1b45cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:36:36 compute-1 podman[267325]: 2025-10-02 12:36:36.487182047 +0000 UTC m=+0.167223351 container init 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:36:36 compute-1 podman[267325]: 2025-10-02 12:36:36.492017649 +0000 UTC m=+0.172058913 container start 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:36:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3385465174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:36 compute-1 ceph-mon[80926]: pgmap v1805: 305 pgs: 305 active+clean; 210 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 110 op/s
Oct 02 12:36:36 compute-1 podman[267338]: 2025-10-02 12:36:36.513126344 +0000 UTC m=+0.078399288 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:36:36 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : New worker (267385) forked
Oct 02 12:36:36 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : Loading success.
Oct 02 12:36:36 compute-1 podman[267339]: 2025-10-02 12:36:36.526639159 +0000 UTC m=+0.091631964 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:36:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:36.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:36.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.518 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408597.5181305, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.519 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Resumed (Lifecycle Event)
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.521 2 DEBUG nova.compute.manager [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.525 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance running successfully.
Oct 02 12:36:37 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.527 2 DEBUG nova.virt.libvirt.guest [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 12:36:37 compute-1 nova_compute[230518]: 2025-10-02 12:36:37.528 2 DEBUG nova.virt.libvirt.driver [None req-cecaa542-1f67-41d1-bc22-c6ab1767f902 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:38 compute-1 ceph-mon[80926]: pgmap v1806: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.781 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.785 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.824 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.824 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408597.5183616, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.825 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Started (Lifecycle Event)
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.870 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:36:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:38.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.945 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.946 2 DEBUG nova.network.neutron [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.970 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] During sync_power_state the instance has a pending task (resize_finish). Skip.
Oct 02 12:36:38 compute-1 nova_compute[230518]: 2025-10-02 12:36:38.992 2 DEBUG oslo_concurrency.lockutils [req-57244e78-57fb-4909-83e7-c31e155fc6ac req-17d269e8-ea4e-4d81-b28b-ede8069ca5db 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:39 compute-1 sudo[267436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:36:39 compute-1 sudo[267436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:39 compute-1 sudo[267436]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:39 compute-1 sudo[267461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:36:39 compute-1 sudo[267461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:36:39 compute-1 sudo[267461]: pam_unix(sudo:session): session closed for user root
Oct 02 12:36:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.990 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.991 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.991 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.992 2 DEBUG oslo_concurrency.lockutils [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.992 2 DEBUG nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:39 compute-1 nova_compute[230518]: 2025-10-02 12:36:39.992 2 WARNING nova.compute.manager [req-f41409dc-0cc3-4272-9e7c-27008507a91e req-0561238c-d556-4ad5-8141-cb20423763da 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.
Oct 02 12:36:40 compute-1 nova_compute[230518]: 2025-10-02 12:36:40.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:40.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:40 compute-1 ceph-mon[80926]: pgmap v1807: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct 02 12:36:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:41.292 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.223 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.223 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG oslo_concurrency.lockutils [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.224 2 DEBUG nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:42 compute-1 nova_compute[230518]: 2025-10-02 12:36:42.224 2 WARNING nova.compute.manager [req-b9142571-47d4-4144-8939-d699324ded3b req-6332066e-5187-405e-a376-6174ae533f48 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state None.
Oct 02 12:36:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:42.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:42.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:43 compute-1 nova_compute[230518]: 2025-10-02 12:36:43.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:43 compute-1 ceph-mon[80926]: pgmap v1808: 305 pgs: 305 active+clean; 213 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.1 MiB/s wr, 213 op/s
Oct 02 12:36:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/682488130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:43 compute-1 nova_compute[230518]: 2025-10-02 12:36:43.717 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Port b31bc9d2-5589-460c-9a78-a1d800087345 binding to destination host compute-1.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171
Oct 02 12:36:43 compute-1 nova_compute[230518]: 2025-10-02 12:36:43.717 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:43 compute-1 nova_compute[230518]: 2025-10-02 12:36:43.718 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:43 compute-1 nova_compute[230518]: 2025-10-02 12:36:43.718 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:36:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1912308509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:36:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2064274681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:36:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:44.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:36:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:45 compute-1 ceph-mon[80926]: pgmap v1809: 305 pgs: 305 active+clean; 205 MiB data, 769 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 754 KiB/s wr, 228 op/s
Oct 02 12:36:45 compute-1 nova_compute[230518]: 2025-10-02 12:36:45.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:46 compute-1 nova_compute[230518]: 2025-10-02 12:36:46.666 2 DEBUG nova.network.neutron [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:46 compute-1 nova_compute[230518]: 2025-10-02 12:36:46.854 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:46.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:46.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:46 compute-1 kernel: tapb31bc9d2-55 (unregistering): left promiscuous mode
Oct 02 12:36:46 compute-1 NetworkManager[44960]: <info>  [1759408606.9668] device (tapb31bc9d2-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:36:46 compute-1 ovn_controller[129257]: 2025-10-02T12:36:46Z|00407|binding|INFO|Releasing lport b31bc9d2-5589-460c-9a78-a1d800087345 from this chassis (sb_readonly=0)
Oct 02 12:36:46 compute-1 ovn_controller[129257]: 2025-10-02T12:36:46Z|00408|binding|INFO|Setting lport b31bc9d2-5589-460c-9a78-a1d800087345 down in Southbound
Oct 02 12:36:46 compute-1 nova_compute[230518]: 2025-10-02 12:36:46.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:46 compute-1 ovn_controller[129257]: 2025-10-02T12:36:46Z|00409|binding|INFO|Removing iface tapb31bc9d2-55 ovn-installed in OVS
Oct 02 12:36:46 compute-1 nova_compute[230518]: 2025-10-02 12:36:46.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.987 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:e0:75 10.100.0.12'], port_security=['fa:16:3e:46:e0:75 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a7ee799a-27f6-41a6-86dc-694c480fc3a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f011efa4-0132-405c-bb45-09d0a9352eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b295760a6d74c82bd0f9ee4154d7d10', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fdfac51-abac-4e22-93ab-c3b799f666ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.191', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fb0467f7-89dd-496a-881c-2161153c6831, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b31bc9d2-5589-460c-9a78-a1d800087345) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:36:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.988 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b31bc9d2-5589-460c-9a78-a1d800087345 in datapath f011efa4-0132-405c-bb45-09d0a9352eff unbound from our chassis
Oct 02 12:36:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.990 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f011efa4-0132-405c-bb45-09d0a9352eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:36:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.994 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f72824-45aa-455c-adeb-f80e7667fbb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:46.995 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff namespace which is not needed anymore
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:46.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct 02 12:36:47 compute-1 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005d.scope: Consumed 11.189s CPU time.
Oct 02 12:36:47 compute-1 systemd-machined[188247]: Machine qemu-47-instance-0000005d terminated.
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : haproxy version is 2.8.14-c23fe91
Oct 02 12:36:47 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [NOTICE]   (267381) : path to executable is /usr/sbin/haproxy
Oct 02 12:36:47 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [WARNING]  (267381) : Exiting Master process...
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [ALERT]    (267381) : Current worker (267385) exited with code 143 (Terminated)
Oct 02 12:36:47 compute-1 neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff[267352]: [WARNING]  (267381) : All workers exited. Exiting... (0)
Oct 02 12:36:47 compute-1 systemd[1]: libpod-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope: Deactivated successfully.
Oct 02 12:36:47 compute-1 conmon[267352]: conmon 8d5a5a907005fc67cc33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope/container/memory.events
Oct 02 12:36:47 compute-1 podman[267511]: 2025-10-02 12:36:47.133671045 +0000 UTC m=+0.052142091 container died 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.147 2 INFO nova.virt.libvirt.driver [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Instance destroyed successfully.
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.148 2 DEBUG nova.objects.instance [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'resources' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:36:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-3df5e31392e82218d0f29695cbcb785ef01003b7693d75e9e22f28e44e1b45cf-merged.mount: Deactivated successfully.
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.171 2 DEBUG nova.virt.libvirt.vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:35:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1748262975',display_name='tempest-ServerActionsTestJSON-server-1748262975',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1748262975',id=93,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDk5dDGw5Bu2rng/rtJXukeQfT1rmojbFD9r8VMq7oHOm+UEI4T9olVTmT96u9J+l+5CRhWq5N/yd4gNn+alqn5YyIzJwOAgpJuEqULncvUdrF3nOz+qfm+KciHWNzzl+w==',key_name='tempest-keypair-2067882672',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:36:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b295760a6d74c82bd0f9ee4154d7d10',ramdisk_id='',reservation_id='r-plwxzt7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-226762235',owner_user_name='tempest-ServerActionsTestJSON-226762235-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:36:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='71d69bc37f274fad8a0b06c0b96f2a64',uuid=a7ee799a-27f6-41a6-86dc-694c480fc3a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.171 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converting VIF {"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.173 2 DEBUG nova.network.os_vif_util [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.173 2 DEBUG os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.175 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb31bc9d2-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.180 2 INFO os_vif [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:e0:75,bridge_name='br-int',has_traffic_filtering=True,id=b31bc9d2-5589-460c-9a78-a1d800087345,network=Network(f011efa4-0132-405c-bb45-09d0a9352eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb31bc9d2-55')
Oct 02 12:36:47 compute-1 podman[267511]: 2025-10-02 12:36:47.18148721 +0000 UTC m=+0.099958246 container cleanup 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.184 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.184 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:47 compute-1 systemd[1]: libpod-conmon-8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3.scope: Deactivated successfully.
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.228 2 DEBUG nova.objects.instance [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lazy-loading 'migration_context' on Instance uuid a7ee799a-27f6-41a6-86dc-694c480fc3a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:36:47 compute-1 podman[267549]: 2025-10-02 12:36:47.249730507 +0000 UTC m=+0.046205445 container remove 8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.253 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.254 2 DEBUG oslo_concurrency.lockutils [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.254 2 DEBUG nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.254 2 WARNING nova.compute.manager [req-f98d5c6d-82ca-46fd-a88f-1559ee628b8b req-5a75bda4-ed9a-4c5f-a8bc-b729df091a47 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-unplugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3066fd6-6a79-4d15-8a7f-79583da5cb21]: (4, ('Thu Oct  2 12:36:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3)\n8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3\nThu Oct  2 12:36:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff (8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3)\n8d5a5a907005fc67cc333df9c0d20b555a0824dee6413e7a9c8271b1b5b13cf3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.259 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fa009f-a048-41e1-8eb3-6fa4b234cd58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.260 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf011efa4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:36:47 compute-1 kernel: tapf011efa4-00: left promiscuous mode
Oct 02 12:36:47 compute-1 ceph-mon[80926]: pgmap v1810: 305 pgs: 305 active+clean; 175 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 661 KiB/s wr, 209 op/s
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee03e1d-9744-4751-adf0-86b92464f537]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.313 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[912fe4cb-ae10-4dc1-a1e0-94cc412c4fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.314 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7be7ab24-80c9-4c1a-8821-4609448d9be7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.330 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.335 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aaadb1ce-e8d5-428b-bed7-9ff68213976d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645930, 'reachable_time': 29469, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267564, 'error': None, 'target': 'ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 systemd[1]: run-netns-ovnmeta\x2df011efa4\x2d0132\x2d405c\x2dbb45\x2d09d0a9352eff.mount: Deactivated successfully.
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.337 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f011efa4-0132-405c-bb45-09d0a9352eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:36:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:36:47.338 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c8965c42-7631-487c-97ee-6503e384be80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:36:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1057747748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.755 2 DEBUG oslo_concurrency.processutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:47 compute-1 nova_compute[230518]: 2025-10-02 12:36:47.761 2 DEBUG nova.compute.provider_tree [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:48 compute-1 nova_compute[230518]: 2025-10-02 12:36:48.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:48 compute-1 nova_compute[230518]: 2025-10-02 12:36:48.086 2 DEBUG nova.scheduler.client.report [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:48 compute-1 nova_compute[230518]: 2025-10-02 12:36:48.188 2 DEBUG oslo_concurrency.lockutils [None req-be5fb5ac-209d-43c2-99d1-84801fa012c1 71d69bc37f274fad8a0b06c0b96f2a64 3b295760a6d74c82bd0f9ee4154d7d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1057747748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:48.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:48.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.112 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.392 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.393 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.393 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.394 2 DEBUG oslo_concurrency.lockutils [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.394 2 DEBUG nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.394 2 WARNING nova.compute.manager [req-d0ff463f-be20-47e6-b192-dbee2d74d17e req-759e5da2-2161-4cd2-a2f7-e51a625f2912 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:36:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4257555595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.548 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:49 compute-1 ceph-mon[80926]: pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 26 KiB/s wr, 143 op/s
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.733 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.734 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4490MB free_disk=20.921703338623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:36:49 compute-1 nova_compute[230518]: 2025-10-02 12:36:49.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.291 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.291 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.327 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:36:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035015947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.818 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.823 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.883 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:36:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:50.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.933 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:36:50 compute-1 nova_compute[230518]: 2025-10-02 12:36:50.934 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:36:50 compute-1 ceph-mon[80926]: pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct 02 12:36:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4257555595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3035015947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.141 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG nova.compute.manager [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing instance network info cache due to event network-changed-b31bc9d2-5589-460c-9a78-a1d800087345. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.142 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Refreshing network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:36:52 compute-1 nova_compute[230518]: 2025-10-02 12:36:52.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:53 compute-1 nova_compute[230518]: 2025-10-02 12:36:53.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:53 compute-1 ceph-mon[80926]: pgmap v1813: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 16 KiB/s wr, 130 op/s
Oct 02 12:36:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2931010887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:53 compute-1 nova_compute[230518]: 2025-10-02 12:36:53.909 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:36:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3774149522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:54 compute-1 podman[267634]: 2025-10-02 12:36:54.826436357 +0000 UTC m=+0.067808375 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:36:54 compute-1 nova_compute[230518]: 2025-10-02 12:36:54.876 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updated VIF entry in instance network info cache for port b31bc9d2-5589-460c-9a78-a1d800087345. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:36:54 compute-1 nova_compute[230518]: 2025-10-02 12:36:54.876 2 DEBUG nova.network.neutron [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Updating instance_info_cache with network_info: [{"id": "b31bc9d2-5589-460c-9a78-a1d800087345", "address": "fa:16:3e:46:e0:75", "network": {"id": "f011efa4-0132-405c-bb45-09d0a9352eff", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1480512928-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b295760a6d74c82bd0f9ee4154d7d10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb31bc9d2-55", "ovs_interfaceid": "b31bc9d2-5589-460c-9a78-a1d800087345", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:36:54 compute-1 podman[267633]: 2025-10-02 12:36:54.896327186 +0000 UTC m=+0.131747656 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:36:54 compute-1 nova_compute[230518]: 2025-10-02 12:36:54.909 2 DEBUG oslo_concurrency.lockutils [req-b63f3897-9efa-4350-b124-a5e97fa15b9c req-ee060bec-a13f-43e3-9c17-3cf9a6debc12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a7ee799a-27f6-41a6-86dc-694c480fc3a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:36:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:54.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:55 compute-1 nova_compute[230518]: 2025-10-02 12:36:55.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:56 compute-1 ceph-mon[80926]: pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 109 op/s
Oct 02 12:36:56 compute-1 nova_compute[230518]: 2025-10-02 12:36:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:56 compute-1 nova_compute[230518]: 2025-10-02 12:36:56.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:56 compute-1 nova_compute[230518]: 2025-10-02 12:36:56.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:36:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:56.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:36:57 compute-1 nova_compute[230518]: 2025-10-02 12:36:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:36:57 compute-1 nova_compute[230518]: 2025-10-02 12:36:57.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:36:57 compute-1 nova_compute[230518]: 2025-10-02 12:36:57.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:57 compute-1 ceph-mon[80926]: pgmap v1815: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Oct 02 12:36:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1580588440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:36:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 e250: 3 total, 3 up, 3 in
Oct 02 12:36:58 compute-1 nova_compute[230518]: 2025-10-02 12:36:58.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:36:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:36:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:36:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:36:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:36:58.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:36:59 compute-1 ceph-mon[80926]: pgmap v1816: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 82 op/s
Oct 02 12:36:59 compute-1 ceph-mon[80926]: osdmap e250: 3 total, 3 up, 3 in
Oct 02 12:36:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3305874569' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.310 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.311 2 DEBUG oslo_concurrency.lockutils [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a7ee799a-27f6-41a6-86dc-694c480fc3a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.312 2 DEBUG nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] No waiting events found dispatching network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:01 compute-1 nova_compute[230518]: 2025-10-02 12:37:01.312 2 WARNING nova.compute.manager [req-5d5742e9-ff70-4d42-b023-c01df24df7d7 req-29402bf4-70d9-4a43-adcb-345a5a9a56a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Received unexpected event network-vif-plugged-b31bc9d2-5589-460c-9a78-a1d800087345 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:37:01 compute-1 ceph-mon[80926]: pgmap v1818: 305 pgs: 305 active+clean; 167 MiB data, 756 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 89 op/s
Oct 02 12:37:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3862867357' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1562042604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:02 compute-1 nova_compute[230518]: 2025-10-02 12:37:02.145 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408607.1438193, a7ee799a-27f6-41a6-86dc-694c480fc3a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:02 compute-1 nova_compute[230518]: 2025-10-02 12:37:02.146 2 INFO nova.compute.manager [-] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] VM Stopped (Lifecycle Event)
Oct 02 12:37:02 compute-1 nova_compute[230518]: 2025-10-02 12:37:02.168 2 DEBUG nova.compute.manager [None req-3ed406af-64b2-4085-a977-134358c323ed - - - - - -] [instance: a7ee799a-27f6-41a6-86dc-694c480fc3a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:02 compute-1 nova_compute[230518]: 2025-10-02 12:37:02.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:02.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:02.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:02 compute-1 ceph-mon[80926]: pgmap v1819: 305 pgs: 305 active+clean; 175 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 987 KiB/s wr, 81 op/s
Oct 02 12:37:03 compute-1 nova_compute[230518]: 2025-10-02 12:37:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.770 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.770 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.829 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.933 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.934 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:04.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.946 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:04 compute-1 nova_compute[230518]: 2025-10-02 12:37:04.947 2 INFO nova.compute.claims [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:37:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:04.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:05 compute-1 ceph-mon[80926]: pgmap v1820: 305 pgs: 305 active+clean; 183 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Oct 02 12:37:05 compute-1 nova_compute[230518]: 2025-10-02 12:37:05.186 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:37:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:37:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4173785915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:05 compute-1 nova_compute[230518]: 2025-10-02 12:37:05.670 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:05 compute-1 nova_compute[230518]: 2025-10-02 12:37:05.678 2 DEBUG nova.compute.provider_tree [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:05 compute-1 nova_compute[230518]: 2025-10-02 12:37:05.788 2 DEBUG nova.scheduler.client.report [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 e251: 3 total, 3 up, 3 in
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.008 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.009 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.114 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.114 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:37:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:37:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/309699728' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:37:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4173785915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:06 compute-1 ceph-mon[80926]: osdmap e251: 3 total, 3 up, 3 in
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.145 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.146 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.197 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.244 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.394 2 DEBUG nova.policy [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1215de74baa4b7f8522ec44b7a4630b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15fd01e26e294206846c155a766b0ad2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.416 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.417 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.418 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating image(s)
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.450 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.484 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.514 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.518 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.616 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.617 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.617 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.618 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.648 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.653 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:06 compute-1 podman[267789]: 2025-10-02 12:37:06.809802581 +0000 UTC m=+0.059023768 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:06 compute-1 podman[267790]: 2025-10-02 12:37:06.828554401 +0000 UTC m=+0.079634937 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:37:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:06.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:06 compute-1 nova_compute[230518]: 2025-10-02 12:37:06.984 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.076 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] resizing rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:07 compute-1 ceph-mon[80926]: pgmap v1821: 305 pgs: 305 active+clean; 189 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 103 op/s
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.212 2 DEBUG nova.objects.instance [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'migration_context' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.235 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.235 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Ensure instance console log exists: /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.236 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.237 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.237 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:07 compute-1 nova_compute[230518]: 2025-10-02 12:37:07.326 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Successfully created port: 8de4019e-8174-4b43-9510-73318fd6ff8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.284 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Successfully updated port: 8de4019e-8174-4b43-9510-73318fd6ff8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.317 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.318 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquired lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.318 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG nova.compute.manager [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-changed-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG nova.compute.manager [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Refreshing instance network info cache due to event network-changed-8de4019e-8174-4b43-9510-73318fd6ff8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.426 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:08 compute-1 nova_compute[230518]: 2025-10-02 12:37:08.857 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:37:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:08.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:08.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:09 compute-1 ceph-mon[80926]: pgmap v1823: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Oct 02 12:37:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2542507121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.611 2 DEBUG nova.network.neutron [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.631 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Releasing lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.631 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance network_info: |[{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.632 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.632 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Refreshing network info cache for port 8de4019e-8174-4b43-9510-73318fd6ff8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.634 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start _get_guest_xml network_info=[{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.638 2 WARNING nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.644 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.645 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.657 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.661 2 DEBUG nova.virt.libvirt.host [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.666 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.666 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.667 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.668 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.669 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.670 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.670 2 DEBUG nova.virt.hardware [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:37:10 compute-1 nova_compute[230518]: 2025-10-02 12:37:10.676 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:10.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:10.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031192021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.188 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:11 compute-1 ceph-mon[80926]: pgmap v1824: 305 pgs: 305 active+clean; 199 MiB data, 798 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Oct 02 12:37:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1031192021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.236 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.243 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/850267506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.749 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.753 2 DEBUG nova.virt.libvirt.vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_user_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:06Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.754 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.756 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.758 2 DEBUG nova.objects.instance [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.801 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <uuid>1319f89a-ec57-41aa-b53e-07f2280a0d87</uuid>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <name>instance-00000061</name>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1958706210</nova:name>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:37:10</nova:creationTime>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:user uuid="f1215de74baa4b7f8522ec44b7a4630b">tempest-ServerMetadataNegativeTestJSON-354053647-project-member</nova:user>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:project uuid="15fd01e26e294206846c155a766b0ad2">tempest-ServerMetadataNegativeTestJSON-354053647</nova:project>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <nova:port uuid="8de4019e-8174-4b43-9510-73318fd6ff8d">
Oct 02 12:37:11 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <system>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="serial">1319f89a-ec57-41aa-b53e-07f2280a0d87</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="uuid">1319f89a-ec57-41aa-b53e-07f2280a0d87</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </system>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <os>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </os>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <features>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </features>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1319f89a-ec57-41aa-b53e-07f2280a0d87_disk">
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </source>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config">
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </source>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:37:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:ee:67:9e"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <target dev="tap8de4019e-81"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/console.log" append="off"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <video>
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </video>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:37:11 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:37:11 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:37:11 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:37:11 compute-1 nova_compute[230518]: </domain>
Oct 02 12:37:11 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.803 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Preparing to wait for external event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.804 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.805 2 DEBUG nova.virt.libvirt.vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_u
ser_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:06Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.806 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.807 2 DEBUG nova.network.os_vif_util [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.807 2 DEBUG os_vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8de4019e-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8de4019e-81, col_values=(('external_ids', {'iface-id': '8de4019e-8174-4b43-9510-73318fd6ff8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:67:9e', 'vm-uuid': '1319f89a-ec57-41aa-b53e-07f2280a0d87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:11 compute-1 NetworkManager[44960]: <info>  [1759408631.8176] manager: (tap8de4019e-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.825 2 INFO os_vif [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81')
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.885 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.885 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.886 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] No VIF found with MAC fa:16:3e:ee:67:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.886 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Using config drive
Oct 02 12:37:11 compute-1 nova_compute[230518]: 2025-10-02 12:37:11.922 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/850267506' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.600 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updated VIF entry in instance network info cache for port 8de4019e-8174-4b43-9510-73318fd6ff8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.601 2 DEBUG nova.network.neutron [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [{"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.619 2 DEBUG oslo_concurrency.lockutils [req-9aef5093-3f70-4575-ab3c-ca957429ca6a req-d1f5ff70-cf25-4a68-b245-71519a8492b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1319f89a-ec57-41aa-b53e-07f2280a0d87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.815 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Creating config drive at /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.824 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgczgmo1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:12.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:12.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:12 compute-1 nova_compute[230518]: 2025-10-02 12:37:12.978 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgczgmo1s" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.013 2 DEBUG nova.storage.rbd_utils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] rbd image 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.017 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.218 2 DEBUG oslo_concurrency.processutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config 1319f89a-ec57-41aa-b53e-07f2280a0d87_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.202s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.219 2 INFO nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deleting local config drive /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87/disk.config because it was imported into RBD.
Oct 02 12:37:13 compute-1 ceph-mon[80926]: pgmap v1825: 305 pgs: 305 active+clean; 199 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Oct 02 12:37:13 compute-1 kernel: tap8de4019e-81: entered promiscuous mode
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.2761] manager: (tap8de4019e-81): new Tun device (/org/freedesktop/NetworkManager/Devices/194)
Oct 02 12:37:13 compute-1 ovn_controller[129257]: 2025-10-02T12:37:13Z|00410|binding|INFO|Claiming lport 8de4019e-8174-4b43-9510-73318fd6ff8d for this chassis.
Oct 02 12:37:13 compute-1 ovn_controller[129257]: 2025-10-02T12:37:13Z|00411|binding|INFO|8de4019e-8174-4b43-9510-73318fd6ff8d: Claiming fa:16:3e:ee:67:9e 10.100.0.8
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.290 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:9e 10.100.0.8'], port_security=['fa:16:3e:ee:67:9e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1319f89a-ec57-41aa-b53e-07f2280a0d87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e5ac91-4365-415e-86f3-b8d99d311f47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15fd01e26e294206846c155a766b0ad2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '010fb747-de4b-49ec-8a2a-286d666ddcca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5628d560-7d90-4922-8442-872cb81c1d7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de4019e-8174-4b43-9510-73318fd6ff8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.291 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de4019e-8174-4b43-9510-73318fd6ff8d in datapath 40e5ac91-4365-415e-86f3-b8d99d311f47 bound to our chassis
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.293 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 40e5ac91-4365-415e-86f3-b8d99d311f47
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[00a8bbb2-c10b-433d-af55-7197773b2095]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.313 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap40e5ac91-41 in ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.316 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap40e5ac91-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.316 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81c045e8-9e93-4199-9d80-bfff73e92fc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2364200-1cc7-4506-8fb9-da1c40230267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 systemd-udevd[268038]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:13 compute-1 systemd-machined[188247]: New machine qemu-48-instance-00000061.
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.338 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[91e5e837-446f-4189-86a7-275eb627da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.3414] device (tap8de4019e-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.3427] device (tap8de4019e-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 systemd[1]: Started Virtual Machine qemu-48-instance-00000061.
Oct 02 12:37:13 compute-1 ovn_controller[129257]: 2025-10-02T12:37:13Z|00412|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d ovn-installed in OVS
Oct 02 12:37:13 compute-1 ovn_controller[129257]: 2025-10-02T12:37:13Z|00413|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d up in Southbound
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.378 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0929faf-12c4-4441-b114-b05b765f2edb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.417 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0481578c-ad2d-4000-8e89-198ee98b7daf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 systemd-udevd[268042]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.4246] manager: (tap40e5ac91-40): new Veth device (/org/freedesktop/NetworkManager/Devices/195)
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e7336c1e-d22d-4e86-82a7-3809610aea31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.464 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3eadde05-5878-40c7-9993-8480084f627b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.468 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c28e5d5-b76a-4276-ad59-1e5e366552df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.5058] device (tap40e5ac91-40): carrier: link connected
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6f872d00-b099-4fd4-bd68-5cb58f85dc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.541 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae7c05-5442-4867-af35-147b302fa309]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e5ac91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:36:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649706, 'reachable_time': 37352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268071, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcad257-fdcc-4c68-b453-e35763d00141]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:3672'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649706, 'tstamp': 649706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268086, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.586 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c285701a-864f-4dbc-a63c-9bc094f229fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap40e5ac91-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:36:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649706, 'reachable_time': 37352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268089, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.636 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1dae05e0-c3e6-4347-a33d-f6727a5d9183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.712 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c32eba40-ff81-4408-8099-a4292f16d2c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.713 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e5ac91-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.714 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap40e5ac91-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 NetworkManager[44960]: <info>  [1759408633.7180] manager: (tap40e5ac91-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Oct 02 12:37:13 compute-1 kernel: tap40e5ac91-40: entered promiscuous mode
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.722 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap40e5ac91-40, col_values=(('external_ids', {'iface-id': 'eb0bb96d-21f2-4509-98d2-52f4693069e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 ovn_controller[129257]: 2025-10-02T12:37:13Z|00414|binding|INFO|Releasing lport eb0bb96d-21f2-4509-98d2-52f4693069e8 from this chassis (sb_readonly=0)
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.739 2 DEBUG nova.compute.manager [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.739 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.740 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.740 2 DEBUG oslo_concurrency.lockutils [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.741 2 DEBUG nova.compute.manager [req-68b6f114-99f9-43db-a7ef-72f3d35df4a8 req-44e44ae6-f7ea-44a4-b037-cea81109567a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Processing event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:37:13 compute-1 nova_compute[230518]: 2025-10-02 12:37:13.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.755 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c44ffc63-c38e-420b-aad7-f1735979a7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.757 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-40e5ac91-4365-415e-86f3-b8d99d311f47
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/40e5ac91-4365-415e-86f3-b8d99d311f47.pid.haproxy
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 40e5ac91-4365-415e-86f3-b8d99d311f47
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:37:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:13.757 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'env', 'PROCESS_TAG=haproxy-40e5ac91-4365-415e-86f3-b8d99d311f47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/40e5ac91-4365-415e-86f3-b8d99d311f47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:37:14 compute-1 podman[268147]: 2025-10-02 12:37:14.196331867 +0000 UTC m=+0.057681826 container create a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.199 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.1990738, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.200 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Started (Lifecycle Event)
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.202 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.211 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.214 2 INFO nova.virt.libvirt.driver [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance spawned successfully.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.215 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:37:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1533087876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.241 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:14 compute-1 systemd[1]: Started libpod-conmon-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.248 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.255 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.255 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.256 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.257 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.258 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.258 2 DEBUG nova.virt.libvirt.driver [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:14 compute-1 podman[268147]: 2025-10-02 12:37:14.167305424 +0000 UTC m=+0.028655413 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:37:14 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.283 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.284 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.1993606, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.284 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Paused (Lifecycle Event)
Oct 02 12:37:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae0f4a13ce66dd70ea553c8c5cf8b3214d2cc0beca828a9fd706491dc16fdc8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:14 compute-1 podman[268147]: 2025-10-02 12:37:14.299839704 +0000 UTC m=+0.161189683 container init a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:37:14 compute-1 podman[268147]: 2025-10-02 12:37:14.305292265 +0000 UTC m=+0.166642224 container start a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.309 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.318 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408634.2095737, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.318 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Resumed (Lifecycle Event)
Oct 02 12:37:14 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : New worker (268169) forked
Oct 02 12:37:14 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : Loading success.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.341 2 INFO nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 7.93 seconds to spawn the instance on the hypervisor.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.342 2 DEBUG nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.344 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.349 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.383 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.426 2 INFO nova.compute.manager [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 9.53 seconds to build instance.
Oct 02 12:37:14 compute-1 nova_compute[230518]: 2025-10-02 12:37:14.447 2 DEBUG oslo_concurrency.lockutils [None req-b35fea34-f265-4dd3-b358-399ba8e3cbcf f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:37:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:14.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:37:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:14.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:15 compute-1 ceph-mon[80926]: pgmap v1826: 305 pgs: 305 active+clean; 167 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Oct 02 12:37:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2103544154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 12:37:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 02 12:37:15 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.908 2 DEBUG nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.909 2 DEBUG oslo_concurrency.lockutils [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.910 2 DEBUG nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:15 compute-1 nova_compute[230518]: 2025-10-02 12:37:15.910 2 WARNING nova.compute.manager [req-c568c464-8030-4ac3-8606-9f2af8957d25 req-afa90196-68b0-4250-a34c-0b9d79410b51 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received unexpected event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with vm_state active and task_state None.
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.956603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635956668, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1498, "num_deletes": 261, "total_data_size": 3132025, "memory_usage": 3165520, "flush_reason": "Manual Compaction"}
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635978607, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 2055198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42413, "largest_seqno": 43906, "table_properties": {"data_size": 2048960, "index_size": 3377, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14033, "raw_average_key_size": 20, "raw_value_size": 2036037, "raw_average_value_size": 2904, "num_data_blocks": 149, "num_entries": 701, "num_filter_entries": 701, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408521, "oldest_key_time": 1759408521, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 22046 microseconds, and 5281 cpu microseconds.
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.978655) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 2055198 bytes OK
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.978677) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984559) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984592) EVENT_LOG_v1 {"time_micros": 1759408635984584, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.984613) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3124938, prev total WAL file size 3124938, number of live WAL files 2.
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.985690) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(2007KB)], [81(9043KB)]
Oct 02 12:37:15 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408635985726, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 11316232, "oldest_snapshot_seqno": -1}
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 6819 keys, 11169866 bytes, temperature: kUnknown
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636075888, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 11169866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11123434, "index_size": 28272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 175488, "raw_average_key_size": 25, "raw_value_size": 11000659, "raw_average_value_size": 1613, "num_data_blocks": 1128, "num_entries": 6819, "num_filter_entries": 6819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408635, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.076365) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11169866 bytes
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.078034) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.3 rd, 123.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 7358, records dropped: 539 output_compression: NoCompression
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.078059) EVENT_LOG_v1 {"time_micros": 1759408636078047, "job": 50, "event": "compaction_finished", "compaction_time_micros": 90316, "compaction_time_cpu_micros": 32015, "output_level": 6, "num_output_files": 1, "total_output_size": 11169866, "num_input_records": 7358, "num_output_records": 6819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636078634, "job": 50, "event": "table_file_deletion", "file_number": 83}
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408636080574, "job": 50, "event": "table_file_deletion", "file_number": 81}
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:15.985623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:16.080639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:16 compute-1 nova_compute[230518]: 2025-10-02 12:37:16.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:16.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:17 compute-1 ceph-mon[80926]: pgmap v1827: 305 pgs: 305 active+clean; 182 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 12:37:18 compute-1 nova_compute[230518]: 2025-10-02 12:37:18.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/650801237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4088164253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:18.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.224 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.224 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.225 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.225 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.226 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.228 2 INFO nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Terminating instance
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.230 2 DEBUG nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:37:19 compute-1 kernel: tap8de4019e-81 (unregistering): left promiscuous mode
Oct 02 12:37:19 compute-1 NetworkManager[44960]: <info>  [1759408639.2791] device (tap8de4019e-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:37:19 compute-1 ovn_controller[129257]: 2025-10-02T12:37:19Z|00415|binding|INFO|Releasing lport 8de4019e-8174-4b43-9510-73318fd6ff8d from this chassis (sb_readonly=0)
Oct 02 12:37:19 compute-1 ovn_controller[129257]: 2025-10-02T12:37:19Z|00416|binding|INFO|Setting lport 8de4019e-8174-4b43-9510-73318fd6ff8d down in Southbound
Oct 02 12:37:19 compute-1 ovn_controller[129257]: 2025-10-02T12:37:19Z|00417|binding|INFO|Removing iface tap8de4019e-81 ovn-installed in OVS
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.303 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:67:9e 10.100.0.8'], port_security=['fa:16:3e:ee:67:9e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1319f89a-ec57-41aa-b53e-07f2280a0d87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40e5ac91-4365-415e-86f3-b8d99d311f47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15fd01e26e294206846c155a766b0ad2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '010fb747-de4b-49ec-8a2a-286d666ddcca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5628d560-7d90-4922-8442-872cb81c1d7b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de4019e-8174-4b43-9510-73318fd6ff8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.305 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de4019e-8174-4b43-9510-73318fd6ff8d in datapath 40e5ac91-4365-415e-86f3-b8d99d311f47 unbound from our chassis
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.307 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40e5ac91-4365-415e-86f3-b8d99d311f47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dac38d8-4f56-41cb-80df-f290183c7b08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.308 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 namespace which is not needed anymore
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 ceph-mon[80926]: pgmap v1828: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 278 op/s
Oct 02 12:37:19 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct 02 12:37:19 compute-1 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000061.scope: Consumed 5.864s CPU time.
Oct 02 12:37:19 compute-1 systemd-machined[188247]: Machine qemu-48-instance-00000061 terminated.
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : haproxy version is 2.8.14-c23fe91
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [NOTICE]   (268167) : path to executable is /usr/sbin/haproxy
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : Exiting Master process...
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : Exiting Master process...
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [ALERT]    (268167) : Current worker (268169) exited with code 143 (Terminated)
Oct 02 12:37:19 compute-1 neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47[268163]: [WARNING]  (268167) : All workers exited. Exiting... (0)
Oct 02 12:37:19 compute-1 systemd[1]: libpod-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope: Deactivated successfully.
Oct 02 12:37:19 compute-1 podman[268202]: 2025-10-02 12:37:19.452898708 +0000 UTC m=+0.049296912 container died a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.472 2 INFO nova.virt.libvirt.driver [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Instance destroyed successfully.
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.472 2 DEBUG nova.objects.instance [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lazy-loading 'resources' on Instance uuid 1319f89a-ec57-41aa-b53e-07f2280a0d87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17-userdata-shm.mount: Deactivated successfully.
Oct 02 12:37:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-2ae0f4a13ce66dd70ea553c8c5cf8b3214d2cc0beca828a9fd706491dc16fdc8-merged.mount: Deactivated successfully.
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.489 2 DEBUG nova.virt.libvirt.vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1958706210',display_name='tempest-ServerMetadataNegativeTestJSON-server-1958706210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1958706210',id=97,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15fd01e26e294206846c155a766b0ad2',ramdisk_id='',reservation_id='r-aqm70j3y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-354053647',owner_user_name='tempest-ServerMetadataNegativeTestJSON-354053647-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:14Z,user_data=None,user_id='f1215de74baa4b7f8522ec44b7a4630b',uuid=1319f89a-ec57-41aa-b53e-07f2280a0d87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.490 2 DEBUG nova.network.os_vif_util [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converting VIF {"id": "8de4019e-8174-4b43-9510-73318fd6ff8d", "address": "fa:16:3e:ee:67:9e", "network": {"id": "40e5ac91-4365-415e-86f3-b8d99d311f47", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-558314394-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15fd01e26e294206846c155a766b0ad2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de4019e-81", "ovs_interfaceid": "8de4019e-8174-4b43-9510-73318fd6ff8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.490 2 DEBUG nova.network.os_vif_util [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.491 2 DEBUG os_vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8de4019e-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.500 2 INFO os_vif [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:67:9e,bridge_name='br-int',has_traffic_filtering=True,id=8de4019e-8174-4b43-9510-73318fd6ff8d,network=Network(40e5ac91-4365-415e-86f3-b8d99d311f47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de4019e-81')
Oct 02 12:37:19 compute-1 podman[268202]: 2025-10-02 12:37:19.501319631 +0000 UTC m=+0.097717835 container cleanup a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:19 compute-1 systemd[1]: libpod-conmon-a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17.scope: Deactivated successfully.
Oct 02 12:37:19 compute-1 podman[268254]: 2025-10-02 12:37:19.59281065 +0000 UTC m=+0.056256991 container remove a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.601 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f1032b42-5452-481f-bb7d-8f4c2495ff97]: (4, ('Thu Oct  2 12:37:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 (a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17)\na7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17\nThu Oct  2 12:37:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 (a7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17)\na7a77fa60738a437ee5973dad00d2edea047793a3c0e6dcdc54bde06ab0d5f17\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.603 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4a36f5cc-a241-4f24-b39a-bdb356e3afad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.604 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap40e5ac91-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 kernel: tap40e5ac91-40: left promiscuous mode
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.669 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e945d78-3cd6-42a2-8d7b-6f12e6f779bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.703 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[186676ad-ba2a-4e6a-ab5d-2351a3747bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.705 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[aad433e3-d534-438c-b465-db6012d81cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.733 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75f9189-36f5-49ff-85ec-dbef93021552]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649696, 'reachable_time': 25059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268277, 'error': None, 'target': 'ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.735 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-40e5ac91-4365-415e-86f3-b8d99d311f47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:37:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:19.736 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[030540ce-3af1-4ac8-85b5-d06292d135cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d40e5ac91\x2d4365\x2d415e\x2d86f3\x2db8d99d311f47.mount: Deactivated successfully.
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.980 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.981 2 DEBUG oslo_concurrency.lockutils [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.982 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.982 2 DEBUG nova.compute.manager [req-cf61bc61-c007-4f26-88e6-b5958c17a202 req-b9030aa2-46a0-4f51-9753-c769b6ad060e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-unplugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.990 2 INFO nova.virt.libvirt.driver [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deleting instance files /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87_del
Oct 02 12:37:19 compute-1 nova_compute[230518]: 2025-10-02 12:37:19.991 2 INFO nova.virt.libvirt.driver [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deletion of /var/lib/nova/instances/1319f89a-ec57-41aa-b53e-07f2280a0d87_del complete
Oct 02 12:37:20 compute-1 nova_compute[230518]: 2025-10-02 12:37:20.061 2 INFO nova.compute.manager [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 02 12:37:20 compute-1 nova_compute[230518]: 2025-10-02 12:37:20.061 2 DEBUG oslo.service.loopingcall [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:37:20 compute-1 nova_compute[230518]: 2025-10-02 12:37:20.062 2 DEBUG nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:37:20 compute-1 nova_compute[230518]: 2025-10-02 12:37:20.062 2 DEBUG nova.network.neutron [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:37:20 compute-1 ceph-mon[80926]: pgmap v1829: 305 pgs: 305 active+clean; 219 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 712 KiB/s rd, 4.0 MiB/s wr, 248 op/s
Oct 02 12:37:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:20.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.632 2 DEBUG nova.network.neutron [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.670 2 INFO nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Took 1.61 seconds to deallocate network for instance.
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.737 2 DEBUG nova.compute.manager [req-59c4f70a-5f71-4f1c-989c-d032a4d4de25 req-8924b32f-1ba4-4fbc-8cca-842993e5f02c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-deleted-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.741 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.742 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:21 compute-1 nova_compute[230518]: 2025-10-02 12:37:21.824 2 DEBUG oslo_concurrency.processutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.138 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.139 2 DEBUG oslo_concurrency.lockutils [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.139 2 DEBUG nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] No waiting events found dispatching network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.139 2 WARNING nova.compute.manager [req-3fd0e222-fde7-4840-9f53-d0c030d02989 req-016388e6-d1f6-4516-88b9-d372d67201ba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Received unexpected event network-vif-plugged-8de4019e-8174-4b43-9510-73318fd6ff8d for instance with vm_state deleted and task_state None.
Oct 02 12:37:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2159854273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.313 2 DEBUG oslo_concurrency.processutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.320 2 DEBUG nova.compute.provider_tree [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.339 2 DEBUG nova.scheduler.client.report [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.372 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.428 2 INFO nova.scheduler.client.report [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Deleted allocations for instance 1319f89a-ec57-41aa-b53e-07f2280a0d87
Oct 02 12:37:22 compute-1 nova_compute[230518]: 2025-10-02 12:37:22.555 2 DEBUG oslo_concurrency.lockutils [None req-c08ed109-8175-4109-9d70-c243a4f41fcc f1215de74baa4b7f8522ec44b7a4630b 15fd01e26e294206846c155a766b0ad2 - - default default] Lock "1319f89a-ec57-41aa-b53e-07f2280a0d87" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:22 compute-1 ceph-mon[80926]: pgmap v1830: 305 pgs: 305 active+clean; 215 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 381 op/s
Oct 02 12:37:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2159854273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:23 compute-1 nova_compute[230518]: 2025-10-02 12:37:23.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1206218469' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/895846916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:24 compute-1 nova_compute[230518]: 2025-10-02 12:37:24.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:25 compute-1 ceph-mon[80926]: pgmap v1831: 305 pgs: 305 active+clean; 213 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 336 op/s
Oct 02 12:37:25 compute-1 podman[268303]: 2025-10-02 12:37:25.839451042 +0000 UTC m=+0.069285651 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:37:25 compute-1 podman[268302]: 2025-10-02 12:37:25.883246709 +0000 UTC m=+0.116567398 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct 02 12:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:25.933 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e252 e252: 3 total, 3 up, 3 in
Oct 02 12:37:27 compute-1 ceph-mon[80926]: pgmap v1832: 305 pgs: 305 active+clean; 213 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 345 op/s
Oct 02 12:37:28 compute-1 nova_compute[230518]: 2025-10-02 12:37:28.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:28 compute-1 ceph-mon[80926]: osdmap e252: 3 total, 3 up, 3 in
Oct 02 12:37:28 compute-1 ceph-mon[80926]: pgmap v1834: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:28.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:29 compute-1 nova_compute[230518]: 2025-10-02 12:37:29.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:30 compute-1 nova_compute[230518]: 2025-10-02 12:37:30.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e253 e253: 3 total, 3 up, 3 in
Oct 02 12:37:30 compute-1 ceph-mon[80926]: pgmap v1835: 305 pgs: 305 active+clean; 214 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 260 op/s
Oct 02 12:37:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:31.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:31 compute-1 ceph-mon[80926]: osdmap e253: 3 total, 3 up, 3 in
Oct 02 12:37:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 e254: 3 total, 3 up, 3 in
Oct 02 12:37:32 compute-1 ceph-mon[80926]: pgmap v1837: 305 pgs: 305 active+clean; 230 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 656 KiB/s wr, 236 op/s
Oct 02 12:37:32 compute-1 ceph-mon[80926]: osdmap e254: 3 total, 3 up, 3 in
Oct 02 12:37:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:33 compute-1 nova_compute[230518]: 2025-10-02 12:37:33.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/577101976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:34 compute-1 nova_compute[230518]: 2025-10-02 12:37:34.468 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408639.4644618, 1319f89a-ec57-41aa-b53e-07f2280a0d87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:34 compute-1 nova_compute[230518]: 2025-10-02 12:37:34.469 2 INFO nova.compute.manager [-] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] VM Stopped (Lifecycle Event)
Oct 02 12:37:34 compute-1 nova_compute[230518]: 2025-10-02 12:37:34.494 2 DEBUG nova.compute.manager [None req-77f28c72-c2c7-4c12-bf36-1910976c5e75 - - - - - -] [instance: 1319f89a-ec57-41aa-b53e-07f2280a0d87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:34 compute-1 nova_compute[230518]: 2025-10-02 12:37:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:35 compute-1 ceph-mon[80926]: pgmap v1839: 305 pgs: 305 active+clean; 223 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 259 op/s
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.066041) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656066135, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 493, "num_deletes": 251, "total_data_size": 634921, "memory_usage": 645576, "flush_reason": "Manual Compaction"}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656073449, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 418588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43911, "largest_seqno": 44399, "table_properties": {"data_size": 415917, "index_size": 707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 410545, "raw_average_value_size": 1200, "num_data_blocks": 31, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408636, "oldest_key_time": 1759408636, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 7457 microseconds, and 2624 cpu microseconds.
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.073506) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 418588 bytes OK
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.073550) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076073) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076133) EVENT_LOG_v1 {"time_micros": 1759408656076121, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 631951, prev total WAL file size 631951, number of live WAL files 2.
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(408KB)], [84(10MB)]
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656076803, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11588454, "oldest_snapshot_seqno": -1}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 6647 keys, 9628382 bytes, temperature: kUnknown
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656132111, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9628382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584643, "index_size": 26027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 172641, "raw_average_key_size": 25, "raw_value_size": 9466374, "raw_average_value_size": 1424, "num_data_blocks": 1026, "num_entries": 6647, "num_filter_entries": 6647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408656, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.132630) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9628382 bytes
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.134364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.0 rd, 173.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(50.7) write-amplify(23.0) OK, records in: 7161, records dropped: 514 output_compression: NoCompression
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.134412) EVENT_LOG_v1 {"time_micros": 1759408656134391, "job": 52, "event": "compaction_finished", "compaction_time_micros": 55442, "compaction_time_cpu_micros": 22692, "output_level": 6, "num_output_files": 1, "total_output_size": 9628382, "num_input_records": 7161, "num_output_records": 6647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656134870, "job": 52, "event": "table_file_deletion", "file_number": 86}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408656139758, "job": 52, "event": "table_file_deletion", "file_number": 84}
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.076695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:37:36.139916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:37:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:37.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:37 compute-1 ceph-mon[80926]: pgmap v1840: 305 pgs: 305 active+clean; 207 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 261 op/s
Oct 02 12:37:37 compute-1 podman[268349]: 2025-10-02 12:37:37.866528158 +0000 UTC m=+0.096157346 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:37:37 compute-1 podman[268350]: 2025-10-02 12:37:37.873862389 +0000 UTC m=+0.097303352 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:37:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:38.066 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:38 compute-1 nova_compute[230518]: 2025-10-02 12:37:38.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:38.067 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:37:38 compute-1 nova_compute[230518]: 2025-10-02 12:37:38.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:38 compute-1 ceph-mon[80926]: pgmap v1841: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.8 MiB/s wr, 324 op/s
Oct 02 12:37:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:39.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:39.069 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:39 compute-1 sudo[268390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:39 compute-1 sudo[268390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-1 sudo[268390]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-1 sudo[268415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:39 compute-1 sudo[268415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-1 sudo[268415]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:39 compute-1 sudo[268440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:39 compute-1 sudo[268440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-1 sudo[268440]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:39 compute-1 nova_compute[230518]: 2025-10-02 12:37:39.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:39 compute-1 sudo[268465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:37:39 compute-1 sudo[268465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:39 compute-1 sudo[268465]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-1 sudo[268511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:40 compute-1 sudo[268511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-1 sudo[268511]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-1 sudo[268536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:37:40 compute-1 sudo[268536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-1 sudo[268536]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-1 sudo[268561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:40 compute-1 sudo[268561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-1 sudo[268561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:40 compute-1 sudo[268586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:37:40 compute-1 sudo[268586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:40 compute-1 ceph-mon[80926]: pgmap v1842: 305 pgs: 305 active+clean; 211 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.7 MiB/s wr, 217 op/s
Oct 02 12:37:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:40.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 e255: 3 total, 3 up, 3 in
Oct 02 12:37:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:41.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:41 compute-1 sudo[268586]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:42 compute-1 ceph-mon[80926]: osdmap e255: 3 total, 3 up, 3 in
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:37:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:37:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:42.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:43.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:43 compute-1 nova_compute[230518]: 2025-10-02 12:37:43.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:43 compute-1 ceph-mon[80926]: pgmap v1844: 305 pgs: 305 active+clean; 229 MiB data, 844 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.4 MiB/s wr, 215 op/s
Oct 02 12:37:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2454314841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/673595144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3144403304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.360 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.361 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.381 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.453 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.453 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.460 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.460 2 INFO nova.compute.claims [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:37:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:44 compute-1 nova_compute[230518]: 2025-10-02 12:37:44.552 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:44 compute-1 ceph-mon[80926]: pgmap v1845: 305 pgs: 305 active+clean; 239 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 526 KiB/s rd, 5.0 MiB/s wr, 160 op/s
Oct 02 12:37:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:44.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1181985338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:45.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.023 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.031 2 DEBUG nova.compute.provider_tree [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.050 2 DEBUG nova.scheduler.client.report [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.090 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.091 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.139 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.139 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.158 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.181 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.304 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.306 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.306 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating image(s)
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.344 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.377 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.407 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.412 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.513 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.514 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.515 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.516 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.545 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.548 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1979f95-4814-4098-8baa-6c4497f20612_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:45 compute-1 nova_compute[230518]: 2025-10-02 12:37:45.586 2 DEBUG nova.policy [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e28e4c343f46426788534c9108c8a7a8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:37:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1181985338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:46 compute-1 nova_compute[230518]: 2025-10-02 12:37:46.727 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1979f95-4814-4098-8baa-6c4497f20612_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:46 compute-1 nova_compute[230518]: 2025-10-02 12:37:46.777 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Successfully created port: 542026c4-106b-4023-8944-0947d2ba1fb9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:37:46 compute-1 nova_compute[230518]: 2025-10-02 12:37:46.838 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] resizing rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:37:46 compute-1 nova_compute[230518]: 2025-10-02 12:37:46.984 2 DEBUG nova.objects.instance [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'migration_context' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:46.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.009 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.010 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Ensure instance console log exists: /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:37:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.034 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.035 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.036 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:47 compute-1 ceph-mon[80926]: pgmap v1846: 305 pgs: 305 active+clean; 244 MiB data, 854 MiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.940 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Successfully updated port: 542026c4-106b-4023-8944-0947d2ba1fb9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.959 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.960 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquired lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:47 compute-1 nova_compute[230518]: 2025-10-02 12:37:47.960 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:37:48 compute-1 nova_compute[230518]: 2025-10-02 12:37:48.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:48 compute-1 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG nova.compute.manager [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-changed-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:48 compute-1 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG nova.compute.manager [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Refreshing instance network info cache due to event network-changed-542026c4-106b-4023-8944-0947d2ba1fb9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:37:48 compute-1 nova_compute[230518]: 2025-10-02 12:37:48.120 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:37:48 compute-1 nova_compute[230518]: 2025-10-02 12:37:48.222 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:37:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:49.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.073 2 DEBUG nova.network.neutron [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.098 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Releasing lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.098 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance network_info: |[{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.099 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.099 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Refreshing network info cache for port 542026c4-106b-4023-8944-0947d2ba1fb9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.104 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start _get_guest_xml network_info=[{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.112 2 WARNING nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.123 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.124 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.132 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.133 2 DEBUG nova.virt.libvirt.host [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.134 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.134 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.135 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.136 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.137 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.137 2 DEBUG nova.virt.hardware [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.140 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:49 compute-1 ceph-mon[80926]: pgmap v1847: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/811997453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.616 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.659 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:49 compute-1 nova_compute[230518]: 2025-10-02 12:37:49.666 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:37:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3644381549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.174 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.176 2 DEBUG nova.virt.libvirt.vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=TagList,task_state='spawning',
terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:45Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.177 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.178 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.179 2 DEBUG nova.objects.instance [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.196 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <uuid>a1979f95-4814-4098-8baa-6c4497f20612</uuid>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <name>instance-00000064</name>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerTagsTestJSON-server-2144240005</nova:name>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:37:49</nova:creationTime>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:user uuid="e28e4c343f46426788534c9108c8a7a8">tempest-ServerTagsTestJSON-416799390-project-member</nova:user>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:project uuid="4179ec2dfcf6411faefd3d7d7e6356d0">tempest-ServerTagsTestJSON-416799390</nova:project>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <nova:port uuid="542026c4-106b-4023-8944-0947d2ba1fb9">
Oct 02 12:37:50 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <system>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="serial">a1979f95-4814-4098-8baa-6c4497f20612</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="uuid">a1979f95-4814-4098-8baa-6c4497f20612</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </system>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <os>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </os>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <features>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </features>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1979f95-4814-4098-8baa-6c4497f20612_disk">
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </source>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1979f95-4814-4098-8baa-6c4497f20612_disk.config">
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </source>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:37:50 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:1a:32:6b"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <target dev="tap542026c4-10"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/console.log" append="off"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <video>
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </video>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:37:50 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:37:50 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:37:50 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:37:50 compute-1 nova_compute[230518]: </domain>
Oct 02 12:37:50 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.197 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Preparing to wait for external event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.198 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.virt.libvirt.vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:37:45Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.199 2 DEBUG nova.network.os_vif_util [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.200 2 DEBUG os_vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.201 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap542026c4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap542026c4-10, col_values=(('external_ids', {'iface-id': '542026c4-106b-4023-8944-0947d2ba1fb9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:32:6b', 'vm-uuid': 'a1979f95-4814-4098-8baa-6c4497f20612'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:37:50 compute-1 NetworkManager[44960]: <info>  [1759408670.2073] manager: (tap542026c4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.213 2 INFO os_vif [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10')
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.267 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updated VIF entry in instance network info cache for port 542026c4-106b-4023-8944-0947d2ba1fb9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.268 2 DEBUG nova.network.neutron [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [{"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.280 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.281 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.281 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] No VIF found with MAC fa:16:3e:1a:32:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.282 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Using config drive
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.314 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.322 2 DEBUG oslo_concurrency.lockutils [req-c04e70a5-ffd3-4b26-8c55-57e9ea32b848 req-1cf4ae43-76f8-44f4-ba5c-1008839d498d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1979f95-4814-4098-8baa-6c4497f20612" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:37:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/811997453' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3644381549' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.597 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Creating config drive at /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.602 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8goaylod execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.764 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8goaylod" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.806 2 DEBUG nova.storage.rbd_utils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] rbd image a1979f95-4814-4098-8baa-6c4497f20612_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:37:50 compute-1 nova_compute[230518]: 2025-10-02 12:37:50.812 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config a1979f95-4814-4098-8baa-6c4497f20612_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.090 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.090 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:51 compute-1 sudo[268953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:37:51 compute-1 sudo[268953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:51 compute-1 sudo[268953]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.134 2 DEBUG oslo_concurrency.processutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config a1979f95-4814-4098-8baa-6c4497f20612_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.136 2 INFO nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deleting local config drive /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612/disk.config because it was imported into RBD.
Oct 02 12:37:51 compute-1 sudo[268979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:37:51 compute-1 kernel: tap542026c4-10: entered promiscuous mode
Oct 02 12:37:51 compute-1 sudo[268979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.2346] manager: (tap542026c4-10): new Tun device (/org/freedesktop/NetworkManager/Devices/198)
Oct 02 12:37:51 compute-1 sudo[268979]: pam_unix(sudo:session): session closed for user root
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 ovn_controller[129257]: 2025-10-02T12:37:51Z|00418|binding|INFO|Claiming lport 542026c4-106b-4023-8944-0947d2ba1fb9 for this chassis.
Oct 02 12:37:51 compute-1 ovn_controller[129257]: 2025-10-02T12:37:51Z|00419|binding|INFO|542026c4-106b-4023-8944-0947d2ba1fb9: Claiming fa:16:3e:1a:32:6b 10.100.0.14
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.284 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:32:6b 10.100.0.14'], port_security=['fa:16:3e:1a:32:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a1979f95-4814-4098-8baa-6c4497f20612', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bba92fb6-f7b6-4607-89fe-9a7eb0e98f6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72da5d14-3a62-46d0-8ef9-a36e497c62b9, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=542026c4-106b-4023-8944-0947d2ba1fb9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.286 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 542026c4-106b-4023-8944-0947d2ba1fb9 in datapath 34a16246-d3b9-42cb-92f1-19ae3ccb345b bound to our chassis
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.288 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 34a16246-d3b9-42cb-92f1-19ae3ccb345b
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a925734c-8b4f-4900-a282-c792751dea2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.306 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap34a16246-d1 in ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.309 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap34a16246-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2669279-e1f2-42cb-bad5-80db64552503]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2904b415-3aa5-4e5c-a850-3bf1a5b5044d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 systemd-udevd[269036]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:37:51 compute-1 systemd-machined[188247]: New machine qemu-49-instance-00000064.
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.329 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3c5d58-41db-4fc0-b5ab-a78c1e3e0d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.3335] device (tap542026c4-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.3346] device (tap542026c4-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:37:51 compute-1 systemd[1]: Started Virtual Machine qemu-49-instance-00000064.
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 ovn_controller[129257]: 2025-10-02T12:37:51Z|00420|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 ovn-installed in OVS
Oct 02 12:37:51 compute-1 ovn_controller[129257]: 2025-10-02T12:37:51Z|00421|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 up in Southbound
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.356 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[676d7b49-7557-4714-bcee-d8cef854d51f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ceph-mon[80926]: pgmap v1848: 305 pgs: 305 active+clean; 257 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 402 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Oct 02 12:37:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.404 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[78e83d6a-a4f4-438f-84ae-94892dc1f2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.415 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[096d8dcd-3d05-429e-bbfc-09d7cd83bcfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.4178] manager: (tap34a16246-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/199)
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.463 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4a41862b-aa67-493b-bd00-b6371cd7ca0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.467 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f70ee5-a02e-48c3-bea2-b559059f66a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.4943] device (tap34a16246-d0): carrier: link connected
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.500 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b380c-0f4e-4e02-bb12-7b0e3f6166f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9400e5ff-4fdd-47b7-b4c9-1991a50d02b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34a16246-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:ec:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653505, 'reachable_time': 38834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269069, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.537 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70123a10-a665-46b8-a75e-01622691071f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:ec21'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653505, 'tstamp': 653505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269070, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.556 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb3684f-a6a7-408d-b27e-8d859cfdbfe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap34a16246-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:ec:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653505, 'reachable_time': 38834, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269071, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.592 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79e966f9-f71e-47d4-bf51-75dde1ea11bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.634 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.671 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ecd28d-16d4-450d-a256-34b382b47a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a16246-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34a16246-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:51 compute-1 kernel: tap34a16246-d0: entered promiscuous mode
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 NetworkManager[44960]: <info>  [1759408671.6791] manager: (tap34a16246-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.685 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap34a16246-d0, col_values=(('external_ids', {'iface-id': '6dd3f30e-6e40-4db9-bff9-cbf55578f3e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:51 compute-1 ovn_controller[129257]: 2025-10-02T12:37:51Z|00422|binding|INFO|Releasing lport 6dd3f30e-6e40-4db9-bff9-cbf55578f3e5 from this chassis (sb_readonly=0)
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.689 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.690 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[629de4ad-2d0d-4693-9eab-a326bfbce024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.691 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-34a16246-d3b9-42cb-92f1-19ae3ccb345b
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/34a16246-d3b9-42cb-92f1-19ae3ccb345b.pid.haproxy
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 34a16246-d3b9-42cb-92f1-19ae3ccb345b
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:37:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:51.693 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'env', 'PROCESS_TAG=haproxy-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/34a16246-d3b9-42cb-92f1-19ae3ccb345b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.707 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.707 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.880 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4457MB free_disk=20.888534545898438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.978 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1979f95-4814-4098-8baa-6c4497f20612 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.979 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:37:51 compute-1 nova_compute[230518]: 2025-10-02 12:37:51.979 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.041 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:37:52 compute-1 podman[269105]: 2025-10-02 12:37:52.085367635 +0000 UTC m=+0.029894011 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:37:52 compute-1 podman[269105]: 2025-10-02 12:37:52.227411404 +0000 UTC m=+0.171937760 container create 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:37:52 compute-1 systemd[1]: Started libpod-conmon-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope.
Oct 02 12:37:52 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:37:52 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612b6bf1b23791fefeec94c60d80992f1e0c869f67aefa0e1f7a3f215a5e1cce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:37:52 compute-1 podman[269105]: 2025-10-02 12:37:52.337354043 +0000 UTC m=+0.281880399 container init 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 12:37:52 compute-1 podman[269105]: 2025-10-02 12:37:52.344908111 +0000 UTC m=+0.289434467 container start 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 12:37:52 compute-1 ceph-mon[80926]: pgmap v1849: 305 pgs: 305 active+clean; 293 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 2.6 MiB/s wr, 128 op/s
Oct 02 12:37:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1024383535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:52 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : New worker (269188) forked
Oct 02 12:37:52 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : Loading success.
Oct 02 12:37:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:37:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3572908026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.536 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.543 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.574 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.598 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.599 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.841 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408672.8409817, a1979f95-4814-4098-8baa-6c4497f20612 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.842 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Started (Lifecycle Event)
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.870 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408672.8426058, a1979f95-4814-4098-8baa-6c4497f20612 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.870 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Paused (Lifecycle Event)
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.890 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.895 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:52 compute-1 nova_compute[230518]: 2025-10-02 12:37:52.913 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:37:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:37:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:53.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:53 compute-1 nova_compute[230518]: 2025-10-02 12:37:53.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3572908026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:53 compute-1 nova_compute[230518]: 2025-10-02 12:37:53.594 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:54 compute-1 ceph-mon[80926]: pgmap v1850: 305 pgs: 305 active+clean; 293 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 178 op/s
Oct 02 12:37:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.582 2 DEBUG nova.compute.manager [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG oslo_concurrency.lockutils [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.583 2 DEBUG nova.compute.manager [req-a19d2a0d-da94-4497-b927-2909bd3ad8f7 req-34ccf94c-6581-4743-8963-0fed10d91934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Processing event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.584 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.588 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408674.5881617, a1979f95-4814-4098-8baa-6c4497f20612 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.588 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Resumed (Lifecycle Event)
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.590 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.593 2 INFO nova.virt.libvirt.driver [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance spawned successfully.
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.593 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.613 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.618 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.623 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.623 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.624 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.625 2 DEBUG nova.virt.libvirt.driver [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.655 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.693 2 INFO nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 9.39 seconds to spawn the instance on the hypervisor.
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.694 2 DEBUG nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.842 2 INFO nova.compute.manager [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 10.41 seconds to build instance.
Oct 02 12:37:54 compute-1 nova_compute[230518]: 2025-10-02 12:37:54.874 2 DEBUG oslo_concurrency.lockutils [None req-5e077226-067e-4f38-b046-f53b08fc7308 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:37:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:55.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:37:55 compute-1 nova_compute[230518]: 2025-10-02 12:37:55.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1993557897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3239222305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1142469038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:56 compute-1 ceph-mon[80926]: pgmap v1851: 305 pgs: 305 active+clean; 293 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.9 MiB/s wr, 208 op/s
Oct 02 12:37:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1475252102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.680 2 DEBUG nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.681 2 DEBUG oslo_concurrency.lockutils [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.682 2 DEBUG nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:56 compute-1 nova_compute[230518]: 2025-10-02 12:37:56.683 2 WARNING nova.compute.manager [req-16aed235-c73d-4646-9f09-0e4bbc7f91c8 req-7620b91f-5182-49a6-a4a4-189f5cab0cad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received unexpected event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with vm_state active and task_state None.
Oct 02 12:37:56 compute-1 podman[269200]: 2025-10-02 12:37:56.821380808 +0000 UTC m=+0.067560116 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:37:56 compute-1 podman[269199]: 2025-10-02 12:37:56.8951619 +0000 UTC m=+0.139721957 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:37:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:57.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:58 compute-1 ceph-mon[80926]: pgmap v1852: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 246 op/s
Oct 02 12:37:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/825430397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1338803148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.745 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.746 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.747 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.748 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.748 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.749 2 INFO nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Terminating instance
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.751 2 DEBUG nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:37:58 compute-1 kernel: tap542026c4-10 (unregistering): left promiscuous mode
Oct 02 12:37:58 compute-1 NetworkManager[44960]: <info>  [1759408678.9193] device (tap542026c4-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:37:58 compute-1 ovn_controller[129257]: 2025-10-02T12:37:58Z|00423|binding|INFO|Releasing lport 542026c4-106b-4023-8944-0947d2ba1fb9 from this chassis (sb_readonly=0)
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:58 compute-1 ovn_controller[129257]: 2025-10-02T12:37:58Z|00424|binding|INFO|Setting lport 542026c4-106b-4023-8944-0947d2ba1fb9 down in Southbound
Oct 02 12:37:58 compute-1 ovn_controller[129257]: 2025-10-02T12:37:58Z|00425|binding|INFO|Removing iface tap542026c4-10 ovn-installed in OVS
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.941 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:32:6b 10.100.0.14'], port_security=['fa:16:3e:1a:32:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a1979f95-4814-4098-8baa-6c4497f20612', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4179ec2dfcf6411faefd3d7d7e6356d0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bba92fb6-f7b6-4607-89fe-9a7eb0e98f6a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72da5d14-3a62-46d0-8ef9-a36e497c62b9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=542026c4-106b-4023-8944-0947d2ba1fb9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:37:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.944 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 542026c4-106b-4023-8944-0947d2ba1fb9 in datapath 34a16246-d3b9-42cb-92f1-19ae3ccb345b unbound from our chassis
Oct 02 12:37:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.947 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34a16246-d3b9-42cb-92f1-19ae3ccb345b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:37:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.949 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5556d2a3-004f-4fbc-92a2-f916c4f14551]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:58.950 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b namespace which is not needed anymore
Oct 02 12:37:58 compute-1 nova_compute[230518]: 2025-10-02 12:37:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct 02 12:37:59 compute-1 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000064.scope: Consumed 5.754s CPU time.
Oct 02 12:37:59 compute-1 systemd-machined[188247]: Machine qemu-49-instance-00000064 terminated.
Oct 02 12:37:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:37:59.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:37:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:37:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:37:59.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : haproxy version is 2.8.14-c23fe91
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [NOTICE]   (269186) : path to executable is /usr/sbin/haproxy
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : Exiting Master process...
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : Exiting Master process...
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [ALERT]    (269186) : Current worker (269188) exited with code 143 (Terminated)
Oct 02 12:37:59 compute-1 neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b[269180]: [WARNING]  (269186) : All workers exited. Exiting... (0)
Oct 02 12:37:59 compute-1 systemd[1]: libpod-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope: Deactivated successfully.
Oct 02 12:37:59 compute-1 podman[269265]: 2025-10-02 12:37:59.155062105 +0000 UTC m=+0.056383785 container died 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:37:59 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4-userdata-shm.mount: Deactivated successfully.
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.197 2 INFO nova.virt.libvirt.driver [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Instance destroyed successfully.
Oct 02 12:37:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-612b6bf1b23791fefeec94c60d80992f1e0c869f67aefa0e1f7a3f215a5e1cce-merged.mount: Deactivated successfully.
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.198 2 DEBUG nova.objects.instance [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lazy-loading 'resources' on Instance uuid a1979f95-4814-4098-8baa-6c4497f20612 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:37:59 compute-1 podman[269265]: 2025-10-02 12:37:59.211572183 +0000 UTC m=+0.112893873 container cleanup 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.229 2 DEBUG nova.virt.libvirt.vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:37:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-2144240005',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servertagstestjson-server-2144240005',id=100,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:37:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4179ec2dfcf6411faefd3d7d7e6356d0',ramdisk_id='',reservation_id='r-nntdzlwu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest
-ServerTagsTestJSON-416799390',owner_user_name='tempest-ServerTagsTestJSON-416799390-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:37:54Z,user_data=None,user_id='e28e4c343f46426788534c9108c8a7a8',uuid=a1979f95-4814-4098-8baa-6c4497f20612,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.230 2 DEBUG nova.network.os_vif_util [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converting VIF {"id": "542026c4-106b-4023-8944-0947d2ba1fb9", "address": "fa:16:3e:1a:32:6b", "network": {"id": "34a16246-d3b9-42cb-92f1-19ae3ccb345b", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-923062869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4179ec2dfcf6411faefd3d7d7e6356d0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap542026c4-10", "ovs_interfaceid": "542026c4-106b-4023-8944-0947d2ba1fb9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.232 2 DEBUG nova.network.os_vif_util [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.232 2 DEBUG os_vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap542026c4-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 systemd[1]: libpod-conmon-88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4.scope: Deactivated successfully.
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.242 2 INFO os_vif [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:32:6b,bridge_name='br-int',has_traffic_filtering=True,id=542026c4-106b-4023-8944-0947d2ba1fb9,network=Network(34a16246-d3b9-42cb-92f1-19ae3ccb345b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap542026c4-10')
Oct 02 12:37:59 compute-1 podman[269304]: 2025-10-02 12:37:59.301569365 +0000 UTC m=+0.057164330 container remove 88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c87358-0693-4c6a-92cd-16681e89ee49]: (4, ('Thu Oct  2 12:37:59 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b (88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4)\n88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4\nThu Oct  2 12:37:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b (88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4)\n88ae49fe398ff68ed81b1affe832e3bb3947989c088a27a4fdbf56a114f19bd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.312 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea0cbd6-8bbc-4b5a-8a5a-0e3796d9f479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.314 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a16246-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 kernel: tap34a16246-d0: left promiscuous mode
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ce513b-37f6-428e-8699-167fbd65c26b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[108b7376-c0b7-4e40-a956-7f821b84c981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.375 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[064fe824-3f8e-4c97-972a-c817aa5706b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.394 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5df94513-9a6f-42fd-93d5-508215bb4f9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653495, 'reachable_time': 16772, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269340, 'error': None, 'target': 'ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.396 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.398 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.398 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.399 2 DEBUG oslo_concurrency.lockutils [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.400 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:37:59 compute-1 nova_compute[230518]: 2025-10-02 12:37:59.400 2 DEBUG nova.compute.manager [req-2fbeb0b2-a8b9-48fa-b465-c233eca176f7 req-8f3c3fcf-985b-485c-b1f1-1ba22d5ae52e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-unplugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.400 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-34a16246-d3b9-42cb-92f1-19ae3ccb345b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:37:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:37:59.401 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9b3f31-7b36-4865-86ca-ee164c2ec3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:37:59 compute-1 systemd[1]: run-netns-ovnmeta\x2d34a16246\x2dd3b9\x2d42cb\x2d92f1\x2d19ae3ccb345b.mount: Deactivated successfully.
Oct 02 12:37:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.373 2 INFO nova.virt.libvirt.driver [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deleting instance files /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612_del
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.375 2 INFO nova.virt.libvirt.driver [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deletion of /var/lib/nova/instances/a1979f95-4814-4098-8baa-6c4497f20612_del complete
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.434 2 INFO nova.compute.manager [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 1.68 seconds to destroy the instance on the hypervisor.
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.435 2 DEBUG oslo.service.loopingcall [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.435 2 DEBUG nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:38:00 compute-1 nova_compute[230518]: 2025-10-02 12:38:00.436 2 DEBUG nova.network.neutron [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:38:00 compute-1 ceph-mon[80926]: pgmap v1853: 305 pgs: 305 active+clean; 341 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 225 op/s
Oct 02 12:38:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:01.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:01.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.488 2 DEBUG nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.489 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1979f95-4814-4098-8baa-6c4497f20612-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.489 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.490 2 DEBUG oslo_concurrency.lockutils [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.490 2 DEBUG nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] No waiting events found dispatching network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:38:01 compute-1 nova_compute[230518]: 2025-10-02 12:38:01.491 2 WARNING nova.compute.manager [req-bf10eae7-9d4c-474c-baea-e518879e19a4 req-a4e53f8d-1798-49ec-b94c-21f7d035c972 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received unexpected event network-vif-plugged-542026c4-106b-4023-8944-0947d2ba1fb9 for instance with vm_state active and task_state deleting.
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.284 2 DEBUG nova.network.neutron [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.305 2 INFO nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Took 1.87 seconds to deallocate network for instance.
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.359 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.360 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.413 2 DEBUG oslo_concurrency.processutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:02 compute-1 ceph-mon[80926]: pgmap v1854: 305 pgs: 305 active+clean; 349 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.6 MiB/s wr, 334 op/s
Oct 02 12:38:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3508779376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1351175108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3009324824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2909892755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.923 2 DEBUG oslo_concurrency.processutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.931 2 DEBUG nova.compute.provider_tree [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.952 2 DEBUG nova.scheduler.client.report [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:02 compute-1 nova_compute[230518]: 2025-10-02 12:38:02.988 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:03.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:03 compute-1 nova_compute[230518]: 2025-10-02 12:38:03.032 2 INFO nova.scheduler.client.report [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Deleted allocations for instance a1979f95-4814-4098-8baa-6c4497f20612
Oct 02 12:38:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:03 compute-1 nova_compute[230518]: 2025-10-02 12:38:03.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:03 compute-1 nova_compute[230518]: 2025-10-02 12:38:03.125 2 DEBUG oslo_concurrency.lockutils [None req-e824ceb1-6a66-4233-933e-6cc410111986 e28e4c343f46426788534c9108c8a7a8 4179ec2dfcf6411faefd3d7d7e6356d0 - - default default] Lock "a1979f95-4814-4098-8baa-6c4497f20612" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:03 compute-1 nova_compute[230518]: 2025-10-02 12:38:03.723 2 DEBUG nova.compute.manager [req-50085794-0949-449b-9333-da4e745de1f6 req-b46e4fcd-0804-4eda-813b-a35176a8fe23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Received event network-vif-deleted-542026c4-106b-4023-8944-0947d2ba1fb9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:38:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2909892755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2395417240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:04 compute-1 nova_compute[230518]: 2025-10-02 12:38:04.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:04 compute-1 ceph-mon[80926]: pgmap v1855: 305 pgs: 305 active+clean; 339 MiB data, 914 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.6 MiB/s wr, 292 op/s
Oct 02 12:38:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:05.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:05.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:38:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2560162661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:38:06 compute-1 nova_compute[230518]: 2025-10-02 12:38:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:06 compute-1 nova_compute[230518]: 2025-10-02 12:38:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:38:06 compute-1 nova_compute[230518]: 2025-10-02 12:38:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:38:06 compute-1 nova_compute[230518]: 2025-10-02 12:38:06.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:38:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:38:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:38:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:07.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:07 compute-1 ceph-mon[80926]: pgmap v1856: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 259 op/s
Oct 02 12:38:07 compute-1 nova_compute[230518]: 2025-10-02 12:38:07.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:08 compute-1 nova_compute[230518]: 2025-10-02 12:38:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:08 compute-1 podman[269364]: 2025-10-02 12:38:08.809341218 +0000 UTC m=+0.065425889 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:38:08 compute-1 podman[269365]: 2025-10-02 12:38:08.81546287 +0000 UTC m=+0.069698114 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:38:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:09.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:09.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:09 compute-1 ceph-mon[80926]: pgmap v1857: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Oct 02 12:38:09 compute-1 nova_compute[230518]: 2025-10-02 12:38:09.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e256 e256: 3 total, 3 up, 3 in
Oct 02 12:38:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:11.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:11 compute-1 ceph-mon[80926]: pgmap v1858: 305 pgs: 305 active+clean; 339 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 185 op/s
Oct 02 12:38:11 compute-1 ceph-mon[80926]: osdmap e256: 3 total, 3 up, 3 in
Oct 02 12:38:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1494670598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e257 e257: 3 total, 3 up, 3 in
Oct 02 12:38:12 compute-1 ceph-mon[80926]: pgmap v1860: 305 pgs: 305 active+clean; 353 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 818 KiB/s wr, 226 op/s
Oct 02 12:38:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3763526026' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:12 compute-1 ceph-mon[80926]: osdmap e257: 3 total, 3 up, 3 in
Oct 02 12:38:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 e258: 3 total, 3 up, 3 in
Oct 02 12:38:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:13.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:13 compute-1 nova_compute[230518]: 2025-10-02 12:38:13.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:14 compute-1 ceph-mon[80926]: osdmap e258: 3 total, 3 up, 3 in
Oct 02 12:38:14 compute-1 nova_compute[230518]: 2025-10-02 12:38:14.193 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408679.1927392, a1979f95-4814-4098-8baa-6c4497f20612 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:14 compute-1 nova_compute[230518]: 2025-10-02 12:38:14.193 2 INFO nova.compute.manager [-] [instance: a1979f95-4814-4098-8baa-6c4497f20612] VM Stopped (Lifecycle Event)
Oct 02 12:38:14 compute-1 nova_compute[230518]: 2025-10-02 12:38:14.222 2 DEBUG nova.compute.manager [None req-aa62d783-04a0-4bc6-ad7a-156a188e04f2 - - - - - -] [instance: a1979f95-4814-4098-8baa-6c4497f20612] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:14 compute-1 nova_compute[230518]: 2025-10-02 12:38:14.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:15 compute-1 ceph-mon[80926]: pgmap v1863: 305 pgs: 305 active+clean; 374 MiB data, 912 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 2.9 MiB/s wr, 312 op/s
Oct 02 12:38:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:15.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:17.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:17 compute-1 ceph-mon[80926]: pgmap v1864: 305 pgs: 305 active+clean; 387 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 3.6 MiB/s wr, 410 op/s
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.355 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.355 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.370 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.438 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.438 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.445 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.446 2 INFO nova.compute.claims [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:38:18 compute-1 nova_compute[230518]: 2025-10-02 12:38:18.538 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct 02 12:38:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:19.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/666265248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:19.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.082 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.090 2 DEBUG nova.compute.provider_tree [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.254 2 DEBUG nova.scheduler.client.report [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.297 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.298 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:19 compute-1 ceph-mon[80926]: pgmap v1865: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 2.9 MiB/s wr, 374 op/s
Oct 02 12:38:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/666265248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.423 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.449 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.485 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:38:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.602 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.604 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.604 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating image(s)
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.638 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.675 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.706 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.711 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.786 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.787 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.788 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.788 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.813 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:19 compute-1 nova_compute[230518]: 2025-10-02 12:38:19.817 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:20 compute-1 ceph-mon[80926]: pgmap v1866: 305 pgs: 305 active+clean; 387 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Oct 02 12:38:20 compute-1 systemd[1]: Starting dnf makecache...
Oct 02 12:38:21 compute-1 dnf[269515]: Metadata cache refreshed recently.
Oct 02 12:38:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:21.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:21 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 02 12:38:21 compute-1 systemd[1]: Finished dnf makecache.
Oct 02 12:38:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:38:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:38:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 e259: 3 total, 3 up, 3 in
Oct 02 12:38:21 compute-1 nova_compute[230518]: 2025-10-02 12:38:21.927 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.029 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] resizing rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:38:22 compute-1 ceph-mon[80926]: osdmap e259: 3 total, 3 up, 3 in
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.683 2 DEBUG nova.objects.instance [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'migration_context' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ensure instance console log exists: /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.698 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.699 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.699 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.700 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.706 2 WARNING nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.710 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.710 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.713 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.713 2 DEBUG nova.virt.libvirt.host [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.714 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.714 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.715 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.716 2 DEBUG nova.virt.hardware [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:22 compute-1 nova_compute[230518]: 2025-10-02 12:38:22.719 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:23.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2278695610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.231 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:23 compute-1 ceph-mon[80926]: pgmap v1868: 305 pgs: 305 active+clean; 410 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 206 op/s
Oct 02 12:38:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2278695610' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.261 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.267 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1185421420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.716 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.718 2 DEBUG nova.objects.instance [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.734 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <uuid>7cd53cbf-91c2-4750-a4c2-551e50950035</uuid>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <name>instance-00000067</name>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerShowV254Test-server-609721834</nova:name>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:38:22</nova:creationTime>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:user uuid="0b4b918d10704ca5852d80098d253220">tempest-ServerShowV254Test-555313685-project-member</nova:user>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <nova:project uuid="99ed62753466455f8b5795e12d35034e">tempest-ServerShowV254Test-555313685</nova:project>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <system>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="serial">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="uuid">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </system>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <os>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </os>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <features>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </features>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk">
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </source>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config">
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </source>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:38:23 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log" append="off"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <video>
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </video>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:38:23 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:38:23 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:38:23 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:38:23 compute-1 nova_compute[230518]: </domain>
Oct 02 12:38:23 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.829 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.830 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.831 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Using config drive
Oct 02 12:38:23 compute-1 nova_compute[230518]: 2025-10-02 12:38:23.856 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.072 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating config drive at /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.078 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmneraay0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.212 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmneraay0" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.247 2 DEBUG nova.storage.rbd_utils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.253 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:24 compute-1 nova_compute[230518]: 2025-10-02 12:38:24.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1185421420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:25.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:25 compute-1 nova_compute[230518]: 2025-10-02 12:38:25.130 2 DEBUG oslo_concurrency.processutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:25 compute-1 nova_compute[230518]: 2025-10-02 12:38:25.131 2 INFO nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting local config drive /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config because it was imported into RBD.
Oct 02 12:38:25 compute-1 systemd-machined[188247]: New machine qemu-50-instance-00000067.
Oct 02 12:38:25 compute-1 systemd[1]: Started Virtual Machine qemu-50-instance-00000067.
Oct 02 12:38:25 compute-1 ceph-mon[80926]: pgmap v1869: 305 pgs: 305 active+clean; 449 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 221 op/s
Oct 02 12:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.934 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.418 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408706.4179187, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.418 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Resumed (Lifecycle Event)
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.422 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.422 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.425 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance spawned successfully.
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.425 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.445 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.450 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.454 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.454 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.455 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.456 2 DEBUG nova.virt.libvirt.driver [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.466 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.467 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408706.4213724, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.467 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Started (Lifecycle Event)
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.485 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.489 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.512 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.516 2 INFO nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 6.91 seconds to spawn the instance on the hypervisor.
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.516 2 DEBUG nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.560 2 INFO nova.compute.manager [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 8.15 seconds to build instance.
Oct 02 12:38:26 compute-1 nova_compute[230518]: 2025-10-02 12:38:26.574 2 DEBUG oslo_concurrency.lockutils [None req-1de81ac5-7ec9-4a72-b2f6-a5340a0c7970 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:26 compute-1 ceph-mon[80926]: pgmap v1870: 305 pgs: 305 active+clean; 471 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 223 op/s
Oct 02 12:38:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:27 compute-1 podman[269767]: 2025-10-02 12:38:27.848542732 +0000 UTC m=+0.081725652 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:38:27 compute-1 podman[269766]: 2025-10-02 12:38:27.870327018 +0000 UTC m=+0.114514214 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:38:28 compute-1 nova_compute[230518]: 2025-10-02 12:38:28.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:28 compute-1 ceph-mon[80926]: pgmap v1871: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.059 2 INFO nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Rebuilding instance
Oct 02 12:38:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.371 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.392 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.441 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_requests' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.455 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.470 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'resources' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.482 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'migration_context' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.498 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:38:29 compute-1 nova_compute[230518]: 2025-10-02 12:38:29.503 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:38:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:30 compute-1 ceph-mon[80926]: pgmap v1872: 305 pgs: 305 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 955 KiB/s rd, 7.2 MiB/s wr, 194 op/s
Oct 02 12:38:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2566839075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:32 compute-1 ceph-mon[80926]: pgmap v1873: 305 pgs: 305 active+clean; 437 MiB data, 997 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Oct 02 12:38:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:33.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:33 compute-1 nova_compute[230518]: 2025-10-02 12:38:33.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:34 compute-1 nova_compute[230518]: 2025-10-02 12:38:34.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:34 compute-1 ceph-mon[80926]: pgmap v1874: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Oct 02 12:38:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1755282525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2557882276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:35.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/815357803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:36 compute-1 ceph-mon[80926]: pgmap v1875: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 203 op/s
Oct 02 12:38:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:37.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:38 compute-1 nova_compute[230518]: 2025-10-02 12:38:38.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:39 compute-1 ceph-mon[80926]: pgmap v1876: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 653 KiB/s wr, 166 op/s
Oct 02 12:38:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:39.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:39.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:39 compute-1 nova_compute[230518]: 2025-10-02 12:38:39.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:39 compute-1 nova_compute[230518]: 2025-10-02 12:38:39.558 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:38:39 compute-1 podman[269812]: 2025-10-02 12:38:39.846038647 +0000 UTC m=+0.078118979 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:38:39 compute-1 podman[269813]: 2025-10-02 12:38:39.853181782 +0000 UTC m=+0.088052681 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2)
Oct 02 12:38:41 compute-1 ceph-mon[80926]: pgmap v1877: 305 pgs: 305 active+clean; 420 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 92 KiB/s wr, 132 op/s
Oct 02 12:38:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:42 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 02 12:38:42 compute-1 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000067.scope: Consumed 14.175s CPU time.
Oct 02 12:38:42 compute-1 systemd-machined[188247]: Machine qemu-50-instance-00000067 terminated.
Oct 02 12:38:42 compute-1 nova_compute[230518]: 2025-10-02 12:38:42.572 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance shutdown successfully after 13 seconds.
Oct 02 12:38:42 compute-1 nova_compute[230518]: 2025-10-02 12:38:42.579 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.
Oct 02 12:38:42 compute-1 nova_compute[230518]: 2025-10-02 12:38:42.584 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.
Oct 02 12:38:43 compute-1 ceph-mon[80926]: pgmap v1878: 305 pgs: 305 active+clean; 450 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Oct 02 12:38:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.080 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting instance files /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.081 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deletion of /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del complete
Oct 02 12:38:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:43.278 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:38:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:43.279 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.291 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.291 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating image(s)
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.317 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.342 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.366 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.369 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.436 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.437 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.438 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.438 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "dd3a4569add1ef352b7c4d78d5e01667803900b4" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.466 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.470 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.782 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 7cd53cbf-91c2-4750-a4c2-551e50950035_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.849 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] resizing rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.988 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.989 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Ensure instance console log exists: /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.990 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.990 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.991 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.992 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:38:43 compute-1 nova_compute[230518]: 2025-10-02 12:38:43.997 2 WARNING nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.005 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.006 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.013 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.014 2 DEBUG nova.virt.libvirt.host [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.015 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.015 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.016 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.017 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.018 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.019 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.virt.hardware [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.020 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.045 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3017170306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.505 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.539 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.546 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:38:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1256335225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.980 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:44 compute-1 nova_compute[230518]: 2025-10-02 12:38:44.984 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <uuid>7cd53cbf-91c2-4750-a4c2-551e50950035</uuid>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <name>instance-00000067</name>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerShowV254Test-server-609721834</nova:name>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:38:43</nova:creationTime>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:user uuid="0b4b918d10704ca5852d80098d253220">tempest-ServerShowV254Test-555313685-project-member</nova:user>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <nova:project uuid="99ed62753466455f8b5795e12d35034e">tempest-ServerShowV254Test-555313685</nova:project>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="52ef509e-0e22-464e-93c9-3ddcf574cd64"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <system>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="serial">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="uuid">7cd53cbf-91c2-4750-a4c2-551e50950035</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </system>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <os>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </os>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <features>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </features>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk">
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </source>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config">
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </source>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:38:44 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/console.log" append="off"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <video>
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </video>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:38:44 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:38:44 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:38:44 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:38:44 compute-1 nova_compute[230518]: </domain>
Oct 02 12:38:44 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.067 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.068 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.069 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Using config drive
Oct 02 12:38:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:45 compute-1 ceph-mon[80926]: pgmap v1879: 305 pgs: 305 active+clean; 453 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:38:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3017170306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1256335225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:38:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:45.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.120 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.145 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.376 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Creating config drive at /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.381 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n5ba1y7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.514 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2n5ba1y7" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.559 2 DEBUG nova.storage.rbd_utils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] rbd image 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.563 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.949 2 DEBUG oslo_concurrency.processutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config 7cd53cbf-91c2-4750-a4c2-551e50950035_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:45 compute-1 nova_compute[230518]: 2025-10-02 12:38:45.950 2 INFO nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting local config drive /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035/disk.config because it was imported into RBD.
Oct 02 12:38:46 compute-1 systemd-machined[188247]: New machine qemu-51-instance-00000067.
Oct 02 12:38:46 compute-1 systemd[1]: Started Virtual Machine qemu-51-instance-00000067.
Oct 02 12:38:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:47.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.187 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 7cd53cbf-91c2-4750-a4c2-551e50950035 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.188 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408727.1864934, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.189 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Resumed (Lifecycle Event)
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.192 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.192 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.196 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance spawned successfully.
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.196 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.211 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.214 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.257 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.258 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408727.1870239, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.258 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Started (Lifecycle Event)
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.263 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.263 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.264 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.264 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.265 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.265 2 DEBUG nova.virt.libvirt.driver [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.293 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.296 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:38:47 compute-1 ceph-mon[80926]: pgmap v1880: 305 pgs: 305 active+clean; 404 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 2.5 MiB/s wr, 131 op/s
Oct 02 12:38:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4193470462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.332 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.350 2 DEBUG nova.compute.manager [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.412 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.412 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.413 2 DEBUG nova.objects.instance [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:38:47 compute-1 nova_compute[230518]: 2025-10-02 12:38:47.482 2 DEBUG oslo_concurrency.lockutils [None req-7a0836a2-35ad-4913-b256-16cf9b6601f7 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:48 compute-1 nova_compute[230518]: 2025-10-02 12:38:48.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:38:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:38:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:49.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:49 compute-1 ceph-mon[80926]: pgmap v1881: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.401 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.401 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.402 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.406 2 INFO nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Terminating instance
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquired lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.407 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:38:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:49 compute-1 nova_compute[230518]: 2025-10-02 12:38:49.693 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:38:50 compute-1 nova_compute[230518]: 2025-10-02 12:38:50.143 2 DEBUG nova.network.neutron [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:38:50 compute-1 nova_compute[230518]: 2025-10-02 12:38:50.159 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Releasing lock "refresh_cache-7cd53cbf-91c2-4750-a4c2-551e50950035" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:38:50 compute-1 nova_compute[230518]: 2025-10-02 12:38:50.161 2 DEBUG nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:38:50 compute-1 ceph-mon[80926]: pgmap v1882: 305 pgs: 305 active+clean; 341 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct 02 12:38:50 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct 02 12:38:50 compute-1 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d00000067.scope: Consumed 3.962s CPU time.
Oct 02 12:38:50 compute-1 systemd-machined[188247]: Machine qemu-51-instance-00000067 terminated.
Oct 02 12:38:50 compute-1 nova_compute[230518]: 2025-10-02 12:38:50.792 2 INFO nova.virt.libvirt.driver [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance destroyed successfully.
Oct 02 12:38:50 compute-1 nova_compute[230518]: 2025-10-02 12:38:50.794 2 DEBUG nova.objects.instance [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lazy-loading 'resources' on Instance uuid 7cd53cbf-91c2-4750-a4c2-551e50950035 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:38:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:51.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:51 compute-1 sudo[270236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:51 compute-1 sudo[270236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-1 sudo[270236]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-1 sudo[270261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:38:51 compute-1 sudo[270261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-1 sudo[270261]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-1 sudo[270286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:38:51 compute-1 sudo[270286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:51 compute-1 sudo[270286]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:51 compute-1 sudo[270311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:38:51 compute-1 sudo[270311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:38:52 compute-1 sudo[270311]: pam_unix(sudo:session): session closed for user root
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.093 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:53.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:53 compute-1 ceph-mon[80926]: pgmap v1883: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 309 op/s
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:38:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:38:53.282 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:38:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3361022258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.875 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.782s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.971 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:53 compute-1 nova_compute[230518]: 2025-10-02 12:38:53.972 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.172 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.173 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4443MB free_disk=20.876331329345703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.173 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.174 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.297 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7cd53cbf-91c2-4750-a4c2-551e50950035 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.298 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.298 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:54 compute-1 nova_compute[230518]: 2025-10-02 12:38:54.407 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:38:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:38:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:38:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:38:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:38:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3361022258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:55.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:38:55 compute-1 ceph-mon[80926]: pgmap v1884: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Oct 02 12:38:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:38:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1239224870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:56 compute-1 nova_compute[230518]: 2025-10-02 12:38:56.508 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:38:56 compute-1 nova_compute[230518]: 2025-10-02 12:38:56.517 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:38:56 compute-1 nova_compute[230518]: 2025-10-02 12:38:56.539 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:38:56 compute-1 nova_compute[230518]: 2025-10-02 12:38:56.597 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:38:56 compute-1 nova_compute[230518]: 2025-10-02 12:38:56.598 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:38:56 compute-1 ceph-mon[80926]: pgmap v1885: 305 pgs: 305 active+clean; 333 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct 02 12:38:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1343751637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1239224870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:57 compute-1 nova_compute[230518]: 2025-10-02 12:38:57.592 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:57 compute-1 nova_compute[230518]: 2025-10-02 12:38:57.593 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:58 compute-1 nova_compute[230518]: 2025-10-02 12:38:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:58 compute-1 nova_compute[230518]: 2025-10-02 12:38:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:38:58 compute-1 podman[270417]: 2025-10-02 12:38:58.867874425 +0000 UTC m=+0.098720687 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 12:38:58 compute-1 podman[270416]: 2025-10-02 12:38:58.92235551 +0000 UTC m=+0.152618563 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:38:59 compute-1 nova_compute[230518]: 2025-10-02 12:38:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:59 compute-1 nova_compute[230518]: 2025-10-02 12:38:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:59 compute-1 nova_compute[230518]: 2025-10-02 12:38:59.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:38:59 compute-1 nova_compute[230518]: 2025-10-02 12:38:59.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:38:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:38:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:38:59.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:38:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:38:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:38:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:38:59.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:38:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/668701634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:38:59 compute-1 nova_compute[230518]: 2025-10-02 12:38:59.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:01.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:01.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:01 compute-1 ceph-mon[80926]: pgmap v1886: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 239 op/s
Oct 02 12:39:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/714823305' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/883040753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:03 compute-1 nova_compute[230518]: 2025-10-02 12:39:03.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:03.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:03 compute-1 ceph-mon[80926]: pgmap v1887: 305 pgs: 305 active+clean; 294 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 14 KiB/s wr, 127 op/s
Oct 02 12:39:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2573262845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:03 compute-1 nova_compute[230518]: 2025-10-02 12:39:03.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:03 compute-1 sudo[270457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:39:03 compute-1 sudo[270457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:03 compute-1 sudo[270457]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:03 compute-1 sudo[270482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:39:03 compute-1 sudo[270482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:39:03 compute-1 sudo[270482]: pam_unix(sudo:session): session closed for user root
Oct 02 12:39:04 compute-1 nova_compute[230518]: 2025-10-02 12:39:04.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:04 compute-1 ceph-mon[80926]: pgmap v1888: 305 pgs: 305 active+clean; 300 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 169 KiB/s wr, 169 op/s
Oct 02 12:39:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:39:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:05.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:05.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:05 compute-1 nova_compute[230518]: 2025-10-02 12:39:05.790 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408730.7884607, 7cd53cbf-91c2-4750-a4c2-551e50950035 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:05 compute-1 nova_compute[230518]: 2025-10-02 12:39:05.790 2 INFO nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] VM Stopped (Lifecycle Event)
Oct 02 12:39:05 compute-1 nova_compute[230518]: 2025-10-02 12:39:05.826 2 DEBUG nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:05 compute-1 nova_compute[230518]: 2025-10-02 12:39:05.830 2 DEBUG nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:39:05 compute-1 nova_compute[230518]: 2025-10-02 12:39:05.850 2 INFO nova.compute.manager [None req-364567b6-30f3-47d1-9435-0f78c2132d79 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] During sync_power_state the instance has a pending task (deleting). Skip.
Oct 02 12:39:05 compute-1 ceph-mon[80926]: pgmap v1889: 305 pgs: 305 active+clean; 317 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 94 op/s
Oct 02 12:39:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:39:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3364850339' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:39:06 compute-1 nova_compute[230518]: 2025-10-02 12:39:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:06 compute-1 nova_compute[230518]: 2025-10-02 12:39:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:39:06 compute-1 nova_compute[230518]: 2025-10-02 12:39:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:39:06 compute-1 nova_compute[230518]: 2025-10-02 12:39:06.152 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:39:06 compute-1 nova_compute[230518]: 2025-10-02 12:39:06.152 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:39:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:07.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:07 compute-1 ceph-mon[80926]: pgmap v1890: 305 pgs: 305 active+clean; 341 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.319 2 INFO nova.virt.libvirt.driver [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deleting instance files /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.320 2 INFO nova.virt.libvirt.driver [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deletion of /var/lib/nova/instances/7cd53cbf-91c2-4750-a4c2-551e50950035_del complete
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.386 2 INFO nova.compute.manager [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 18.22 seconds to destroy the instance on the hypervisor.
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.386 2 DEBUG oslo.service.loopingcall [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.387 2 DEBUG nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.387 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:39:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2854324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4233012682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.661 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.687 2 DEBUG nova.network.neutron [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.718 2 INFO nova.compute.manager [-] [instance: 7cd53cbf-91c2-4750-a4c2-551e50950035] Took 0.33 seconds to deallocate network for instance.
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.776 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.777 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:08 compute-1 nova_compute[230518]: 2025-10-02 12:39:08.848 2 DEBUG oslo_concurrency.processutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:09.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3862223501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.399 2 DEBUG oslo_concurrency.processutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.406 2 DEBUG nova.compute.provider_tree [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.431 2 DEBUG nova.scheduler.client.report [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.479 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.572 2 INFO nova.scheduler.client.report [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Deleted allocations for instance 7cd53cbf-91c2-4750-a4c2-551e50950035
Oct 02 12:39:09 compute-1 ceph-mon[80926]: pgmap v1891: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Oct 02 12:39:09 compute-1 nova_compute[230518]: 2025-10-02 12:39:09.706 2 DEBUG oslo_concurrency.lockutils [None req-8576eb26-dd12-4096-a5fc-727e102d5fd0 0b4b918d10704ca5852d80098d253220 99ed62753466455f8b5795e12d35034e - - default default] Lock "7cd53cbf-91c2-4750-a4c2-551e50950035" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 20.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:10 compute-1 podman[270529]: 2025-10-02 12:39:10.819028602 +0000 UTC m=+0.061529177 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:39:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3862223501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:10 compute-1 ceph-mon[80926]: pgmap v1892: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 12:39:10 compute-1 podman[270530]: 2025-10-02 12:39:10.84060381 +0000 UTC m=+0.075576609 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:39:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:11.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:11.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:13.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:13 compute-1 ceph-mon[80926]: pgmap v1893: 305 pgs: 305 active+clean; 342 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Oct 02 12:39:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:13 compute-1 nova_compute[230518]: 2025-10-02 12:39:13.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:14 compute-1 nova_compute[230518]: 2025-10-02 12:39:14.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.066 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.066 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.098 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:39:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.204 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.205 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.214 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.214 2 INFO nova.compute.claims [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:39:15 compute-1 ceph-mon[80926]: pgmap v1894: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 1.7 MiB/s wr, 45 op/s
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.441 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1379615158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.930 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:15 compute-1 nova_compute[230518]: 2025-10-02 12:39:15.939 2 DEBUG nova.compute.provider_tree [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.027 2 DEBUG nova.scheduler.client.report [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.129 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.130 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:39:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.245 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.246 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.282 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.306 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:39:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1379615158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.542 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.544 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.545 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating image(s)
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.595 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.629 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.657 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.661 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.755 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.758 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.759 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.759 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.798 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.805 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:16 compute-1 nova_compute[230518]: 2025-10-02 12:39:16.933 2 DEBUG nova.policy [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:39:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:17 compute-1 ceph-mon[80926]: pgmap v1895: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 945 KiB/s rd, 828 KiB/s wr, 64 op/s
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.233 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.320 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:39:18 compute-1 ceph-mon[80926]: pgmap v1896: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 91 op/s
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.639 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Successfully created port: 8d9cc17a-7804-4743-925a-496d9fe78c73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.776 2 DEBUG nova.objects.instance [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.829 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.829 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Ensure instance console log exists: /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:18 compute-1 nova_compute[230518]: 2025-10-02 12:39:18.830 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:19 compute-1 nova_compute[230518]: 2025-10-02 12:39:19.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e260 e260: 3 total, 3 up, 3 in
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.200 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Successfully updated port: 8d9cc17a-7804-4743-925a-496d9fe78c73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.257 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.368 2 DEBUG nova.compute.manager [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.368 2 DEBUG nova.compute.manager [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.369 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:20 compute-1 nova_compute[230518]: 2025-10-02 12:39:20.827 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:39:20 compute-1 ceph-mon[80926]: pgmap v1897: 305 pgs: 305 active+clean; 343 MiB data, 941 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Oct 02 12:39:21 compute-1 ceph-mon[80926]: osdmap e260: 3 total, 3 up, 3 in
Oct 02 12:39:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.638 2 DEBUG nova.network.neutron [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.756 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.756 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance network_info: |[{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.757 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.757 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.761 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start _get_guest_xml network_info=[{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.766 2 WARNING nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.771 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.772 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.776 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.777 2 DEBUG nova.virt.libvirt.host [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.778 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.779 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.780 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.781 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.781 2 DEBUG nova.virt.hardware [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:39:22 compute-1 nova_compute[230518]: 2025-10-02 12:39:22.785 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:23.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:39:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/280546518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e261 e261: 3 total, 3 up, 3 in
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.314 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.449 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:23 compute-1 ceph-mon[80926]: pgmap v1899: 305 pgs: 305 active+clean; 381 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.454 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:39:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1127676684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.876 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.877 2 DEBUG nova.virt.libvirt.vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.878 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.879 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.881 2 DEBUG nova.objects.instance [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.909 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <uuid>7621a774-e0bc-4f4f-b900-c3608dd6835a</uuid>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <name>instance-00000069</name>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestOtherA-server-1525238782</nova:name>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:39:22</nova:creationTime>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <nova:port uuid="8d9cc17a-7804-4743-925a-496d9fe78c73">
Oct 02 12:39:23 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <system>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="serial">7621a774-e0bc-4f4f-b900-c3608dd6835a</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="uuid">7621a774-e0bc-4f4f-b900-c3608dd6835a</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </system>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <os>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </os>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <features>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </features>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7621a774-e0bc-4f4f-b900-c3608dd6835a_disk">
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </source>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config">
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </source>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:39:23 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:c4:d9:d3"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <target dev="tap8d9cc17a-78"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/console.log" append="off"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <video>
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </video>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:39:23 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:39:23 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:39:23 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:39:23 compute-1 nova_compute[230518]: </domain>
Oct 02 12:39:23 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Preparing to wait for external event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.910 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG nova.virt.libvirt.vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.911 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.912 2 DEBUG nova.network.os_vif_util [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.912 2 DEBUG os_vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9cc17a-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d9cc17a-78, col_values=(('external_ids', {'iface-id': '8d9cc17a-7804-4743-925a-496d9fe78c73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:d9:d3', 'vm-uuid': '7621a774-e0bc-4f4f-b900-c3608dd6835a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:23 compute-1 NetworkManager[44960]: <info>  [1759408763.9202] manager: (tap8d9cc17a-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:23 compute-1 nova_compute[230518]: 2025-10-02 12:39:23.926 2 INFO os_vif [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78')
Oct 02 12:39:24 compute-1 nova_compute[230518]: 2025-10-02 12:39:24.103 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:24 compute-1 nova_compute[230518]: 2025-10-02 12:39:24.104 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:39:24 compute-1 nova_compute[230518]: 2025-10-02 12:39:24.104 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:c4:d9:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:39:24 compute-1 nova_compute[230518]: 2025-10-02 12:39:24.105 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Using config drive
Oct 02 12:39:24 compute-1 nova_compute[230518]: 2025-10-02 12:39:24.141 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/280546518' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:24 compute-1 ceph-mon[80926]: osdmap e261: 3 total, 3 up, 3 in
Oct 02 12:39:24 compute-1 ceph-mon[80926]: pgmap v1901: 305 pgs: 305 active+clean; 408 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 129 op/s
Oct 02 12:39:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1127676684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 e262: 3 total, 3 up, 3 in
Oct 02 12:39:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:25.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:25 compute-1 nova_compute[230518]: 2025-10-02 12:39:25.449 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Creating config drive at /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config
Oct 02 12:39:25 compute-1 nova_compute[230518]: 2025-10-02 12:39:25.457 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno3jh7up execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:25 compute-1 nova_compute[230518]: 2025-10-02 12:39:25.592 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpno3jh7up" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:25 compute-1 nova_compute[230518]: 2025-10-02 12:39:25.627 2 DEBUG nova.storage.rbd_utils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:25 compute-1 nova_compute[230518]: 2025-10-02 12:39:25.631 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.935 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:25.936 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:26 compute-1 ceph-mon[80926]: osdmap e262: 3 total, 3 up, 3 in
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.132 2 DEBUG oslo_concurrency.processutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config 7621a774-e0bc-4f4f-b900-c3608dd6835a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.133 2 INFO nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deleting local config drive /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a/disk.config because it was imported into RBD.
Oct 02 12:39:26 compute-1 kernel: tap8d9cc17a-78: entered promiscuous mode
Oct 02 12:39:26 compute-1 ovn_controller[129257]: 2025-10-02T12:39:26Z|00426|binding|INFO|Claiming lport 8d9cc17a-7804-4743-925a-496d9fe78c73 for this chassis.
Oct 02 12:39:26 compute-1 ovn_controller[129257]: 2025-10-02T12:39:26Z|00427|binding|INFO|8d9cc17a-7804-4743-925a-496d9fe78c73: Claiming fa:16:3e:c4:d9:d3 10.100.0.14
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.1977] manager: (tap8d9cc17a-78): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.202 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d9:d3 10.100.0.14'], port_security=['fa:16:3e:c4:d9:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7621a774-e0bc-4f4f-b900-c3608dd6835a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d9cc17a-7804-4743-925a-496d9fe78c73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.203 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d9cc17a-7804-4743-925a-496d9fe78c73 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.205 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:39:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:26 compute-1 systemd-udevd[270891]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6af7ca0d-1a5b-4a05-a5a1-0579858ec11b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.220 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3643647-71 in ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.222 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3643647-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.222 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[660b7088-58ba-40b7-814a-761ff7af4beb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[58d90aa1-e80b-4b56-8af5-2e4af3ffe7dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.2310] device (tap8d9cc17a-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.2360] device (tap8d9cc17a-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.234 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[48128c41-f747-4d2d-885e-f0e16d0b6beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 systemd-machined[188247]: New machine qemu-52-instance-00000069.
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8075c969-0153-4189-a0ca-b99b227d2700]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 systemd[1]: Started Virtual Machine qemu-52-instance-00000069.
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.268 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.269 2 DEBUG nova.network.neutron [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-1 ovn_controller[129257]: 2025-10-02T12:39:26Z|00428|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 ovn-installed in OVS
Oct 02 12:39:26 compute-1 ovn_controller[129257]: 2025-10-02T12:39:26Z|00429|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 up in Southbound
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.293 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[06854f83-f69a-47e4-b8f3-ef4c3222676b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 systemd-udevd[270897]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.2999] manager: (tapf3643647-70): new Veth device (/org/freedesktop/NetworkManager/Devices/203)
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.299 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[688aeeb2-581d-4d85-9c09-e09ea1010f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.308 2 DEBUG oslo_concurrency.lockutils [req-c67976c3-c640-4213-bae8-20a87d248f44 req-82cb7463-2cda-4151-b5d1-8689ee3ab178 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.337 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[937f6336-a6ed-442e-838a-dfe3562248bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.340 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[aa37992d-2c5d-4e46-a836-f386a5f62b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.3640] device (tapf3643647-70): carrier: link connected
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.371 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8b00229b-4d46-4b9d-a92d-69db3a9738a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[526fc128-71c4-4974-944e-7c5d660eb725]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270926, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.403 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[925e4fd6-91b7-4733-9cc0-333ce2d34dd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:edfc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662992, 'tstamp': 662992}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270927, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.420 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3420af-cd21-4c82-aa3c-ac8d91f2124c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270928, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.448 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5c283142-4144-4b9e-9134-ebfca1e17421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.496 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c89657fe-58e1-4e82-8ea1-a91a9eac4627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.497 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.498 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:26 compute-1 kernel: tapf3643647-70: entered promiscuous mode
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.501 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:39:26 compute-1 NetworkManager[44960]: <info>  [1759408766.5020] manager: (tapf3643647-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Oct 02 12:39:26 compute-1 ovn_controller[129257]: 2025-10-02T12:39:26Z|00430|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-1 nova_compute[230518]: 2025-10-02 12:39:26.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.520 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[30340a16-1bd0-438c-b271-5213519252d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.522 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f3643647-7cd9-4c43-8aaa-9b0f3160274b.pid.haproxy
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:39:26.523 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'env', 'PROCESS_TAG=haproxy-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3643647-7cd9-4c43-8aaa-9b0f3160274b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:39:26 compute-1 podman[270960]: 2025-10-02 12:39:26.88059777 +0000 UTC m=+0.033629799 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:39:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:27.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:27 compute-1 podman[270960]: 2025-10-02 12:39:27.218799121 +0000 UTC m=+0.371831150 container create 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:39:27 compute-1 ceph-mon[80926]: pgmap v1903: 305 pgs: 305 active+clean; 419 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 7.9 MiB/s wr, 159 op/s
Oct 02 12:39:27 compute-1 systemd[1]: Started libpod-conmon-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope.
Oct 02 12:39:27 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ac23ed22d2d8032f0cbc3084da7e2a93ca5dafb47c834f98bbf243d6c598e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.692 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408767.6891396, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.696 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Started (Lifecycle Event)
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.731 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.735 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408767.6907604, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.736 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Paused (Lifecycle Event)
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.762 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.765 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:39:27 compute-1 podman[270960]: 2025-10-02 12:39:27.786313016 +0000 UTC m=+0.939345075 container init 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:39:27 compute-1 podman[270960]: 2025-10-02 12:39:27.793843883 +0000 UTC m=+0.946875922 container start 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:39:27 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : New worker (271021) forked
Oct 02 12:39:27 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : Loading success.
Oct 02 12:39:27 compute-1 nova_compute[230518]: 2025-10-02 12:39:27.832 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG nova.compute.manager [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.471 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.472 2 DEBUG oslo_concurrency.lockutils [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.472 2 DEBUG nova.compute.manager [req-4f608fe9-c082-4aa4-97b3-a3e329a6fc6a req-23297757-81e0-4262-a109-6ddb01b87c77 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Processing event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.473 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.477 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408768.4773347, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.478 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Resumed (Lifecycle Event)
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.479 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.485 2 INFO nova.virt.libvirt.driver [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance spawned successfully.
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.486 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.519 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.527 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.533 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.534 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.535 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.537 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.537 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.538 2 DEBUG nova.virt.libvirt.driver [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.573 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.619 2 INFO nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 12.08 seconds to spawn the instance on the hypervisor.
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.619 2 DEBUG nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.704 2 INFO nova.compute.manager [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 13.54 seconds to build instance.
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.751 2 DEBUG oslo_concurrency.lockutils [None req-4a7ffc7a-9cd5-4222-a49a-fcee194750c6 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:28 compute-1 ceph-mon[80926]: pgmap v1904: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.8 MiB/s wr, 193 op/s
Oct 02 12:39:28 compute-1 nova_compute[230518]: 2025-10-02 12:39:28.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:29.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:29 compute-1 podman[271031]: 2025-10-02 12:39:29.815078659 +0000 UTC m=+0.056538330 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 02 12:39:29 compute-1 podman[271030]: 2025-10-02 12:39:29.838404293 +0000 UTC m=+0.085804201 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.644 2 DEBUG nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.645 2 DEBUG oslo_concurrency.lockutils [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.646 2 DEBUG nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:39:30 compute-1 nova_compute[230518]: 2025-10-02 12:39:30.646 2 WARNING nova.compute.manager [req-290775d9-a864-44ec-9911-f766e3ed4608 req-398bf1ed-8364-400e-8dc9-d3f0f0078a09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received unexpected event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with vm_state active and task_state None.
Oct 02 12:39:31 compute-1 ceph-mon[80926]: pgmap v1905: 305 pgs: 305 active+clean; 455 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.8 MiB/s wr, 166 op/s
Oct 02 12:39:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:31.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 e263: 3 total, 3 up, 3 in
Oct 02 12:39:32 compute-1 ceph-mon[80926]: osdmap e263: 3 total, 3 up, 3 in
Oct 02 12:39:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:33 compute-1 nova_compute[230518]: 2025-10-02 12:39:33.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:33.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:33 compute-1 nova_compute[230518]: 2025-10-02 12:39:33.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-1 NetworkManager[44960]: <info>  [1759408773.5966] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct 02 12:39:33 compute-1 NetworkManager[44960]: <info>  [1759408773.5985] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Oct 02 12:39:33 compute-1 nova_compute[230518]: 2025-10-02 12:39:33.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-1 ovn_controller[129257]: 2025-10-02T12:39:33Z|00431|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:39:33 compute-1 nova_compute[230518]: 2025-10-02 12:39:33.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:33 compute-1 ceph-mon[80926]: pgmap v1907: 305 pgs: 305 active+clean; 461 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 214 op/s
Oct 02 12:39:33 compute-1 nova_compute[230518]: 2025-10-02 12:39:33.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:34 compute-1 nova_compute[230518]: 2025-10-02 12:39:34.255 2 DEBUG nova.compute.manager [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:39:34 compute-1 nova_compute[230518]: 2025-10-02 12:39:34.256 2 DEBUG nova.compute.manager [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:39:34 compute-1 nova_compute[230518]: 2025-10-02 12:39:34.256 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:39:34 compute-1 nova_compute[230518]: 2025-10-02 12:39:34.257 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:39:34 compute-1 nova_compute[230518]: 2025-10-02 12:39:34.257 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:39:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:35.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:35.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:35 compute-1 ceph-mon[80926]: pgmap v1908: 305 pgs: 305 active+clean; 463 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 173 op/s
Oct 02 12:39:36 compute-1 nova_compute[230518]: 2025-10-02 12:39:36.037 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:39:36 compute-1 nova_compute[230518]: 2025-10-02 12:39:36.038 2 DEBUG nova.network.neutron [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:39:36 compute-1 nova_compute[230518]: 2025-10-02 12:39:36.122 2 DEBUG oslo_concurrency.lockutils [req-733ecb88-70cf-498a-9d49-57ce6605b58c req-365d1a68-e275-4a9c-911b-2480920c3385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:39:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:36 compute-1 ceph-mon[80926]: pgmap v1909: 305 pgs: 305 active+clean; 467 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Oct 02 12:39:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1441741427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:37.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:37.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:37 compute-1 nova_compute[230518]: 2025-10-02 12:39:37.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:38 compute-1 nova_compute[230518]: 2025-10-02 12:39:38.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:38 compute-1 ceph-mon[80926]: pgmap v1910: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:39 compute-1 nova_compute[230518]: 2025-10-02 12:39:39.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:39.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:41.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:41 compute-1 ceph-mon[80926]: pgmap v1911: 305 pgs: 305 active+clean; 473 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 569 KiB/s wr, 126 op/s
Oct 02 12:39:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:41 compute-1 podman[271077]: 2025-10-02 12:39:41.802078606 +0000 UTC m=+0.057434199 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:39:41 compute-1 podman[271078]: 2025-10-02 12:39:41.820581607 +0000 UTC m=+0.074051340 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:39:42 compute-1 ovn_controller[129257]: 2025-10-02T12:39:42Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c4:d9:d3 10.100.0.14
Oct 02 12:39:42 compute-1 ovn_controller[129257]: 2025-10-02T12:39:42Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c4:d9:d3 10.100.0.14
Oct 02 12:39:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2123990606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/753406698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:39:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:39:43 compute-1 nova_compute[230518]: 2025-10-02 12:39:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:43 compute-1 ceph-mon[80926]: pgmap v1912: 305 pgs: 305 active+clean; 504 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:39:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2694472616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:39:44 compute-1 nova_compute[230518]: 2025-10-02 12:39:44.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:44 compute-1 ceph-mon[80926]: pgmap v1913: 305 pgs: 305 active+clean; 524 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 2.8 MiB/s wr, 96 op/s
Oct 02 12:39:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:47 compute-1 ceph-mon[80926]: pgmap v1914: 305 pgs: 305 active+clean; 542 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 951 KiB/s rd, 3.9 MiB/s wr, 133 op/s
Oct 02 12:39:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:47.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:47 compute-1 nova_compute[230518]: 2025-10-02 12:39:47.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:48 compute-1 nova_compute[230518]: 2025-10-02 12:39:48.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:49 compute-1 nova_compute[230518]: 2025-10-02 12:39:49.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:49.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:49 compute-1 ceph-mon[80926]: pgmap v1915: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 989 KiB/s rd, 3.9 MiB/s wr, 147 op/s
Oct 02 12:39:50 compute-1 ceph-mon[80926]: pgmap v1916: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 978 KiB/s rd, 3.7 MiB/s wr, 145 op/s
Oct 02 12:39:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:51.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:52 compute-1 ceph-mon[80926]: pgmap v1917: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:39:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:53 compute-1 nova_compute[230518]: 2025-10-02 12:39:53.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:53.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.062 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.063 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.174 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.604 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.605 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.613 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.613 2 INFO nova.compute.claims [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:39:54 compute-1 ceph-mon[80926]: pgmap v1918: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 156 op/s
Oct 02 12:39:54 compute-1 nova_compute[230518]: 2025-10-02 12:39:54.916 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:55.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:55.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.257 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717183492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.366 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.372 2 DEBUG nova.compute.provider_tree [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.413 2 DEBUG nova.scheduler.client.report [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.488 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.489 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.492 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.493 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.745 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.746 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.901 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:39:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2458645433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/717183492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:55 compute-1 nova_compute[230518]: 2025-10-02 12:39:55.989 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:39:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2398810375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.049 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.149 2 DEBUG nova.policy [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea3659a324824b3991f98e26e33c752f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44e6ad861d934450b2090f40fab255f0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.202 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.203 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.228 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.230 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.231 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating image(s)
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.260 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.285 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.311 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.314 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.399 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.401 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.401 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.402 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.431 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.434 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.580 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.582 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4274MB free_disk=20.78521728515625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.583 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.583 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 104be830-8fcd-47dd-a2b4-f92b66dcbc80 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.849 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:39:56 compute-1 nova_compute[230518]: 2025-10-02 12:39:56.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.069 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:39:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:39:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:39:57 compute-1 ceph-mon[80926]: pgmap v1919: 305 pgs: 305 active+clean; 548 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.2 MiB/s wr, 175 op/s
Oct 02 12:39:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2398810375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3992094948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:57.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.529 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:39:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4133083068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.634 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.641 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] resizing rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.831 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:39:57 compute-1 nova_compute[230518]: 2025-10-02 12:39:57.979 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.079 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.238 2 DEBUG nova.objects.instance [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.324 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.324 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Ensure instance console log exists: /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.325 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:39:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4133083068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:39:58 compute-1 nova_compute[230518]: 2025-10-02 12:39:58.369 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Successfully created port: 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:39:59 compute-1 nova_compute[230518]: 2025-10-02 12:39:59.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:39:59 compute-1 nova_compute[230518]: 2025-10-02 12:39:59.079 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:39:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:39:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:39:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:39:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:39:59.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:39:59 compute-1 ceph-mon[80926]: pgmap v1920: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 433 KiB/s wr, 181 op/s
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3023802060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.517 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Successfully updated port: 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.615 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.616 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquired lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.616 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.741 2 DEBUG nova.compute.manager [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-changed-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.741 2 DEBUG nova.compute.manager [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Refreshing instance network info cache due to event network-changed-06cb1a9b-ada6-4485-bce1-9582e5b82b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:40:00 compute-1 nova_compute[230518]: 2025-10-02 12:40:00.742 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:00 compute-1 podman[271353]: 2025-10-02 12:40:00.835843239 +0000 UTC m=+0.079510301 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:40:00 compute-1 podman[271352]: 2025-10-02 12:40:00.868167597 +0000 UTC m=+0.114477143 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct 02 12:40:01 compute-1 nova_compute[230518]: 2025-10-02 12:40:01.050 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:40:01 compute-1 nova_compute[230518]: 2025-10-02 12:40:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:01 compute-1 nova_compute[230518]: 2025-10-02 12:40:01.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:40:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:01.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:01 compute-1 ceph-mon[80926]: pgmap v1921: 305 pgs: 305 active+clean; 555 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 379 KiB/s wr, 142 op/s
Oct 02 12:40:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3158801099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3170026607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1698379328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2482779898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:02.374 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:02 compute-1 nova_compute[230518]: 2025-10-02 12:40:02.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:02.375 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:40:02 compute-1 ceph-mon[80926]: pgmap v1922: 305 pgs: 305 active+clean; 579 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 168 op/s
Oct 02 12:40:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:03.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.362 2 DEBUG nova.network.neutron [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.530 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Releasing lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.531 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance network_info: |[{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.531 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.532 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Refreshing network info cache for port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.538 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start _get_guest_xml network_info=[{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.545 2 WARNING nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.550 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.551 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.555 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.556 2 DEBUG nova.virt.libvirt.host [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.557 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.558 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.559 2 DEBUG nova.virt.hardware [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:40:03 compute-1 nova_compute[230518]: 2025-10-02 12:40:03.561 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:03 compute-1 sudo[271413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:03 compute-1 sudo[271413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:03 compute-1 sudo[271413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:03 compute-1 sudo[271438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:40:03 compute-1 sudo[271438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:03 compute-1 sudo[271438]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1075071985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:04 compute-1 sudo[271463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.058 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:04 compute-1 sudo[271463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:04 compute-1 sudo[271463]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.088 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.093 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:04 compute-1 sudo[271504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:40:04 compute-1 sudo[271504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:40:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938358496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.501 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.503 2 DEBUG nova.virt.libvirt.vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-730584977',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:56Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.503 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.504 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.505 2 DEBUG nova.objects.instance [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.565 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <uuid>104be830-8fcd-47dd-a2b4-f92b66dcbc80</uuid>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <name>instance-0000006b</name>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1275138439</nova:name>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:40:03</nova:creationTime>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:user uuid="ea3659a324824b3991f98e26e33c752f">tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member</nova:user>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:project uuid="44e6ad861d934450b2090f40fab255f0">tempest-ServersNegativeTestMultiTenantJSON-730584977</nova:project>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <nova:port uuid="06cb1a9b-ada6-4485-bce1-9582e5b82b6f">
Oct 02 12:40:04 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <system>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="serial">104be830-8fcd-47dd-a2b4-f92b66dcbc80</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="uuid">104be830-8fcd-47dd-a2b4-f92b66dcbc80</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </system>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <os>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </os>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <features>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </features>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk">
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </source>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config">
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </source>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:40:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:80:d7:1c"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <target dev="tap06cb1a9b-ad"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/console.log" append="off"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <video>
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </video>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:40:04 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:40:04 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:40:04 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:40:04 compute-1 nova_compute[230518]: </domain>
Oct 02 12:40:04 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.566 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Preparing to wait for external event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.567 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.568 2 DEBUG nova.virt.libvirt.vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-7
30584977',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:39:56Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.568 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.569 2 DEBUG nova.network.os_vif_util [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.569 2 DEBUG os_vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.574 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06cb1a9b-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.575 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06cb1a9b-ad, col_values=(('external_ids', {'iface-id': '06cb1a9b-ada6-4485-bce1-9582e5b82b6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:d7:1c', 'vm-uuid': '104be830-8fcd-47dd-a2b4-f92b66dcbc80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:04 compute-1 NetworkManager[44960]: <info>  [1759408804.5777] manager: (tap06cb1a9b-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.584 2 INFO os_vif [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad')
Oct 02 12:40:04 compute-1 sudo[271504]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:04 compute-1 ceph-mon[80926]: pgmap v1923: 305 pgs: 305 active+clean; 594 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:40:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1075071985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/938358496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.845 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.845 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.846 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] No VIF found with MAC fa:16:3e:80:d7:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.846 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Using config drive
Oct 02 12:40:04 compute-1 nova_compute[230518]: 2025-10-02 12:40:04.893 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:05 compute-1 nova_compute[230518]: 2025-10-02 12:40:05.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:05.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:05 compute-1 nova_compute[230518]: 2025-10-02 12:40:05.957 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Creating config drive at /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config
Oct 02 12:40:05 compute-1 nova_compute[230518]: 2025-10-02 12:40:05.961 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8quwt69 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:40:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3067416809' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.095 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps8quwt69" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.244 2 DEBUG nova.storage.rbd_utils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] rbd image 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.247 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.274 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:40:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.902 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.902 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.903 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.903 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.944 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updated VIF entry in instance network info cache for port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:40:06 compute-1 nova_compute[230518]: 2025-10-02 12:40:06.944 2 DEBUG nova.network.neutron [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [{"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.108 2 DEBUG oslo_concurrency.processutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config 104be830-8fcd-47dd-a2b4-f92b66dcbc80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.108 2 INFO nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deleting local config drive /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80/disk.config because it was imported into RBD.
Oct 02 12:40:07 compute-1 ceph-mon[80926]: pgmap v1924: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.140 2 DEBUG oslo_concurrency.lockutils [req-7482d442-8f66-4140-9b7d-329aa83d6c75 req-c9d45933-ae27-4a32-b102-3c7069e086ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-104be830-8fcd-47dd-a2b4-f92b66dcbc80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:07 compute-1 kernel: tap06cb1a9b-ad: entered promiscuous mode
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.1784] manager: (tap06cb1a9b-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct 02 12:40:07 compute-1 ovn_controller[129257]: 2025-10-02T12:40:07Z|00432|binding|INFO|Claiming lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f for this chassis.
Oct 02 12:40:07 compute-1 ovn_controller[129257]: 2025-10-02T12:40:07Z|00433|binding|INFO|06cb1a9b-ada6-4485-bce1-9582e5b82b6f: Claiming fa:16:3e:80:d7:1c 10.100.0.7
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 ovn_controller[129257]: 2025-10-02T12:40:07Z|00434|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f ovn-installed in OVS
Oct 02 12:40:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:07.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 ovn_controller[129257]: 2025-10-02T12:40:07Z|00435|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f up in Southbound
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.208 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d7:1c 10.100.0.7'], port_security=['fa:16:3e:80:d7:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '104be830-8fcd-47dd-a2b4-f92b66dcbc80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6ad861d934450b2090f40fab255f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '26178b5a-3362-47ee-8eb1-d9a8089548c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab327b80-2049-4821-a881-044c34a4c8df, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=06cb1a9b-ada6-4485-bce1-9582e5b82b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f in datapath 17002dea-5c5e-46a6-892d-d32d33c1f02d bound to our chassis
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.215 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17002dea-5c5e-46a6-892d-d32d33c1f02d
Oct 02 12:40:07 compute-1 systemd-machined[188247]: New machine qemu-53-instance-0000006b.
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.230 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d948edaa-1458-448b-8ebf-5c8f9473b468]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.231 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17002dea-51 in ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:40:07 compute-1 systemd[1]: Started Virtual Machine qemu-53-instance-0000006b.
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.236 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17002dea-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.236 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d39c58e6-5282-403f-afd1-61df01da6fac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.237 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b808c8-e0f0-4518-888b-09ddaad39c55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 systemd-udevd[271660]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.2581] device (tap06cb1a9b-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.2592] device (tap06cb1a9b-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.266 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[565b6bb6-431c-4f95-827b-eb17a50c1629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.300 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bbcfd085-ee4b-41e5-80d6-0633e850915e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.330 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bdabbcf9-a78d-4e08-b03e-8f363cbf9103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.3389] manager: (tap17002dea-50): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Oct 02 12:40:07 compute-1 systemd-udevd[271663]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.337 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7dac6f7-661e-4f17-bc18-62e3e07a33f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.375 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[874c6c87-2742-496c-a544-08ce9988baa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.376 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.377 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6f7a32-356b-43d5-bfa4-0552f4eb16ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.4038] device (tap17002dea-50): carrier: link connected
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.408 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b5570c9d-62df-4882-be05-928828a610de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a7bf0e-462f-4d34-8559-2b9fbd329e8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17002dea-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:14:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667096, 'reachable_time': 22697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271692, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.440 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[14da3ba9-2bee-4f1d-90f7-5c1cb0a14a6a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:14e3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667096, 'tstamp': 667096}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271693, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.457 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2b683b-25a3-4c87-a134-dd2d48dc04d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17002dea-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:14:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 133], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667096, 'reachable_time': 22697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271701, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.481 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[213180b5-3eb6-4f35-a8b8-1d082e908826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1627ffd9-2101-4eca-9121-5a3d27c12847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17002dea-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.522 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17002dea-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-1 NetworkManager[44960]: <info>  [1759408807.5250] manager: (tap17002dea-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 kernel: tap17002dea-50: entered promiscuous mode
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17002dea-50, col_values=(('external_ids', {'iface-id': 'ecf8c253-9ae0-49e5-afdc-690e186df947'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 ovn_controller[129257]: 2025-10-02T12:40:07Z|00436|binding|INFO|Releasing lport ecf8c253-9ae0-49e5-afdc-690e186df947 from this chassis (sb_readonly=0)
Oct 02 12:40:07 compute-1 nova_compute[230518]: 2025-10-02 12:40:07.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.545 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bee04259-91b9-42c4-8aaa-0779db113807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.546 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-17002dea-5c5e-46a6-892d-d32d33c1f02d
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/17002dea-5c5e-46a6-892d-d32d33c1f02d.pid.haproxy
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 17002dea-5c5e-46a6-892d-d32d33c1f02d
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:40:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:07.546 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'env', 'PROCESS_TAG=haproxy-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17002dea-5c5e-46a6-892d-d32d33c1f02d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:40:07 compute-1 podman[271768]: 2025-10-02 12:40:07.929211669 +0000 UTC m=+0.054200016 container create c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:40:07 compute-1 systemd[1]: Started libpod-conmon-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope.
Oct 02 12:40:07 compute-1 podman[271768]: 2025-10-02 12:40:07.899801313 +0000 UTC m=+0.024789700 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:40:07 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:40:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca61862b6333cfd3d3887fbf1324742c9b2c5bd3ebde3922936d76af4c5a58c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:40:08 compute-1 podman[271768]: 2025-10-02 12:40:08.015533555 +0000 UTC m=+0.140521922 container init c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 12:40:08 compute-1 podman[271768]: 2025-10-02 12:40:08.020289705 +0000 UTC m=+0.145278062 container start c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:08 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : New worker (271790) forked
Oct 02 12:40:08 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : Loading success.
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.316 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.3157983, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Started (Lifecycle Event)
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.476 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.481 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.3166614, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.481 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Paused (Lifecycle Event)
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.519 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.519 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG oslo_concurrency.lockutils [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.520 2 DEBUG nova.compute.manager [req-ab9c0217-5f63-434b-8a0c-b832921b227b req-574409a9-e92a-4f7d-821a-64a8acecc65e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Processing event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.521 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.526 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.528 2 INFO nova.virt.libvirt.driver [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance spawned successfully.
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.529 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.572 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.576 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408808.525901, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.576 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Resumed (Lifecycle Event)
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.599 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.599 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.600 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.601 2 DEBUG nova.virt.libvirt.driver [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.727 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.732 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:40:08 compute-1 nova_compute[230518]: 2025-10-02 12:40:08.830 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:40:09 compute-1 nova_compute[230518]: 2025-10-02 12:40:09.043 2 INFO nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 12.81 seconds to spawn the instance on the hypervisor.
Oct 02 12:40:09 compute-1 nova_compute[230518]: 2025-10-02 12:40:09.044 2 DEBUG nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:09 compute-1 nova_compute[230518]: 2025-10-02 12:40:09.236 2 INFO nova.compute.manager [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 14.67 seconds to build instance.
Oct 02 12:40:09 compute-1 ceph-mon[80926]: pgmap v1925: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Oct 02 12:40:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:09.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:09 compute-1 nova_compute[230518]: 2025-10-02 12:40:09.523 2 DEBUG oslo_concurrency.lockutils [None req-c340f781-fcc9-4fc5-8c19-4f4d4b0b04f7 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:09 compute-1 nova_compute[230518]: 2025-10-02 12:40:09.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.962 2 DEBUG nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.962 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG oslo_concurrency.lockutils [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.963 2 DEBUG nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:10 compute-1 nova_compute[230518]: 2025-10-02 12:40:10.963 2 WARNING nova.compute.manager [req-0e9b95f5-6eba-4647-9980-ec52d0e26a75 req-3c2ac0d1-a8e2-4d96-9414-bfd32ac1d57e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received unexpected event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with vm_state active and task_state None.
Oct 02 12:40:11 compute-1 nova_compute[230518]: 2025-10-02 12:40:11.183 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:11 compute-1 nova_compute[230518]: 2025-10-02 12:40:11.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:40:11 compute-1 nova_compute[230518]: 2025-10-02 12:40:11.232 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:40:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:11 compute-1 ceph-mon[80926]: pgmap v1926: 305 pgs: 305 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 498 KiB/s rd, 3.2 MiB/s wr, 106 op/s
Oct 02 12:40:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2693718750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/306191312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:12 compute-1 ceph-mon[80926]: pgmap v1927: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 162 op/s
Oct 02 12:40:12 compute-1 podman[271800]: 2025-10-02 12:40:12.873198935 +0000 UTC m=+0.102970731 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:40:12 compute-1 podman[271799]: 2025-10-02 12:40:12.871882564 +0000 UTC m=+0.100952168 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:40:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.434 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.436 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.436 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.437 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.437 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.440 2 INFO nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Terminating instance
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.442 2 DEBUG nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:40:13 compute-1 kernel: tap06cb1a9b-ad (unregistering): left promiscuous mode
Oct 02 12:40:13 compute-1 NetworkManager[44960]: <info>  [1759408813.5259] device (tap06cb1a9b-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 ovn_controller[129257]: 2025-10-02T12:40:13Z|00437|binding|INFO|Releasing lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f from this chassis (sb_readonly=0)
Oct 02 12:40:13 compute-1 ovn_controller[129257]: 2025-10-02T12:40:13Z|00438|binding|INFO|Setting lport 06cb1a9b-ada6-4485-bce1-9582e5b82b6f down in Southbound
Oct 02 12:40:13 compute-1 ovn_controller[129257]: 2025-10-02T12:40:13Z|00439|binding|INFO|Removing iface tap06cb1a9b-ad ovn-installed in OVS
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct 02 12:40:13 compute-1 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006b.scope: Consumed 5.904s CPU time.
Oct 02 12:40:13 compute-1 systemd-machined[188247]: Machine qemu-53-instance-0000006b terminated.
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.684 2 INFO nova.virt.libvirt.driver [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Instance destroyed successfully.
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.684 2 DEBUG nova.objects.instance [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lazy-loading 'resources' on Instance uuid 104be830-8fcd-47dd-a2b4-f92b66dcbc80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:40:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.690 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:d7:1c 10.100.0.7'], port_security=['fa:16:3e:80:d7:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '104be830-8fcd-47dd-a2b4-f92b66dcbc80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44e6ad861d934450b2090f40fab255f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '26178b5a-3362-47ee-8eb1-d9a8089548c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ab327b80-2049-4821-a881-044c34a4c8df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=06cb1a9b-ada6-4485-bce1-9582e5b82b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:40:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.692 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 06cb1a9b-ada6-4485-bce1-9582e5b82b6f in datapath 17002dea-5c5e-46a6-892d-d32d33c1f02d unbound from our chassis
Oct 02 12:40:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.694 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17002dea-5c5e-46a6-892d-d32d33c1f02d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:40:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.695 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea327e6-313f-49d2-8ae5-c4384c0b57ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:13.696 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d namespace which is not needed anymore
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.837 2 DEBUG nova.virt.libvirt.vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1275138439',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1275138439',id=107,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:40:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44e6ad861d934450b2090f40fab255f0',ramdisk_id='',reservation_id='r-i33rn6v4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-730584977',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-730584977-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:40:09Z,user_data=None,user_id='ea3659a324824b3991f98e26e33c752f',uuid=104be830-8fcd-47dd-a2b4-f92b66dcbc80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.837 2 DEBUG nova.network.os_vif_util [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converting VIF {"id": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "address": "fa:16:3e:80:d7:1c", "network": {"id": "17002dea-5c5e-46a6-892d-d32d33c1f02d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1040381813-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44e6ad861d934450b2090f40fab255f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06cb1a9b-ad", "ovs_interfaceid": "06cb1a9b-ada6-4485-bce1-9582e5b82b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.838 2 DEBUG nova.network.os_vif_util [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.838 2 DEBUG os_vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.840 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06cb1a9b-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:40:13 compute-1 nova_compute[230518]: 2025-10-02 12:40:13.846 2 INFO os_vif [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:d7:1c,bridge_name='br-int',has_traffic_filtering=True,id=06cb1a9b-ada6-4485-bce1-9582e5b82b6f,network=Network(17002dea-5c5e-46a6-892d-d32d33c1f02d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06cb1a9b-ad')
Oct 02 12:40:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/926463960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:13 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : haproxy version is 2.8.14-c23fe91
Oct 02 12:40:13 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [NOTICE]   (271788) : path to executable is /usr/sbin/haproxy
Oct 02 12:40:13 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [WARNING]  (271788) : Exiting Master process...
Oct 02 12:40:13 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [ALERT]    (271788) : Current worker (271790) exited with code 143 (Terminated)
Oct 02 12:40:13 compute-1 neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d[271784]: [WARNING]  (271788) : All workers exited. Exiting... (0)
Oct 02 12:40:13 compute-1 systemd[1]: libpod-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope: Deactivated successfully.
Oct 02 12:40:13 compute-1 conmon[271784]: conmon c4bc0140ee7828efdecc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope/container/memory.events
Oct 02 12:40:13 compute-1 podman[271874]: 2025-10-02 12:40:13.874011755 +0000 UTC m=+0.060740483 container died c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:40:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca-userdata-shm.mount: Deactivated successfully.
Oct 02 12:40:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-ca61862b6333cfd3d3887fbf1324742c9b2c5bd3ebde3922936d76af4c5a58c8-merged.mount: Deactivated successfully.
Oct 02 12:40:13 compute-1 podman[271874]: 2025-10-02 12:40:13.935181208 +0000 UTC m=+0.121909926 container cleanup c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:40:13 compute-1 systemd[1]: libpod-conmon-c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca.scope: Deactivated successfully.
Oct 02 12:40:14 compute-1 podman[271921]: 2025-10-02 12:40:14.002790286 +0000 UTC m=+0.044751889 container remove c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.008 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bda872-50de-42b2-b670-3e15617561f3]: (4, ('Thu Oct  2 12:40:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d (c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca)\nc4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca\nThu Oct  2 12:40:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d (c4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca)\nc4bc0140ee7828efdecc4a2ff21fa99cebef971de3832eb9944a03708d4473ca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc11aa79-a367-48ef-b40d-d46a0eaeb5cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.011 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17002dea-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:40:14 compute-1 nova_compute[230518]: 2025-10-02 12:40:14.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:14 compute-1 kernel: tap17002dea-50: left promiscuous mode
Oct 02 12:40:14 compute-1 nova_compute[230518]: 2025-10-02 12:40:14.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.028 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63bb4682-8d1d-48dc-80bb-07ef53e0a62d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.060 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c90e6814-f4da-4182-8c92-e5665de28a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.062 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da2f6056-56ad-4c7a-b15e-47b97d7a7248]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.083 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e69ec05-a08b-433d-8415-b0d00cfbfc97]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667088, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271936, 'error': None, 'target': 'ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.086 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17002dea-5c5e-46a6-892d-d32d33c1f02d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:40:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:14.087 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[691712bb-59d4-4d26-9150-17016ee84795]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:40:14 compute-1 systemd[1]: run-netns-ovnmeta\x2d17002dea\x2d5c5e\x2d46a6\x2d892d\x2dd32d33c1f02d.mount: Deactivated successfully.
Oct 02 12:40:14 compute-1 ceph-mon[80926]: pgmap v1928: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Oct 02 12:40:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:15.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.800 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.801 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.801 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.802 2 DEBUG oslo_concurrency.lockutils [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.802 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:15 compute-1 nova_compute[230518]: 2025-10-02 12:40:15.803 2 DEBUG nova.compute.manager [req-40f4b9a8-13bd-4c46-8a64-399d8d49ba85 req-007126bf-78a6-426d-946a-2593fc8bd7d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-unplugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:40:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:17 compute-1 sudo[271938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:40:17 compute-1 sudo[271938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:17 compute-1 sudo[271938]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:17 compute-1 sudo[271963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:40:17 compute-1 sudo[271963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:40:17 compute-1 sudo[271963]: pam_unix(sudo:session): session closed for user root
Oct 02 12:40:17 compute-1 ceph-mon[80926]: pgmap v1929: 305 pgs: 305 active+clean; 648 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.9 MiB/s wr, 249 op/s
Oct 02 12:40:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:40:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:17.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:40:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:17.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.589 2 INFO nova.virt.libvirt.driver [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deleting instance files /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80_del
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.590 2 INFO nova.virt.libvirt.driver [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deletion of /var/lib/nova/instances/104be830-8fcd-47dd-a2b4-f92b66dcbc80_del complete
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.726 2 INFO nova.compute.manager [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 4.28 seconds to destroy the instance on the hypervisor.
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.727 2 DEBUG oslo.service.loopingcall [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.728 2 DEBUG nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:40:17 compute-1 nova_compute[230518]: 2025-10-02 12:40:17.728 2 DEBUG nova.network.neutron [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.271 2 DEBUG nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.272 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.272 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.273 2 DEBUG oslo_concurrency.lockutils [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.273 2 DEBUG nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] No waiting events found dispatching network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.273 2 WARNING nova.compute.manager [req-fe20158d-57cf-4791-b299-96f082ebf6ec req-eec7c545-38c1-425e-a524-a0380a490532 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received unexpected event network-vif-plugged-06cb1a9b-ada6-4485-bce1-9582e5b82b6f for instance with vm_state active and task_state deleting.
Oct 02 12:40:18 compute-1 ceph-mon[80926]: pgmap v1930: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 264 op/s
Oct 02 12:40:18 compute-1 nova_compute[230518]: 2025-10-02 12:40:18.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:19.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:19.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:19 compute-1 nova_compute[230518]: 2025-10-02 12:40:19.950 2 DEBUG nova.network.neutron [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.028 2 INFO nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Took 2.30 seconds to deallocate network for instance.
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.172 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.172 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.246 2 DEBUG oslo_concurrency.processutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813702915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.681 2 DEBUG oslo_concurrency.processutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.690 2 DEBUG nova.compute.provider_tree [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.844 2 DEBUG nova.scheduler.client.report [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:40:20 compute-1 nova_compute[230518]: 2025-10-02 12:40:20.931 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:21 compute-1 nova_compute[230518]: 2025-10-02 12:40:21.004 2 INFO nova.scheduler.client.report [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Deleted allocations for instance 104be830-8fcd-47dd-a2b4-f92b66dcbc80
Oct 02 12:40:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:21.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:21 compute-1 ceph-mon[80926]: pgmap v1931: 305 pgs: 305 active+clean; 612 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct 02 12:40:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2894032499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1813702915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:21 compute-1 nova_compute[230518]: 2025-10-02 12:40:21.262 2 DEBUG nova.compute.manager [req-5d8e86c3-c3c3-4816-95d8-e518ffb20ca1 req-40464c59-57f3-43cb-90eb-cdf13734e981 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Received event network-vif-deleted-06cb1a9b-ada6-4485-bce1-9582e5b82b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:40:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:21 compute-1 nova_compute[230518]: 2025-10-02 12:40:21.299 2 DEBUG oslo_concurrency.lockutils [None req-f9e75a1d-b278-4dba-b3de-c18723addc72 ea3659a324824b3991f98e26e33c752f 44e6ad861d934450b2090f40fab255f0 - - default default] Lock "104be830-8fcd-47dd-a2b4-f92b66dcbc80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/646677738' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:23.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:23 compute-1 nova_compute[230518]: 2025-10-02 12:40:23.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:23.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:23 compute-1 ceph-mon[80926]: pgmap v1932: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Oct 02 12:40:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2568843289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:23 compute-1 nova_compute[230518]: 2025-10-02 12:40:23.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:24 compute-1 ceph-mon[80926]: pgmap v1933: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 213 op/s
Oct 02 12:40:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:25.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.937 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.937 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:40:25.938 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:27 compute-1 ceph-mon[80926]: pgmap v1934: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Oct 02 12:40:28 compute-1 nova_compute[230518]: 2025-10-02 12:40:28.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:28 compute-1 nova_compute[230518]: 2025-10-02 12:40:28.682 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408813.6819274, 104be830-8fcd-47dd-a2b4-f92b66dcbc80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:40:28 compute-1 nova_compute[230518]: 2025-10-02 12:40:28.683 2 INFO nova.compute.manager [-] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] VM Stopped (Lifecycle Event)
Oct 02 12:40:28 compute-1 nova_compute[230518]: 2025-10-02 12:40:28.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:29 compute-1 ceph-mon[80926]: pgmap v1935: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 KiB/s wr, 123 op/s
Oct 02 12:40:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:29.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:31.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:31 compute-1 ceph-mon[80926]: pgmap v1936: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:31 compute-1 podman[272011]: 2025-10-02 12:40:31.851503202 +0000 UTC m=+0.083054573 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:40:31 compute-1 podman[272010]: 2025-10-02 12:40:31.862313672 +0000 UTC m=+0.108871494 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 12:40:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:40:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:40:33 compute-1 nova_compute[230518]: 2025-10-02 12:40:33.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:33.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:33 compute-1 ceph-mon[80926]: pgmap v1937: 305 pgs: 305 active+clean; 596 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 80 op/s
Oct 02 12:40:33 compute-1 nova_compute[230518]: 2025-10-02 12:40:33.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:34 compute-1 ceph-mon[80926]: pgmap v1938: 305 pgs: 305 active+clean; 597 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 375 KiB/s wr, 56 op/s
Oct 02 12:40:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:35.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:37 compute-1 ceph-mon[80926]: pgmap v1939: 305 pgs: 305 active+clean; 610 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 813 KiB/s rd, 1.3 MiB/s wr, 42 op/s
Oct 02 12:40:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:37.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:37.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:38 compute-1 nova_compute[230518]: 2025-10-02 12:40:38.094 2 DEBUG nova.compute.manager [None req-2f41a135-e1d7-48be-92f7-4bd37b124aa8 - - - - - -] [instance: 104be830-8fcd-47dd-a2b4-f92b66dcbc80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:40:38 compute-1 nova_compute[230518]: 2025-10-02 12:40:38.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:38 compute-1 nova_compute[230518]: 2025-10-02 12:40:38.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:39.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:39.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:39 compute-1 ceph-mon[80926]: pgmap v1940: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:40 compute-1 ceph-mon[80926]: pgmap v1941: 305 pgs: 305 active+clean; 617 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 133 KiB/s rd, 2.0 MiB/s wr, 41 op/s
Oct 02 12:40:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:41.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:41.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:43 compute-1 ceph-mon[80926]: pgmap v1942: 305 pgs: 305 active+clean; 628 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 12:40:43 compute-1 nova_compute[230518]: 2025-10-02 12:40:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:43.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:43.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:43 compute-1 podman[272056]: 2025-10-02 12:40:43.795176157 +0000 UTC m=+0.053295857 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:40:43 compute-1 podman[272057]: 2025-10-02 12:40:43.801432663 +0000 UTC m=+0.056043023 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:40:43 compute-1 nova_compute[230518]: 2025-10-02 12:40:43.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:44 compute-1 ovn_controller[129257]: 2025-10-02T12:40:44Z|00440|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:40:44 compute-1 nova_compute[230518]: 2025-10-02 12:40:44.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:45.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:45.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:45 compute-1 ceph-mon[80926]: pgmap v1943: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 12:40:46 compute-1 ceph-mon[80926]: pgmap v1944: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 02 12:40:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:47.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:47.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:48 compute-1 nova_compute[230518]: 2025-10-02 12:40:48.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:48 compute-1 ceph-mon[80926]: pgmap v1945: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 878 KiB/s wr, 122 op/s
Oct 02 12:40:48 compute-1 nova_compute[230518]: 2025-10-02 12:40:48.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:49.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:50 compute-1 ceph-mon[80926]: pgmap v1946: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:51.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:51.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:52 compute-1 ceph-mon[80926]: pgmap v1947: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 98 op/s
Oct 02 12:40:53 compute-1 nova_compute[230518]: 2025-10-02 12:40:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:53 compute-1 nova_compute[230518]: 2025-10-02 12:40:53.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:53.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:53 compute-1 nova_compute[230518]: 2025-10-02 12:40:53.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:54 compute-1 ceph-mon[80926]: pgmap v1948: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 114 KiB/s wr, 89 op/s
Oct 02 12:40:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:55.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4238417375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:56 compute-1 nova_compute[230518]: 2025-10-02 12:40:56.189 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:40:56 compute-1 ceph-mon[80926]: pgmap v1949: 305 pgs: 305 active+clean; 616 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Oct 02 12:40:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/813156201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.289 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.290 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.291 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:40:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:40:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3880091232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-1 nova_compute[230518]: 2025-10-02 12:40:57.722 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/598766264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3880091232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.941922) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857941969, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2421, "num_deletes": 254, "total_data_size": 5721580, "memory_usage": 5799424, "flush_reason": "Manual Compaction"}
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857974636, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 3742892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44404, "largest_seqno": 46820, "table_properties": {"data_size": 3733017, "index_size": 6241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21101, "raw_average_key_size": 20, "raw_value_size": 3713052, "raw_average_value_size": 3658, "num_data_blocks": 271, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408657, "oldest_key_time": 1759408657, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 32751 microseconds, and 12965 cpu microseconds.
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.974674) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 3742892 bytes OK
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.974693) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.978994) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.979007) EVENT_LOG_v1 {"time_micros": 1759408857979003, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.979022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 5710751, prev total WAL file size 5710751, number of live WAL files 2.
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.980222) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(3655KB)], [87(9402KB)]
Oct 02 12:40:57 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408857980253, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 13371274, "oldest_snapshot_seqno": -1}
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 7134 keys, 11443223 bytes, temperature: kUnknown
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858075562, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 11443223, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11394681, "index_size": 29614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 183721, "raw_average_key_size": 25, "raw_value_size": 11266366, "raw_average_value_size": 1579, "num_data_blocks": 1173, "num_entries": 7134, "num_filter_entries": 7134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759408857, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.075854) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11443223 bytes
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.078616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.1 rd, 119.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(6.6) write-amplify(3.1) OK, records in: 7662, records dropped: 528 output_compression: NoCompression
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.078635) EVENT_LOG_v1 {"time_micros": 1759408858078625, "job": 54, "event": "compaction_finished", "compaction_time_micros": 95471, "compaction_time_cpu_micros": 22431, "output_level": 6, "num_output_files": 1, "total_output_size": 11443223, "num_input_records": 7662, "num_output_records": 7134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858079749, "job": 54, "event": "table_file_deletion", "file_number": 89}
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759408858081673, "job": 54, "event": "table_file_deletion", "file_number": 87}
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:57.980145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:40:58.081797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.111 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.111 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.246 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4271MB free_disk=20.742088317871094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.247 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.710 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.711 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.711 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.815 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.884 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.885 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.903 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.922 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:40:58 compute-1 ceph-mon[80926]: pgmap v1950: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct 02 12:40:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/395680106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:58 compute-1 nova_compute[230518]: 2025-10-02 12:40:58.971 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:40:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:40:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:40:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:40:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:40:59.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:40:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:40:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2487088342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:40:59 compute-1 nova_compute[230518]: 2025-10-02 12:40:59.409 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:40:59 compute-1 nova_compute[230518]: 2025-10-02 12:40:59.415 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:40:59 compute-1 nova_compute[230518]: 2025-10-02 12:40:59.568 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:00 compute-1 ovn_controller[129257]: 2025-10-02T12:41:00Z|00441|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:00 compute-1 nova_compute[230518]: 2025-10-02 12:41:00.029 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:41:00 compute-1 nova_compute[230518]: 2025-10-02 12:41:00.029 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2487088342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3658988662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:00 compute-1 nova_compute[230518]: 2025-10-02 12:41:00.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:01 compute-1 nova_compute[230518]: 2025-10-02 12:41:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:01 compute-1 nova_compute[230518]: 2025-10-02 12:41:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:01 compute-1 nova_compute[230518]: 2025-10-02 12:41:01.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:01 compute-1 nova_compute[230518]: 2025-10-02 12:41:01.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:41:01 compute-1 ceph-mon[80926]: pgmap v1951: 305 pgs: 305 active+clean; 624 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 12:41:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:01.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:01.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:01.376 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:01.378 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:41:01 compute-1 nova_compute[230518]: 2025-10-02 12:41:01.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.104 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/960821142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.531 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.531 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.602 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:41:02 compute-1 podman[272140]: 2025-10-02 12:41:02.79851309 +0000 UTC m=+0.051034156 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 12:41:02 compute-1 podman[272139]: 2025-10-02 12:41:02.821334748 +0000 UTC m=+0.075444114 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.894 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.894 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.899 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:41:02 compute-1 nova_compute[230518]: 2025-10-02 12:41:02.899 2 INFO nova.compute.claims [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.177 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:03 compute-1 ceph-mon[80926]: pgmap v1952: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Oct 02 12:41:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:03.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3647303231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.681 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.686 2 DEBUG nova.compute.provider_tree [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.716 2 DEBUG nova.scheduler.client.report [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.761 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.762 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.874 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.874 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.915 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:41:03 compute-1 nova_compute[230518]: 2025-10-02 12:41:03.938 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.067 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.069 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.069 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating image(s)
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.105 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.143 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.178 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.182 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.239 2 DEBUG nova.policy [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ef7a5dbc3524ee8a7efcd0d3ae36787', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a82ed194b379425aa5e1f31b993eee81', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.277 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.278 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.279 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.279 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.314 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:04 compute-1 nova_compute[230518]: 2025-10-02 12:41:04.318 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3647303231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:05.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:05.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:05 compute-1 ceph-mon[80926]: pgmap v1953: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 138 op/s
Oct 02 12:41:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4048858964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.431 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.541 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] resizing rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.616 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully created port: 007233fd-556d-43ce-97fa-0f19306ba0aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.762 2 DEBUG nova.objects.instance [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.782 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.783 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Ensure instance console log exists: /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.783 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.784 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:05 compute-1 nova_compute[230518]: 2025-10-02 12:41:05.784 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:06 compute-1 ceph-mon[80926]: pgmap v1954: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 149 op/s
Oct 02 12:41:07 compute-1 nova_compute[230518]: 2025-10-02 12:41:07.258 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully created port: 1688d119-1bc8-410a-a80e-8536a113e986 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:41:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:07.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.094 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.333 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.334 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.334 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.335 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.648 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully updated port: 007233fd-556d-43ce-97fa-0f19306ba0aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:41:08 compute-1 ceph-mon[80926]: pgmap v1955: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 964 KiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct 02 12:41:08 compute-1 nova_compute[230518]: 2025-10-02 12:41:08.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.102 2 DEBUG nova.compute.manager [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-changed-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.103 2 DEBUG nova.compute.manager [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing instance network info cache due to event network-changed-007233fd-556d-43ce-97fa-0f19306ba0aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.104 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.104 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.105 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing network info cache for port 007233fd-556d-43ce-97fa-0f19306ba0aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:09.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:09.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:09 compute-1 nova_compute[230518]: 2025-10-02 12:41:09.405 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:41:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e264 e264: 3 total, 3 up, 3 in
Oct 02 12:41:10 compute-1 nova_compute[230518]: 2025-10-02 12:41:10.289 2 DEBUG nova.network.neutron [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:10 compute-1 nova_compute[230518]: 2025-10-02 12:41:10.312 2 DEBUG oslo_concurrency.lockutils [req-b1e0c660-98f8-42cf-bd59-c01e8f34de08 req-51dc8058-d315-4393-8a05-430065b0fa42 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:10.381 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:11 compute-1 nova_compute[230518]: 2025-10-02 12:41:11.138 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:11 compute-1 nova_compute[230518]: 2025-10-02 12:41:11.169 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:11 compute-1 nova_compute[230518]: 2025-10-02 12:41:11.170 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:41:11 compute-1 ceph-mon[80926]: pgmap v1956: 305 pgs: 305 active+clean; 658 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 776 KiB/s rd, 1.3 MiB/s wr, 76 op/s
Oct 02 12:41:11 compute-1 ceph-mon[80926]: osdmap e264: 3 total, 3 up, 3 in
Oct 02 12:41:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999983s ======
Oct 02 12:41:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:11.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999983s
Oct 02 12:41:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:11.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.248 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Successfully updated port: 1688d119-1bc8-410a-a80e-8536a113e986 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:41:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e265 e265: 3 total, 3 up, 3 in
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.469 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.470 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.470 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/791234106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.967 2 DEBUG nova.compute.manager [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-changed-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.968 2 DEBUG nova.compute.manager [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing instance network info cache due to event network-changed-1688d119-1bc8-410a-a80e-8536a113e986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:12 compute-1 nova_compute[230518]: 2025-10-02 12:41:12.968 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:13 compute-1 nova_compute[230518]: 2025-10-02 12:41:13.142 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:41:13 compute-1 nova_compute[230518]: 2025-10-02 12:41:13.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:13.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:13.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:13 compute-1 ceph-mon[80926]: pgmap v1958: 305 pgs: 305 active+clean; 646 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 102 op/s
Oct 02 12:41:13 compute-1 ceph-mon[80926]: osdmap e265: 3 total, 3 up, 3 in
Oct 02 12:41:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e266 e266: 3 total, 3 up, 3 in
Oct 02 12:41:13 compute-1 nova_compute[230518]: 2025-10-02 12:41:13.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:14 compute-1 ceph-mon[80926]: pgmap v1960: 305 pgs: 305 active+clean; 657 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 121 op/s
Oct 02 12:41:14 compute-1 ceph-mon[80926]: osdmap e266: 3 total, 3 up, 3 in
Oct 02 12:41:14 compute-1 podman[272372]: 2025-10-02 12:41:14.838712572 +0000 UTC m=+0.075116284 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:41:14 compute-1 podman[272373]: 2025-10-02 12:41:14.866433992 +0000 UTC m=+0.090519837 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:41:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:15.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:15.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:16 compute-1 ceph-mon[80926]: pgmap v1962: 305 pgs: 305 active+clean; 688 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.2 MiB/s wr, 182 op/s
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.113 2 DEBUG nova.network.neutron [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.367 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.367 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance network_info: |[{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.368 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.368 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Refreshing network info cache for port 1688d119-1bc8-410a-a80e-8536a113e986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.372 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start _get_guest_xml network_info=[{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.378 2 WARNING nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.383 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.385 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.390 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.390 2 DEBUG nova.virt.libvirt.host [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.392 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.393 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.393 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.394 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.394 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.395 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.395 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.396 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.396 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.397 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.397 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.398 2 DEBUG nova.virt.hardware [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.402 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:17 compute-1 sudo[272413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:17 compute-1 sudo[272413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-1 sudo[272413]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-1 sudo[272439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:17 compute-1 sudo[272439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-1 sudo[272439]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-1 sudo[272464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:17 compute-1 sudo[272464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-1 sudo[272464]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:17 compute-1 sudo[272508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:41:17 compute-1 sudo[272508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1150966713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.892 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.933 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:17 compute-1 nova_compute[230518]: 2025-10-02 12:41:17.944 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 podman[272645]: 2025-10-02 12:41:18.421776378 +0000 UTC m=+0.201276280 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.437 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.440 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.440 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.441 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.442 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.444 2 DEBUG nova.objects.instance [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:18 compute-1 podman[272645]: 2025-10-02 12:41:18.543770244 +0000 UTC m=+0.323270146 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.568 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <uuid>4d8c4b3b-58c2-4d3d-863c-49b98333b84d</uuid>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <name>instance-0000006e</name>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersTestMultiNic-server-1784036592</nova:name>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:41:17</nova:creationTime>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:user uuid="9ef7a5dbc3524ee8a7efcd0d3ae36787">tempest-ServersTestMultiNic-2055566246-project-member</nova:user>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:project uuid="a82ed194b379425aa5e1f31b993eee81">tempest-ServersTestMultiNic-2055566246</nova:project>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:port uuid="007233fd-556d-43ce-97fa-0f19306ba0aa">
Oct 02 12:41:18 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.118" ipVersion="4"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <nova:port uuid="1688d119-1bc8-410a-a80e-8536a113e986">
Oct 02 12:41:18 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.1.129" ipVersion="4"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <system>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="serial">4d8c4b3b-58c2-4d3d-863c-49b98333b84d</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="uuid">4d8c4b3b-58c2-4d3d-863c-49b98333b84d</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </system>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <os>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </os>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <features>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </features>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk">
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config">
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </source>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:41:18 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:15:1b:4f"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <target dev="tap007233fd-55"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:df:fd:ef"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <target dev="tap1688d119-1b"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/console.log" append="off"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <video>
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </video>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:41:18 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:41:18 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:41:18 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:41:18 compute-1 nova_compute[230518]: </domain>
Oct 02 12:41:18 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.569 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Preparing to wait for external event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.570 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.570 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.571 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.571 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Preparing to wait for external event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.572 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.572 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.573 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.574 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.575 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.576 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.577 2 DEBUG os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.584 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap007233fd-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap007233fd-55, col_values=(('external_ids', {'iface-id': '007233fd-556d-43ce-97fa-0f19306ba0aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:1b:4f', 'vm-uuid': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 NetworkManager[44960]: <info>  [1759408878.6346] manager: (tap007233fd-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.645 2 INFO os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55')
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.647 2 DEBUG nova.virt.libvirt.vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:03Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.647 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.648 2 DEBUG nova.network.os_vif_util [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.649 2 DEBUG os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.655 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1688d119-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1688d119-1b, col_values=(('external_ids', {'iface-id': '1688d119-1bc8-410a-a80e-8536a113e986', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:fd:ef', 'vm-uuid': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 NetworkManager[44960]: <info>  [1759408878.6590] manager: (tap1688d119-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:18 compute-1 nova_compute[230518]: 2025-10-02 12:41:18.667 2 INFO os_vif [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b')
Oct 02 12:41:18 compute-1 ceph-mon[80926]: pgmap v1963: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 12:41:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1150966713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2588003642' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.013 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.014 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.014 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:15:1b:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.015 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] No VIF found with MAC fa:16:3e:df:fd:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.016 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Using config drive
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.067 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:19 compute-1 sudo[272508]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:19.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:19 compute-1 sudo[272791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:19 compute-1 sudo[272791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:19 compute-1 sudo[272791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:19 compute-1 sudo[272816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:41:19 compute-1 sudo[272816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:19 compute-1 sudo[272816]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.550 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Creating config drive at /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.559 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8fe8tpm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:19 compute-1 sudo[272841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:19 compute-1 sudo[272841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:19 compute-1 sudo[272841]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.699 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf8fe8tpm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:19 compute-1 sudo[272869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.736 2 DEBUG nova.storage.rbd_utils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] rbd image 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:19 compute-1 sudo[272869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:19 compute-1 nova_compute[230518]: 2025-10-02 12:41:19.741 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:20 compute-1 sudo[272869]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.253 2 DEBUG oslo_concurrency.processutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config 4d8c4b3b-58c2-4d3d-863c-49b98333b84d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.254 2 INFO nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deleting local config drive /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d/disk.config because it was imported into RBD.
Oct 02 12:41:20 compute-1 kernel: tap007233fd-55: entered promiscuous mode
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.2985] manager: (tap007233fd-55): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00442|binding|INFO|Claiming lport 007233fd-556d-43ce-97fa-0f19306ba0aa for this chassis.
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00443|binding|INFO|007233fd-556d-43ce-97fa-0f19306ba0aa: Claiming fa:16:3e:15:1b:4f 10.100.0.118
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.3203] manager: (tap1688d119-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct 02 12:41:20 compute-1 systemd-udevd[272978]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:20 compute-1 systemd-udevd[272977]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:20 compute-1 kernel: tap1688d119-1b: entered promiscuous mode
Oct 02 12:41:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.3397] device (tap007233fd-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00444|if_status|INFO|Not updating pb chassis for 1688d119-1bc8-410a-a80e-8536a113e986 now as sb is readonly
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.3407] device (tap007233fd-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.3473] device (tap1688d119-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.3482] device (tap1688d119-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:20 compute-1 systemd-machined[188247]: New machine qemu-54-instance-0000006e.
Oct 02 12:41:20 compute-1 systemd[1]: Started Virtual Machine qemu-54-instance-0000006e.
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00445|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00446|binding|INFO|Claiming lport 1688d119-1bc8-410a-a80e-8536a113e986 for this chassis.
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00447|binding|INFO|1688d119-1bc8-410a-a80e-8536a113e986: Claiming fa:16:3e:df:fd:ef 10.100.1.129
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.427 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:1b:4f 10.100.0.118'], port_security=['fa:16:3e:15:1b:4f 10.100.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.118/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb774493-1e03-4988-a332-4e7f3684ace8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30275ad8-63fb-492c-8ae6-6d69bb1e285c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=007233fd-556d-43ce-97fa-0f19306ba0aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.429 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 007233fd-556d-43ce-97fa-0f19306ba0aa in datapath fb774493-1e03-4988-a332-4e7f3684ace8 bound to our chassis
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.430 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb774493-1e03-4988-a332-4e7f3684ace8
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00448|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa ovn-installed in OVS
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00449|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa up in Southbound
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.443 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe07b53-83ec-48a3-932d-b34a9ca5a3ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.444 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb774493-11 in ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb774493-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4f778e-11d8-4ecb-8207-655128b6ceeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9122fe6d-ce9d-479f-b9a7-1f5d150b7ea7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.457 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[711449e3-30e7-4e96-bdf9-adf0786710b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00450|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 ovn-installed in OVS
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.485 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[15eb3f76-f488-4dbd-8be7-60b4e01f62c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.514 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fd:ef 10.100.1.129'], port_security=['fa:16:3e:df:fd:ef 10.100.1.129'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.129/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6536e1f6-8914-462b-bd28-3b66d21243dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96f8db4a-8d74-4e13-9b5c-7bb0fe283c21, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1688d119-1bc8-410a-a80e-8536a113e986) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00451|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 up in Southbound
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c426186-8fe2-4b92-9cb7-f8752b5adaee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.525 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f34cdf-6263-44d5-8754-419a8a43432b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.5262] manager: (tapfb774493-10): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct 02 12:41:20 compute-1 systemd-udevd[272981]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.565 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3a18d769-dc2a-45c8-a6f6-c50947652466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.568 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8b07cb-fba0-4628-ba07-1a39634a3528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.5952] device (tapfb774493-10): carrier: link connected
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.600 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[de689ae1-7034-422e-9838-55c53fcb1e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.617 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[479b3e55-d475-43d8-8661-a65f23a65a07]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb774493-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b8:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674415, 'reachable_time': 38654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273014, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.634 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0679d263-280d-481b-b48a-a7488863ba2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0c:b8a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674415, 'tstamp': 674415}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273015, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[280cdfcc-5959-4638-8b7b-01cfbf92b556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb774493-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0c:b8:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674415, 'reachable_time': 38654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273016, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af241a16-6f99-44c9-81ac-bbf0ba15f9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.755 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a05975-9830-48ca-a92f-0784214d1f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.757 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb774493-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.758 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.759 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb774493-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 kernel: tapfb774493-10: entered promiscuous mode
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 NetworkManager[44960]: <info>  [1759408880.7641] manager: (tapfb774493-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.770 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb774493-10, col_values=(('external_ids', {'iface-id': 'aa3d5e72-c8a0-4a99-8a41-fff6fec251ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 ovn_controller[129257]: 2025-10-02T12:41:20Z|00452|binding|INFO|Releasing lport aa3d5e72-c8a0-4a99-8a41-fff6fec251ca from this chassis (sb_readonly=0)
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.774 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.776 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcf357f-a90e-40cf-b880-da29641b7dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.777 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-fb774493-1e03-4988-a332-4e7f3684ace8
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/fb774493-1e03-4988-a332-4e7f3684ace8.pid.haproxy
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID fb774493-1e03-4988-a332-4e7f3684ace8
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:41:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:20.781 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'env', 'PROCESS_TAG=haproxy-fb774493-1e03-4988-a332-4e7f3684ace8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb774493-1e03-4988-a332-4e7f3684ace8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:41:20 compute-1 nova_compute[230518]: 2025-10-02 12:41:20.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 e267: 3 total, 3 up, 3 in
Oct 02 12:41:21 compute-1 podman[273091]: 2025-10-02 12:41:21.230619381 +0000 UTC m=+0.071201979 container create a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:41:21 compute-1 systemd[1]: Started libpod-conmon-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope.
Oct 02 12:41:21 compute-1 podman[273091]: 2025-10-02 12:41:21.193150493 +0000 UTC m=+0.033733111 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:41:21 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:41:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca4dd895614093da27e3e57abc7da7e11453af00534b25663f8904c6c64dc69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:21 compute-1 podman[273091]: 2025-10-02 12:41:21.333550108 +0000 UTC m=+0.174132796 container init a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:41:21 compute-1 podman[273091]: 2025-10-02 12:41:21.340004141 +0000 UTC m=+0.180586779 container start a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:41:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:21.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:41:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:41:21 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : New worker (273112) forked
Oct 02 12:41:21 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : Loading success.
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.402 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1688d119-1bc8-410a-a80e-8536a113e986 in datapath 6536e1f6-8914-462b-bd28-3b66d21243dc unbound from our chassis
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.405 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6536e1f6-8914-462b-bd28-3b66d21243dc
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.417 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c250c997-3dd7-44dd-abe6-35c12fa23460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.417 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6536e1f6-81 in ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.420 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6536e1f6-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.420 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6c0d34-6ecf-4d06-aa08-e3020f619edb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.421 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a038d1-69d7-4063-ab66-a07ffad0cb6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.436 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdfed5e-6803-415d-8f9e-1ad147bbb88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.450 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e3e271-e656-4948-8352-730042ee5d5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.457 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408881.4572148, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.458 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Started (Lifecycle Event)
Oct 02 12:41:21 compute-1 ceph-mon[80926]: pgmap v1964: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 121 op/s
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:41:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:41:21 compute-1 ceph-mon[80926]: osdmap e267: 3 total, 3 up, 3 in
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.491 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[276d09a1-9e92-462f-902a-fb104066a0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.493 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.497 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408881.4599757, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.497 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Paused (Lifecycle Event)
Oct 02 12:41:21 compute-1 systemd-udevd[273007]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:21 compute-1 NetworkManager[44960]: <info>  [1759408881.5017] manager: (tap6536e1f6-80): new Veth device (/org/freedesktop/NetworkManager/Devices/217)
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.501 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee11a3c-4705-48c2-8371-9546c2d87182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.539 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[68d08004-a6c8-4623-97a0-54553a7a22b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.542 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[268802ee-ae13-4389-8296-45e2d8b33445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.543 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.546 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:21 compute-1 NetworkManager[44960]: <info>  [1759408881.5774] device (tap6536e1f6-80): carrier: link connected
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.584 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[29e271fb-8b51-4bf0-9284-a969ddc196d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.603 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c8c6ef-5dd0-4aea-9a1e-dc146cbde46f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6536e1f6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:c4:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674513, 'reachable_time': 30107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273131, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cf32d4-b56e-442c-8bee-0051a6d4947f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:c49b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674513, 'tstamp': 674513}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273132, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.630 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31535181-534c-46b1-a29f-60528896419b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6536e1f6-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:c4:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674513, 'reachable_time': 30107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273133, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.651 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.660 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[08098629-e34c-4164-97d8-16543c8c4503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.715 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a78f5343-8b58-40b1-80af-689731945987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.716 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6536e1f6-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.716 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.717 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6536e1f6-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-1 kernel: tap6536e1f6-80: entered promiscuous mode
Oct 02 12:41:21 compute-1 NetworkManager[44960]: <info>  [1759408881.7198] manager: (tap6536e1f6-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.725 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6536e1f6-80, col_values=(('external_ids', {'iface-id': 'da0b8468-e646-4bba-8522-c8959170677a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-1 ovn_controller[129257]: 2025-10-02T12:41:21Z|00453|binding|INFO|Releasing lport da0b8468-e646-4bba-8522-c8959170677a from this chassis (sb_readonly=0)
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.731 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.732 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ba010-b0ef-449e-8a52-442d4ad140f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.733 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-6536e1f6-8914-462b-bd28-3b66d21243dc
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/6536e1f6-8914-462b-bd28-3b66d21243dc.pid.haproxy
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 6536e1f6-8914-462b-bd28-3b66d21243dc
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:41:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:21.734 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'env', 'PROCESS_TAG=haproxy-6536e1f6-8914-462b-bd28-3b66d21243dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6536e1f6-8914-462b-bd28-3b66d21243dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:41:21 compute-1 nova_compute[230518]: 2025-10-02 12:41:21.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.082 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updated VIF entry in instance network info cache for port 1688d119-1bc8-410a-a80e-8536a113e986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.083 2 DEBUG nova.network.neutron [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:22 compute-1 podman[273164]: 2025-10-02 12:41:22.108356271 +0000 UTC m=+0.072416058 container create 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.123 2 DEBUG oslo_concurrency.lockutils [req-01f8422e-394e-4bf1-8f8a-04dfba2ff1c4 req-26c8e608-c554-4b69-b701-60a97d714ab5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4d8c4b3b-58c2-4d3d-863c-49b98333b84d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:22 compute-1 podman[273164]: 2025-10-02 12:41:22.066168925 +0000 UTC m=+0.030228772 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:41:22 compute-1 systemd[1]: Started libpod-conmon-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope.
Oct 02 12:41:22 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:41:22 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e86809a8af64e19198f2ff488e9d14b554026026aa3351bf2e01639a78b89/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:22 compute-1 podman[273164]: 2025-10-02 12:41:22.194858841 +0000 UTC m=+0.158918628 container init 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:41:22 compute-1 podman[273164]: 2025-10-02 12:41:22.199650722 +0000 UTC m=+0.163710479 container start 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:41:22 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : New worker (273185) forked
Oct 02 12:41:22 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : Loading success.
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.434 2 DEBUG nova.compute.manager [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.436 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.436 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.437 2 DEBUG oslo_concurrency.lockutils [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:22 compute-1 nova_compute[230518]: 2025-10-02 12:41:22.438 2 DEBUG nova.compute.manager [req-415786d0-5d42-458d-8ce0-ab4e6c6350ae req-358cd621-b1e2-443c-86ed-fa220b1e456d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Processing event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:41:22 compute-1 ceph-mon[80926]: pgmap v1966: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Oct 02 12:41:23 compute-1 nova_compute[230518]: 2025-10-02 12:41:23.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:23 compute-1 nova_compute[230518]: 2025-10-02 12:41:23.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468430068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.614 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.615 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No event matching network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa in dict_keys([('network-vif-plugged', '1688d119-1bc8-410a-a80e-8536a113e986')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 WARNING nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with vm_state building and task_state spawning.
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG oslo_concurrency.lockutils [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.616 2 DEBUG nova.compute.manager [req-8735e260-e4d5-49eb-be78-5b5062fd87f0 req-c3082a4b-7b10-48f7-a043-8fe7ae4fb72b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Processing event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.617 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.622 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408884.6226764, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.623 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Resumed (Lifecycle Event)
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.625 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.628 2 INFO nova.virt.libvirt.driver [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance spawned successfully.
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.629 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.709 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.713 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.714 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.714 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.715 2 DEBUG nova.virt.libvirt.driver [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:24 compute-1 nova_compute[230518]: 2025-10-02 12:41:24.727 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:24 compute-1 ceph-mon[80926]: pgmap v1967: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct 02 12:41:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/468430068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:25 compute-1 nova_compute[230518]: 2025-10-02 12:41:25.011 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:25 compute-1 nova_compute[230518]: 2025-10-02 12:41:25.204 2 INFO nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 21.14 seconds to spawn the instance on the hypervisor.
Oct 02 12:41:25 compute-1 nova_compute[230518]: 2025-10-02 12:41:25.205 2 DEBUG nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:25.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:25 compute-1 nova_compute[230518]: 2025-10-02 12:41:25.393 2 INFO nova.compute.manager [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 22.52 seconds to build instance.
Oct 02 12:41:25 compute-1 nova_compute[230518]: 2025-10-02 12:41:25.481 2 DEBUG oslo_concurrency.lockutils [None req-18555ab9-a31e-4d9a-afab-1239ad731b07 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.938 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:26 compute-1 nova_compute[230518]: 2025-10-02 12:41:26.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:26 compute-1 ceph-mon[80926]: pgmap v1968: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 992 KiB/s wr, 53 op/s
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.070 2 DEBUG nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.071 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG oslo_concurrency.lockutils [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.072 2 DEBUG nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:27 compute-1 nova_compute[230518]: 2025-10-02 12:41:27.073 2 WARNING nova.compute.manager [req-1a4c096e-05f6-4339-b95e-a3973b5e46c2 req-535b4325-2416-4ed7-8317-e8be878d0e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with vm_state active and task_state None.
Oct 02 12:41:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.272 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.274 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.274 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.275 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.276 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.277 2 INFO nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Terminating instance
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.279 2 DEBUG nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 kernel: tap007233fd-55 (unregistering): left promiscuous mode
Oct 02 12:41:28 compute-1 NetworkManager[44960]: <info>  [1759408888.3305] device (tap007233fd-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00454|binding|INFO|Releasing lport 007233fd-556d-43ce-97fa-0f19306ba0aa from this chassis (sb_readonly=0)
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00455|binding|INFO|Setting lport 007233fd-556d-43ce-97fa-0f19306ba0aa down in Southbound
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00456|binding|INFO|Removing iface tap007233fd-55 ovn-installed in OVS
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 kernel: tap1688d119-1b (unregistering): left promiscuous mode
Oct 02 12:41:28 compute-1 NetworkManager[44960]: <info>  [1759408888.3721] device (tap1688d119-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00457|binding|INFO|Releasing lport 1688d119-1bc8-410a-a80e-8536a113e986 from this chassis (sb_readonly=1)
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00458|binding|INFO|Removing iface tap1688d119-1b ovn-installed in OVS
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00459|if_status|INFO|Dropped 1 log messages in last 1267 seconds (most recently, 1267 seconds ago) due to excessive rate
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00460|if_status|INFO|Not setting lport 1688d119-1bc8-410a-a80e-8536a113e986 down as sb is readonly
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct 02 12:41:28 compute-1 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006e.scope: Consumed 4.745s CPU time.
Oct 02 12:41:28 compute-1 ovn_controller[129257]: 2025-10-02T12:41:28Z|00461|binding|INFO|Setting lport 1688d119-1bc8-410a-a80e-8536a113e986 down in Southbound
Oct 02 12:41:28 compute-1 systemd-machined[188247]: Machine qemu-54-instance-0000006e terminated.
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.441 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:1b:4f 10.100.0.118'], port_security=['fa:16:3e:15:1b:4f 10.100.0.118'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.118/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb774493-1e03-4988-a332-4e7f3684ace8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30275ad8-63fb-492c-8ae6-6d69bb1e285c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=007233fd-556d-43ce-97fa-0f19306ba0aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.443 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 007233fd-556d-43ce-97fa-0f19306ba0aa in datapath fb774493-1e03-4988-a332-4e7f3684ace8 unbound from our chassis
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.447 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb774493-1e03-4988-a332-4e7f3684ace8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.448 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3b01c1-6447-4a16-b559-2b5fd6024cc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.449 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 namespace which is not needed anymore
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.486 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:fd:ef 10.100.1.129'], port_security=['fa:16:3e:df:fd:ef 10.100.1.129'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.129/24', 'neutron:device_id': '4d8c4b3b-58c2-4d3d-863c-49b98333b84d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6536e1f6-8914-462b-bd28-3b66d21243dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a82ed194b379425aa5e1f31b993eee81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a763430-a613-4e24-8301-a0068489d29b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96f8db4a-8d74-4e13-9b5c-7bb0fe283c21, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1688d119-1bc8-410a-a80e-8536a113e986) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:28 compute-1 NetworkManager[44960]: <info>  [1759408888.5203] manager: (tap1688d119-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.538 2 INFO nova.virt.libvirt.driver [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Instance destroyed successfully.
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.539 2 DEBUG nova.objects.instance [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lazy-loading 'resources' on Instance uuid 4d8c4b3b-58c2-4d3d-863c-49b98333b84d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : haproxy version is 2.8.14-c23fe91
Oct 02 12:41:28 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [NOTICE]   (273110) : path to executable is /usr/sbin/haproxy
Oct 02 12:41:28 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [WARNING]  (273110) : Exiting Master process...
Oct 02 12:41:28 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [ALERT]    (273110) : Current worker (273112) exited with code 143 (Terminated)
Oct 02 12:41:28 compute-1 neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8[273106]: [WARNING]  (273110) : All workers exited. Exiting... (0)
Oct 02 12:41:28 compute-1 systemd[1]: libpod-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope: Deactivated successfully.
Oct 02 12:41:28 compute-1 podman[273235]: 2025-10-02 12:41:28.67282147 +0000 UTC m=+0.067439092 container died a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:41:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-9ca4dd895614093da27e3e57abc7da7e11453af00534b25663f8904c6c64dc69-merged.mount: Deactivated successfully.
Oct 02 12:41:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e-userdata-shm.mount: Deactivated successfully.
Oct 02 12:41:28 compute-1 podman[273235]: 2025-10-02 12:41:28.734403196 +0000 UTC m=+0.129020818 container cleanup a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:41:28 compute-1 systemd[1]: libpod-conmon-a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e.scope: Deactivated successfully.
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.748 2 DEBUG nova.virt.libvirt.vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:25Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.750 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.751 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.752 2 DEBUG os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap007233fd-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.769 2 INFO os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:1b:4f,bridge_name='br-int',has_traffic_filtering=True,id=007233fd-556d-43ce-97fa-0f19306ba0aa,network=Network(fb774493-1e03-4988-a332-4e7f3684ace8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap007233fd-55')
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.770 2 DEBUG nova.virt.libvirt.vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1784036592',display_name='tempest-ServersTestMultiNic-server-1784036592',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1784036592',id=110,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a82ed194b379425aa5e1f31b993eee81',ramdisk_id='',reservation_id='r-ztoikflv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2055566246',owner_user_name='tempest-ServersTestMultiNic-2055566246-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:41:25Z,user_data=None,user_id='9ef7a5dbc3524ee8a7efcd0d3ae36787',uuid=4d8c4b3b-58c2-4d3d-863c-49b98333b84d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.771 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converting VIF {"id": "1688d119-1bc8-410a-a80e-8536a113e986", "address": "fa:16:3e:df:fd:ef", "network": {"id": "6536e1f6-8914-462b-bd28-3b66d21243dc", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1451876568", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.129", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1688d119-1b", "ovs_interfaceid": "1688d119-1bc8-410a-a80e-8536a113e986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.772 2 DEBUG nova.network.os_vif_util [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.773 2 DEBUG os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.775 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1688d119-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.782 2 INFO os_vif [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:fd:ef,bridge_name='br-int',has_traffic_filtering=True,id=1688d119-1bc8-410a-a80e-8536a113e986,network=Network(6536e1f6-8914-462b-bd28-3b66d21243dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1688d119-1b')
Oct 02 12:41:28 compute-1 podman[273264]: 2025-10-02 12:41:28.818837311 +0000 UTC m=+0.062245178 container remove a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[800705f3-4515-4951-9d8c-3173879dec86]: (4, ('Thu Oct  2 12:41:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 (a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e)\na65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e\nThu Oct  2 12:41:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 (a65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e)\na65bec35821dc9476870af807eb29169aff1865e6b1cce2f3d57625ecca79d4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.834 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12c2da18-acf4-4f09-a047-95ea6cdc0e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.836 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb774493-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 kernel: tapfb774493-10: left promiscuous mode
Oct 02 12:41:28 compute-1 nova_compute[230518]: 2025-10-02 12:41:28.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.900 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[24b6f280-9b84-4907-a622-3d0731af5ecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.933 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e849e774-814c-4187-958f-fdf3186412fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.934 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7d2a02-ee95-44d8-b9f2-bb3b8c425765]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.959 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6792df92-a878-44f6-a3cc-e4ff4afa4715]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674406, 'reachable_time': 39649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273298, 'error': None, 'target': 'ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 systemd[1]: run-netns-ovnmeta\x2dfb774493\x2d1e03\x2d4988\x2da332\x2d4e7f3684ace8.mount: Deactivated successfully.
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.965 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb774493-1e03-4988-a332-4e7f3684ace8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.966 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4b523e-be9e-4008-a56b-f67d6ffd7d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.967 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1688d119-1bc8-410a-a80e-8536a113e986 in datapath 6536e1f6-8914-462b-bd28-3b66d21243dc unbound from our chassis
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.969 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6536e1f6-8914-462b-bd28-3b66d21243dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.971 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[14abc2e3-0362-4f7b-be39-9faeda4be615]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:28.972 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc namespace which is not needed anymore
Oct 02 12:41:29 compute-1 ceph-mon[80926]: pgmap v1969: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:29 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : haproxy version is 2.8.14-c23fe91
Oct 02 12:41:29 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [NOTICE]   (273183) : path to executable is /usr/sbin/haproxy
Oct 02 12:41:29 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [WARNING]  (273183) : Exiting Master process...
Oct 02 12:41:29 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [ALERT]    (273183) : Current worker (273185) exited with code 143 (Terminated)
Oct 02 12:41:29 compute-1 neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc[273179]: [WARNING]  (273183) : All workers exited. Exiting... (0)
Oct 02 12:41:29 compute-1 systemd[1]: libpod-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope: Deactivated successfully.
Oct 02 12:41:29 compute-1 podman[273316]: 2025-10-02 12:41:29.147185226 +0000 UTC m=+0.046484773 container died 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:41:29 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:41:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-7e6e86809a8af64e19198f2ff488e9d14b554026026aa3351bf2e01639a78b89-merged.mount: Deactivated successfully.
Oct 02 12:41:29 compute-1 podman[273316]: 2025-10-02 12:41:29.186876434 +0000 UTC m=+0.086175941 container cleanup 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:41:29 compute-1 systemd[1]: libpod-conmon-709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f.scope: Deactivated successfully.
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.219 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.220 2 DEBUG oslo_concurrency.lockutils [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.221 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.221 2 DEBUG nova.compute.manager [req-bf61f7a5-00b2-42c9-ad52-2875eaf9304a req-2a372d3a-b569-41fb-8fb7-753e174de89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:41:29 compute-1 podman[273347]: 2025-10-02 12:41:29.262064068 +0000 UTC m=+0.053387490 container remove 709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.269 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f4a8d8-ac93-4b0b-8fd0-46d068d506b5]: (4, ('Thu Oct  2 12:41:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc (709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f)\n709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f\nThu Oct  2 12:41:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc (709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f)\n709664975c11fa7d6399aa3077db76af4adf5e52a47ff6a1c7af3bbb1f57505f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.271 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[052ce9ad-eab5-4602-b01a-d33e4373ca44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.272 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6536e1f6-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-1 kernel: tap6536e1f6-80: left promiscuous mode
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[196b8773-6f64-42fe-857a-3aaed6cc7075]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 nova_compute[230518]: 2025-10-02 12:41:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.301 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80354593-2c1d-4beb-826c-9e4d8a41638b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.303 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65aa31a5-ce90-4090-9915-96e5c4f568c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.319 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[192d45af-7707-4559-a3ed-fc299270fdb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674504, 'reachable_time': 43527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273363, 'error': None, 'target': 'ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.322 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6536e1f6-8914-462b-bd28-3b66d21243dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:41:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:29.322 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[09f35639-1362-4489-a673-6fcc9abe9c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:29.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:29 compute-1 systemd[1]: run-netns-ovnmeta\x2d6536e1f6\x2d8914\x2d462b\x2dbd28\x2d3b66d21243dc.mount: Deactivated successfully.
Oct 02 12:41:29 compute-1 sudo[273364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:41:29 compute-1 sudo[273364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:29 compute-1 sudo[273364]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:29 compute-1 sudo[273389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:41:29 compute-1 sudo[273389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:41:29 compute-1 sudo[273389]: pam_unix(sudo:session): session closed for user root
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.286 2 INFO nova.virt.libvirt.driver [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deleting instance files /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_del
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.287 2 INFO nova.virt.libvirt.driver [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deletion of /var/lib/nova/instances/4d8c4b3b-58c2-4d3d-863c-49b98333b84d_del complete
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.736 2 INFO nova.compute.manager [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 2.46 seconds to destroy the instance on the hypervisor.
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG oslo.service.loopingcall [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:41:30 compute-1 nova_compute[230518]: 2025-10-02 12:41:30.738 2 DEBUG nova.network.neutron [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:41:30 compute-1 ceph-mon[80926]: pgmap v1970: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 17 KiB/s wr, 67 op/s
Oct 02 12:41:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4066901759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:31.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:31.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.392 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.393 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.394 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.394 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.395 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.395 2 WARNING nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-007233fd-556d-43ce-97fa-0f19306ba0aa for instance with vm_state active and task_state deleting.
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.396 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.396 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.397 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.398 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-unplugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.398 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.399 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.399 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.400 2 DEBUG oslo_concurrency.lockutils [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.400 2 DEBUG nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] No waiting events found dispatching network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:31 compute-1 nova_compute[230518]: 2025-10-02 12:41:31.401 2 WARNING nova.compute.manager [req-0173a3ea-3622-4399-b52b-d96c69b8d7ef req-0ed0ae7d-2e01-4fa2-955f-56e4045a80b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received unexpected event network-vif-plugged-1688d119-1bc8-410a-a80e-8536a113e986 for instance with vm_state active and task_state deleting.
Oct 02 12:41:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:33 compute-1 nova_compute[230518]: 2025-10-02 12:41:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:33 compute-1 nova_compute[230518]: 2025-10-02 12:41:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:41:33 compute-1 nova_compute[230518]: 2025-10-02 12:41:33.115 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:41:33 compute-1 nova_compute[230518]: 2025-10-02 12:41:33.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:33.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:33 compute-1 ceph-mon[80926]: pgmap v1971: 305 pgs: 305 active+clean; 680 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 19 KiB/s wr, 103 op/s
Oct 02 12:41:33 compute-1 nova_compute[230518]: 2025-10-02 12:41:33.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:33 compute-1 podman[273415]: 2025-10-02 12:41:33.813319891 +0000 UTC m=+0.062942620 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:41:33 compute-1 podman[273414]: 2025-10-02 12:41:33.845788602 +0000 UTC m=+0.095125122 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.346 2 DEBUG nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-deleted-1688d119-1bc8-410a-a80e-8536a113e986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.346 2 INFO nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Neutron deleted interface 1688d119-1bc8-410a-a80e-8536a113e986; detaching it from the instance and deleting it from the info cache
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.347 2 DEBUG nova.network.neutron [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [{"id": "007233fd-556d-43ce-97fa-0f19306ba0aa", "address": "fa:16:3e:15:1b:4f", "network": {"id": "fb774493-1e03-4988-a332-4e7f3684ace8", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-2108890300", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.118", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a82ed194b379425aa5e1f31b993eee81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap007233fd-55", "ovs_interfaceid": "007233fd-556d-43ce-97fa-0f19306ba0aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:34 compute-1 ceph-mon[80926]: pgmap v1972: 305 pgs: 305 active+clean; 678 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 376 KiB/s wr, 118 op/s
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.634 2 DEBUG nova.compute.manager [req-20b33173-b24b-4db7-8559-f6dfa3afdbc7 req-5f735ab3-85f4-42bc-960c-780fdcd64687 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Detach interface failed, port_id=1688d119-1bc8-410a-a80e-8536a113e986, reason: Instance 4d8c4b3b-58c2-4d3d-863c-49b98333b84d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.760 2 DEBUG nova.network.neutron [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.805 2 INFO nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Took 4.07 seconds to deallocate network for instance.
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.865 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.866 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:34 compute-1 nova_compute[230518]: 2025-10-02 12:41:34.941 2 DEBUG oslo_concurrency.processutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/913548972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.355 2 DEBUG oslo_concurrency.processutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.364 2 DEBUG nova.compute.provider_tree [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.388 2 DEBUG nova.scheduler.client.report [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.450 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.483 2 INFO nova.scheduler.client.report [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Deleted allocations for instance 4d8c4b3b-58c2-4d3d-863c-49b98333b84d
Oct 02 12:41:35 compute-1 nova_compute[230518]: 2025-10-02 12:41:35.603 2 DEBUG oslo_concurrency.lockutils [None req-b7080fb1-e29a-4dee-b8ca-530f125421d7 9ef7a5dbc3524ee8a7efcd0d3ae36787 a82ed194b379425aa5e1f31b993eee81 - - default default] Lock "4d8c4b3b-58c2-4d3d-863c-49b98333b84d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1242631889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/913548972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/147114996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:36 compute-1 nova_compute[230518]: 2025-10-02 12:41:36.493 2 DEBUG nova.compute.manager [req-e7e8dd66-dedf-4f02-84b9-fa901d803602 req-4bf538d8-9ed5-45ac-82c9-4dade05af019 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Received event network-vif-deleted-007233fd-556d-43ce-97fa-0f19306ba0aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:36 compute-1 ceph-mon[80926]: pgmap v1973: 305 pgs: 305 active+clean; 689 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 113 op/s
Oct 02 12:41:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3244284231' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:37.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1756480052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2659227039' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:38 compute-1 nova_compute[230518]: 2025-10-02 12:41:38.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:38 compute-1 ovn_controller[129257]: 2025-10-02T12:41:38Z|00462|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:38 compute-1 nova_compute[230518]: 2025-10-02 12:41:38.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:38 compute-1 nova_compute[230518]: 2025-10-02 12:41:38.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:38 compute-1 ceph-mon[80926]: pgmap v1974: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Oct 02 12:41:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:41:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:41:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:39.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:40 compute-1 ceph-mon[80926]: pgmap v1975: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 719 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:41:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:41.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:41.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:43 compute-1 ceph-mon[80926]: pgmap v1976: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 12:41:43 compute-1 nova_compute[230518]: 2025-10-02 12:41:43.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:43.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:43.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:43 compute-1 nova_compute[230518]: 2025-10-02 12:41:43.533 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408888.5321271, 4d8c4b3b-58c2-4d3d-863c-49b98333b84d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:43 compute-1 nova_compute[230518]: 2025-10-02 12:41:43.533 2 INFO nova.compute.manager [-] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] VM Stopped (Lifecycle Event)
Oct 02 12:41:43 compute-1 nova_compute[230518]: 2025-10-02 12:41:43.671 2 DEBUG nova.compute.manager [None req-ce453aec-8cef-48f3-aa30-59570bb4cc74 - - - - - -] [instance: 4d8c4b3b-58c2-4d3d-863c-49b98333b84d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:43 compute-1 nova_compute[230518]: 2025-10-02 12:41:43.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:44 compute-1 nova_compute[230518]: 2025-10-02 12:41:44.057 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:44 compute-1 nova_compute[230518]: 2025-10-02 12:41:44.058 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:44 compute-1 nova_compute[230518]: 2025-10-02 12:41:44.332 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:41:45 compute-1 ceph-mon[80926]: pgmap v1977: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 12:41:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:45 compute-1 nova_compute[230518]: 2025-10-02 12:41:45.487 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:45 compute-1 nova_compute[230518]: 2025-10-02 12:41:45.488 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:45 compute-1 nova_compute[230518]: 2025-10-02 12:41:45.498 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:41:45 compute-1 nova_compute[230518]: 2025-10-02 12:41:45.499 2 INFO nova.compute.claims [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:41:45 compute-1 podman[273482]: 2025-10-02 12:41:45.851141418 +0000 UTC m=+0.087944066 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 02 12:41:45 compute-1 podman[273483]: 2025-10-02 12:41:45.85342398 +0000 UTC m=+0.092517901 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:41:46 compute-1 nova_compute[230518]: 2025-10-02 12:41:46.401 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208385054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:46 compute-1 nova_compute[230518]: 2025-10-02 12:41:46.859 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:46 compute-1 nova_compute[230518]: 2025-10-02 12:41:46.865 2 DEBUG nova.compute.provider_tree [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:46 compute-1 nova_compute[230518]: 2025-10-02 12:41:46.990 2 DEBUG nova.scheduler.client.report [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.222 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.223 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:41:47 compute-1 ceph-mon[80926]: pgmap v1978: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.5 MiB/s wr, 156 op/s
Oct 02 12:41:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1208385054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:47.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.472 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.472 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.595 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.708 2 DEBUG nova.policy [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3151966e941f4652ba984616bfa760c7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:41:47 compute-1 nova_compute[230518]: 2025-10-02 12:41:47.790 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.111 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.112 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.113 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating image(s)
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.156 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.196 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.236 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.240 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.304 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.305 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.305 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.306 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.343 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.349 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:48 compute-1 nova_compute[230518]: 2025-10-02 12:41:48.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:48 compute-1 ovn_controller[129257]: 2025-10-02T12:41:48Z|00463|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.130 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Successfully created port: 8eb9e971-5920-4103-9ba9-c0846182952d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.164 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.815s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.234 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] resizing rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:41:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:49.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.426 2 DEBUG nova.objects.instance [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'migration_context' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:49 compute-1 ceph-mon[80926]: pgmap v1979: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 787 KiB/s wr, 187 op/s
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.531 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Ensure instance console log exists: /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.532 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:49 compute-1 nova_compute[230518]: 2025-10-02 12:41:49.533 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.350 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Successfully updated port: 8eb9e971-5920-4103-9ba9-c0846182952d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.373 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.374 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.374 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:41:50 compute-1 ceph-mon[80926]: pgmap v1980: 305 pgs: 305 active+clean; 708 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 157 op/s
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.629 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG nova.compute.manager [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG nova.compute.manager [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:41:50 compute-1 nova_compute[230518]: 2025-10-02 12:41:50.831 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:41:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.794 2 DEBUG nova.network.neutron [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.823 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.824 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance network_info: |[{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.824 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.825 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.828 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Start _get_guest_xml network_info=[{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.832 2 WARNING nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.839 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.840 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.846 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.847 2 DEBUG nova.virt.libvirt.host [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.848 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.848 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.849 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.850 2 DEBUG nova.virt.hardware [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:41:51 compute-1 nova_compute[230518]: 2025-10-02 12:41:51.853 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/281551588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.289 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.312 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.316 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:41:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4130622140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.743 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.744 2 DEBUG nova.virt.libvirt.vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.745 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.746 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.747 2 DEBUG nova.objects.instance [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.773 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <uuid>1f9101c6-f4d8-46c7-8884-386f9f08e6fb</uuid>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <name>instance-00000070</name>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:name>tempest-AttachVolumeShelveTestJSON-server-1408399936</nova:name>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:41:51</nova:creationTime>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:user uuid="3151966e941f4652ba984616bfa760c7">tempest-AttachVolumeShelveTestJSON-1943710095-project-member</nova:user>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:project uuid="f7e2edef094b4ba5a56a5ec5ffce911e">tempest-AttachVolumeShelveTestJSON-1943710095</nova:project>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <nova:port uuid="8eb9e971-5920-4103-9ba9-c0846182952d">
Oct 02 12:41:52 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <system>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="serial">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="uuid">1f9101c6-f4d8-46c7-8884-386f9f08e6fb</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </system>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <os>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </os>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <features>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </features>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk">
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config">
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </source>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:41:52 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:43:40:72"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <target dev="tap8eb9e971-59"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/console.log" append="off"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <video>
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </video>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:41:52 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:41:52 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:41:52 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:41:52 compute-1 nova_compute[230518]: </domain>
Oct 02 12:41:52 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.774 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Preparing to wait for external event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.775 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.775 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.776 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.777 2 DEBUG nova.virt.libvirt.vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:41:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.778 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.779 2 DEBUG nova.network.os_vif_util [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.779 2 DEBUG os_vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.781 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eb9e971-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8eb9e971-59, col_values=(('external_ids', {'iface-id': '8eb9e971-5920-4103-9ba9-c0846182952d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:40:72', 'vm-uuid': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:52 compute-1 NetworkManager[44960]: <info>  [1759408912.7897] manager: (tap8eb9e971-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.795 2 INFO os_vif [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.891 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.892 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.892 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No VIF found with MAC fa:16:3e:43:40:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.893 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Using config drive
Oct 02 12:41:52 compute-1 nova_compute[230518]: 2025-10-02 12:41:52.926 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:52 compute-1 ceph-mon[80926]: pgmap v1981: 305 pgs: 305 active+clean; 738 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.2 MiB/s wr, 248 op/s
Oct 02 12:41:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/281551588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.310 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.311 2 DEBUG nova.network.neutron [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.328 2 DEBUG oslo_concurrency.lockutils [req-616fd017-c027-4eb5-9ecc-bc34bb12a7b1 req-8d744f18-86b1-4ec2-866f-2fb262c526b4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.351 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Creating config drive at /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.363 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfi9arr2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.514 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfi9arr2f" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.561 2 DEBUG nova.storage.rbd_utils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] rbd image 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:41:53 compute-1 nova_compute[230518]: 2025-10-02 12:41:53.567 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4130622140' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.024 2 DEBUG oslo_concurrency.processutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config 1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.024 2 INFO nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting local config drive /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb/disk.config because it was imported into RBD.
Oct 02 12:41:55 compute-1 kernel: tap8eb9e971-59: entered promiscuous mode
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.0772] manager: (tap8eb9e971-59): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Oct 02 12:41:55 compute-1 ovn_controller[129257]: 2025-10-02T12:41:55Z|00464|binding|INFO|Claiming lport 8eb9e971-5920-4103-9ba9-c0846182952d for this chassis.
Oct 02 12:41:55 compute-1 ovn_controller[129257]: 2025-10-02T12:41:55Z|00465|binding|INFO|8eb9e971-5920-4103-9ba9-c0846182952d: Claiming fa:16:3e:43:40:72 10.100.0.10
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.086 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.089 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 bound to our chassis
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.093 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:41:55 compute-1 ovn_controller[129257]: 2025-10-02T12:41:55Z|00466|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d up in Southbound
Oct 02 12:41:55 compute-1 ovn_controller[129257]: 2025-10-02T12:41:55Z|00467|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d ovn-installed in OVS
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.105 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56260993-5067-498b-b413-b2a26cc02d54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.106 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385a384c-51 in ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.108 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385a384c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf99257a-226c-475c-b630-8806c3235eec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a759b20d-daa0-4c64-bc2f-1b9befb2ec6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 systemd-machined[188247]: New machine qemu-55-instance-00000070.
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.124 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee58962-c6a8-412e-87eb-43869423f8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 systemd[1]: Started Virtual Machine qemu-55-instance-00000070.
Oct 02 12:41:55 compute-1 systemd-udevd[273849]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.148 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a707992d-bed5-4dfb-90b6-2bd86759c55e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.1531] device (tap8eb9e971-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.1542] device (tap8eb9e971-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.177 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1b61074f-db69-4e7e-9f52-ef2b520b1078]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.1832] manager: (tap385a384c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Oct 02 12:41:55 compute-1 systemd-udevd[273854]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.184 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7312bba2-a12c-4df9-962e-240580b9f695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.234 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5f315c4b-15be-4675-97c0-79d1f8700ccb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.237 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ee71b434-9752-4a6d-9570-c72536a869fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.2650] device (tap385a384c-50): carrier: link connected
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.274 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f3f3d080-aaa6-4a31-882e-9fbc42361eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.293 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ead865a9-db99-4a20-8ee1-c361832a7e41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677882, 'reachable_time': 32970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273879, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.308 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[beace7bd-caad-435f-b50d-bff5b6cf12f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:d461'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677882, 'tstamp': 677882}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273880, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.326 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d479b16-b35c-407c-ab59-45643434244c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385a384c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:d4:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677882, 'reachable_time': 32970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273881, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.367 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9797885a-1506-4d7a-af3f-3efcfe701ed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:41:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:55 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.454 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[457b6dea-48b2-48f2-a424-f3a150b634a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.456 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.457 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.458 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385a384c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 kernel: tap385a384c-50: entered promiscuous mode
Oct 02 12:41:55 compute-1 NetworkManager[44960]: <info>  [1759408915.4614] manager: (tap385a384c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.464 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385a384c-50, col_values=(('external_ids', {'iface-id': '12496c3c-f50d-4104-bfb7-81f1aa24617e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_controller[129257]: 2025-10-02T12:41:55Z|00468|binding|INFO|Releasing lport 12496c3c-f50d-4104-bfb7-81f1aa24617e from this chassis (sb_readonly=0)
Oct 02 12:41:55 compute-1 ceph-mon[80926]: pgmap v1982: 305 pgs: 305 active+clean; 754 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 251 op/s
Oct 02 12:41:55 compute-1 nova_compute[230518]: 2025-10-02 12:41:55.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.489 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.490 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e07c97b-df85-480c-9188-e9a3ee69de56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.491 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/385a384c-5df0-4b04-b928-517a46df04f4.pid.haproxy
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 385a384c-5df0-4b04-b928-517a46df04f4
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:41:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:41:55.492 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'env', 'PROCESS_TAG=haproxy-385a384c-5df0-4b04-b928-517a46df04f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385a384c-5df0-4b04-b928-517a46df04f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:41:55 compute-1 podman[273913]: 2025-10-02 12:41:55.9109901 +0000 UTC m=+0.061379231 container create 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:41:55 compute-1 systemd[1]: Started libpod-conmon-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope.
Oct 02 12:41:55 compute-1 podman[273913]: 2025-10-02 12:41:55.87824678 +0000 UTC m=+0.028635931 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:41:56 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:41:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f06bb73ff4fd07214bf62b4ab474c808902a8097ac5ccfb1095b1198b7d0666/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:41:56 compute-1 podman[273913]: 2025-10-02 12:41:56.162702485 +0000 UTC m=+0.313091666 container init 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:41:56 compute-1 podman[273913]: 2025-10-02 12:41:56.17111903 +0000 UTC m=+0.321508161 container start 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:41:56 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : New worker (273976) forked
Oct 02 12:41:56 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : Loading success.
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.641 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408916.6401796, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.641 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Started (Lifecycle Event)
Oct 02 12:41:56 compute-1 ceph-mon[80926]: pgmap v1983: 305 pgs: 305 active+clean; 764 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Oct 02 12:41:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.673 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408916.6404648, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.673 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Paused (Lifecycle Event)
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.693 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.696 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:56 compute-1 nova_compute[230518]: 2025-10-02 12:41:56.713 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.115 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.614 2 DEBUG nova.compute.manager [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.615 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.615 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.616 2 DEBUG oslo_concurrency.lockutils [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.616 2 DEBUG nova.compute.manager [req-fb9770ab-1ed8-403f-b12b-86dcde3ce72b req-9a33f492-0768-4c3f-8c91-ce37c9bbad15 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Processing event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.617 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.621 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408917.6226275, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.623 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Resumed (Lifecycle Event)
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.628 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance spawned successfully.
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.629 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.647 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.652 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.652 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.653 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.654 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.654 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.655 2 DEBUG nova.virt.libvirt.driver [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.660 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.691 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.718 2 INFO nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 9.61 seconds to spawn the instance on the hypervisor.
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.719 2 DEBUG nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.779 2 INFO nova.compute.manager [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Took 13.19 seconds to build instance.
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:57 compute-1 nova_compute[230518]: 2025-10-02 12:41:57.816 2 DEBUG oslo_concurrency.lockutils [None req-6f40af84-081f-4704-8cfc-43671e2b87d0 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3154034574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3482794670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.566 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.634 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.635 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.804 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.805 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4167MB free_disk=20.69430923461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.805 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.806 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.916 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.917 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:41:58 compute-1 ceph-mon[80926]: pgmap v1984: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 200 op/s
Oct 02 12:41:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3672207254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3482794670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:58 compute-1 nova_compute[230518]: 2025-10-02 12:41:58.987 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:41:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:41:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1881695082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:41:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:41:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:41:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:41:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:41:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:41:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:41:59 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:41:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.430 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.436 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.458 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.506 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.507 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.713 2 DEBUG nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.713 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG oslo_concurrency.lockutils [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.714 2 DEBUG nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:41:59 compute-1 nova_compute[230518]: 2025-10-02 12:41:59.715 2 WARNING nova.compute.manager [req-a99b592c-a7d4-4151-8128-7f885bb4ff94 req-5373baa7-6a31-4028-a801-947ad0145c5b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state None.
Oct 02 12:42:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1881695082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.345 2 DEBUG nova.compute.manager [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG nova.compute.manager [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.346 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:00 compute-1 nova_compute[230518]: 2025-10-02 12:42:00.503 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:01 compute-1 nova_compute[230518]: 2025-10-02 12:42:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:01 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:01 compute-1 ceph-mon[80926]: pgmap v1985: 305 pgs: 305 active+clean; 779 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Oct 02 12:42:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:01 compute-1 nova_compute[230518]: 2025-10-02 12:42:01.870 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:01 compute-1 nova_compute[230518]: 2025-10-02 12:42:01.870 2 DEBUG nova.network.neutron [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:01 compute-1 nova_compute[230518]: 2025-10-02 12:42:01.893 2 DEBUG oslo_concurrency.lockutils [req-d0a65987-424e-45e6-944e-e8ce8999096f req-bf64ae4d-adb0-41dd-8f0b-0bed84f593c0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:01.901 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:01 compute-1 nova_compute[230518]: 2025-10-02 12:42:01.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:01.903 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:42:02 compute-1 nova_compute[230518]: 2025-10-02 12:42:02.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:03 compute-1 nova_compute[230518]: 2025-10-02 12:42:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:03 compute-1 nova_compute[230518]: 2025-10-02 12:42:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2350922912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:03 compute-1 nova_compute[230518]: 2025-10-02 12:42:03.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:03.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:03.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:04 compute-1 nova_compute[230518]: 2025-10-02 12:42:04.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:04 compute-1 ceph-mon[80926]: pgmap v1986: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 267 op/s
Oct 02 12:42:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3486233126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:04 compute-1 podman[274032]: 2025-10-02 12:42:04.852033899 +0000 UTC m=+0.089001129 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:04 compute-1 podman[274031]: 2025-10-02 12:42:04.896822897 +0000 UTC m=+0.134691246 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:42:05 compute-1 nova_compute[230518]: 2025-10-02 12:42:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:05 compute-1 nova_compute[230518]: 2025-10-02 12:42:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:42:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:05.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:05 compute-1 nova_compute[230518]: 2025-10-02 12:42:05.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:05 compute-1 ceph-mon[80926]: pgmap v1987: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.8 MiB/s wr, 210 op/s
Oct 02 12:42:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:06 compute-1 ceph-mon[80926]: pgmap v1988: 305 pgs: 305 active+clean; 787 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:42:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2563361418' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3270305858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:07 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:07 compute-1 nova_compute[230518]: 2025-10-02 12:42:07.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.307 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.307 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:08 compute-1 nova_compute[230518]: 2025-10-02 12:42:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:08.905 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:09.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:09 compute-1 ceph-mon[80926]: pgmap v1989: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 176 op/s
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.315 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.316 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.346 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.383 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.400 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.401 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.410 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.410 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.415 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.415 2 INFO nova.compute.claims [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:42:10 compute-1 nova_compute[230518]: 2025-10-02 12:42:10.586 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:10 compute-1 ceph-mon[80926]: pgmap v1990: 305 pgs: 305 active+clean; 789 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 83 KiB/s wr, 141 op/s
Oct 02 12:42:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2821617500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.101 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.109 2 DEBUG nova.compute.provider_tree [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.126 2 DEBUG nova.scheduler.client.report [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.146 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.147 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.185 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.186 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.210 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.243 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.321 2 DEBUG nova.policy [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.369 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.370 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.371 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating image(s)
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.402 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:11 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:11.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.441 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.469 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.473 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.536 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.537 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.538 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.538 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.564 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:11 compute-1 nova_compute[230518]: 2025-10-02 12:42:11.567 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2d140186-e66c-4d55-b8df-5bb4214206d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2821617500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:12 compute-1 nova_compute[230518]: 2025-10-02 12:42:12.341 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Successfully created port: 76dd1a78-6a32-43c7-8633-51573580bfc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:42:12 compute-1 nova_compute[230518]: 2025-10-02 12:42:12.395 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:12 compute-1 nova_compute[230518]: 2025-10-02 12:42:12.579 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2d140186-e66c-4d55-b8df-5bb4214206d7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:12 compute-1 nova_compute[230518]: 2025-10-02 12:42:12.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:12 compute-1 nova_compute[230518]: 2025-10-02 12:42:12.866 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.205 2 DEBUG nova.objects.instance [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.221 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.222 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Ensure instance console log exists: /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.222 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.223 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.223 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.312 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Successfully updated port: 76dd1a78-6a32-43c7-8633-51573580bfc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.340 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:42:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:13 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.444 2 DEBUG nova.compute.manager [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-changed-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.444 2 DEBUG nova.compute.manager [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Refreshing instance network info cache due to event network-changed-76dd1a78-6a32-43c7-8633-51573580bfc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.445 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:13 compute-1 ceph-mon[80926]: pgmap v1991: 305 pgs: 305 active+clean; 800 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 162 op/s
Oct 02 12:42:13 compute-1 nova_compute[230518]: 2025-10-02 12:42:13.483 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:42:13 compute-1 ovn_controller[129257]: 2025-10-02T12:42:13Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:40:72 10.100.0.10
Oct 02 12:42:13 compute-1 ovn_controller[129257]: 2025-10-02T12:42:13Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:40:72 10.100.0.10
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.396 2 DEBUG nova.network.neutron [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.432 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.433 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance network_info: |[{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.434 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.434 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Refreshing network info cache for port 76dd1a78-6a32-43c7-8633-51573580bfc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.440 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start _get_guest_xml network_info=[{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.446 2 WARNING nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.453 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.454 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.457 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.458 2 DEBUG nova.virt.libvirt.host [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.460 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.461 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.462 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.462 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.463 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.463 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.464 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.464 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.465 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.466 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.467 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.467 2 DEBUG nova.virt.hardware [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.473 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1654915291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.919 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.951 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:14 compute-1 nova_compute[230518]: 2025-10-02 12:42:14.956 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:15 compute-1 ceph-mon[80926]: pgmap v1992: 305 pgs: 305 active+clean; 817 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 885 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Oct 02 12:42:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:42:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4124274956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.433 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.435 2 DEBUG nova.virt.libvirt.vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:11Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.435 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.437 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:15 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:15.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.438 2 DEBUG nova.objects.instance [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.468 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <uuid>2d140186-e66c-4d55-b8df-5bb4214206d7</uuid>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <name>instance-00000071</name>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:name>tempest-DeleteServersTestJSON-server-225030278</nova:name>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:42:14</nova:creationTime>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <nova:port uuid="76dd1a78-6a32-43c7-8633-51573580bfc9">
Oct 02 12:42:15 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <system>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="serial">2d140186-e66c-4d55-b8df-5bb4214206d7</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="uuid">2d140186-e66c-4d55-b8df-5bb4214206d7</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </system>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <os>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </os>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <features>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </features>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/2d140186-e66c-4d55-b8df-5bb4214206d7_disk">
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </source>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config">
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </source>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:42:15 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:53:d1:89"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <target dev="tap76dd1a78-6a"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/console.log" append="off"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <video>
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </video>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:42:15 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:42:15 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:42:15 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:42:15 compute-1 nova_compute[230518]: </domain>
Oct 02 12:42:15 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Preparing to wait for external event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.469 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.470 2 DEBUG nova.virt.libvirt.vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:11Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.470 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.471 2 DEBUG nova.network.os_vif_util [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.471 2 DEBUG os_vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76dd1a78-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76dd1a78-6a, col_values=(('external_ids', {'iface-id': '76dd1a78-6a32-43c7-8633-51573580bfc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:d1:89', 'vm-uuid': '2d140186-e66c-4d55-b8df-5bb4214206d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:15 compute-1 NetworkManager[44960]: <info>  [1759408935.4831] manager: (tap76dd1a78-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.492 2 INFO os_vif [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a')
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.604 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.605 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.605 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:53:d1:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.607 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Using config drive
Oct 02 12:42:15 compute-1 nova_compute[230518]: 2025-10-02 12:42:15.641 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.171 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Creating config drive at /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.186 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3te3f5ub execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.331 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3te3f5ub" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.374 2 DEBUG nova.storage.rbd_utils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.378 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1654915291' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4124274956' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:16 compute-1 podman[274382]: 2025-10-02 12:42:16.845660234 +0000 UTC m=+0.069897839 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:42:16 compute-1 podman[274383]: 2025-10-02 12:42:16.845643283 +0000 UTC m=+0.070466877 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.876 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updated VIF entry in instance network info cache for port 76dd1a78-6a32-43c7-8633-51573580bfc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.877 2 DEBUG nova.network.neutron [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [{"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:16 compute-1 nova_compute[230518]: 2025-10-02 12:42:16.904 2 DEBUG oslo_concurrency.lockutils [req-c7d2fb4e-22b6-4ed5-80d8-a74cabf64aba req-68e8346c-aa09-4491-875b-71bbf03255ad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2d140186-e66c-4d55-b8df-5bb4214206d7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.424 2 DEBUG oslo_concurrency.processutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config 2d140186-e66c-4d55-b8df-5bb4214206d7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.425 2 INFO nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deleting local config drive /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7/disk.config because it was imported into RBD.
Oct 02 12:42:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:17 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:17 compute-1 kernel: tap76dd1a78-6a: entered promiscuous mode
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.4903] manager: (tap76dd1a78-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 ovn_controller[129257]: 2025-10-02T12:42:17Z|00469|binding|INFO|Claiming lport 76dd1a78-6a32-43c7-8633-51573580bfc9 for this chassis.
Oct 02 12:42:17 compute-1 ovn_controller[129257]: 2025-10-02T12:42:17Z|00470|binding|INFO|76dd1a78-6a32-43c7-8633-51573580bfc9: Claiming fa:16:3e:53:d1:89 10.100.0.10
Oct 02 12:42:17 compute-1 ovn_controller[129257]: 2025-10-02T12:42:17Z|00471|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 ovn-installed in OVS
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 systemd-udevd[274432]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.5423] device (tap76dd1a78-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.5443] device (tap76dd1a78-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:42:17 compute-1 systemd-machined[188247]: New machine qemu-56-instance-00000071.
Oct 02 12:42:17 compute-1 systemd[1]: Started Virtual Machine qemu-56-instance-00000071.
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.565 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d1:89 10.100.0.10'], port_security=['fa:16:3e:53:d1:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d140186-e66c-4d55-b8df-5bb4214206d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=76dd1a78-6a32-43c7-8633-51573580bfc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:17 compute-1 ovn_controller[129257]: 2025-10-02T12:42:17Z|00472|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 up in Southbound
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.566 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 76dd1a78-6a32-43c7-8633-51573580bfc9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.568 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.578 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[75a73259-a3d1-4d35-98af-3efab4f236b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.580 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.581 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.581 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[abbede67-ceff-4299-aae5-dd5c3077434d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.582 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd92ea71-bd0d-4a32-914b-a647f3a74fd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.595 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[96a571d1-c3f7-49cb-b99b-26445a130f5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.611 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8cecb753-f805-4fb4-8f20-3bbb26f97bc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ceph-mon[80926]: pgmap v1993: 305 pgs: 305 active+clean; 844 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.647 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2b975e65-cf0a-4bc2-b3a0-627f6b1c06f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.6534] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.655 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c46a747-59f9-4489-8f5f-96267b17a97b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.690 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[084d5906-5fd1-4a8b-8b0c-877ecd5912af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.693 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fd4508-b6d4-49ed-9a7b-f2a3cd64f0ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.7136] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.718 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8db8357a-7622-49f7-a5d5-baa2a51dd0d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.736 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4323d6b8-f84f-4691-b2ee-8e860e88c1fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680127, 'reachable_time': 31841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274468, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.750 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cbea308a-d2e2-4d82-abbc-3abf513dcc48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680127, 'tstamp': 680127}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274469, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.770 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbdee6c-6ef5-4d23-bd05-c34dc7c6d08e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680127, 'reachable_time': 31841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274470, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.798 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1824d677-df67-497a-8a52-f407ddae0575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.865 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a23a67-a535-4e71-9adb-bb20084f852a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.868 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:17 compute-1 NetworkManager[44960]: <info>  [1759408937.8703] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Oct 02 12:42:17 compute-1 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.872 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 ovn_controller[129257]: 2025-10-02T12:42:17Z|00473|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:42:17 compute-1 nova_compute[230518]: 2025-10-02 12:42:17.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.889 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.890 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12d0c2cb-77b1-433f-98e0-defec1c63264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.890 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:42:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:17.892 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:18 compute-1 podman[274544]: 2025-10-02 12:42:18.273733699 +0000 UTC m=+0.029781028 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:42:18 compute-1 podman[274544]: 2025-10-02 12:42:18.577532621 +0000 UTC m=+0.333579990 container create 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.622 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.6219187, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Started (Lifecycle Event)
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.626 2 DEBUG nova.compute.manager [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG oslo_concurrency.lockutils [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.627 2 DEBUG nova.compute.manager [req-ce53124b-f8c9-4183-b639-c22f8ce7945d req-68b0acef-2782-4a08-84ac-811f03d1f9af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Processing event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.628 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.631 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.634 2 INFO nova.virt.libvirt.driver [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance spawned successfully.
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.634 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.690 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.694 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.709 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.710 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.711 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.711 2 DEBUG nova.virt.libvirt.driver [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.744 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.744 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.624742, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:18 compute-1 nova_compute[230518]: 2025-10-02 12:42:18.744 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Paused (Lifecycle Event)
Oct 02 12:42:18 compute-1 systemd[1]: Started libpod-conmon-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope.
Oct 02 12:42:18 compute-1 ceph-mon[80926]: pgmap v1994: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Oct 02 12:42:18 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:42:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac4e0803a26e8010599196fbef18d640c314c793171bc446bcaa2ed335a9319d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:42:18 compute-1 podman[274544]: 2025-10-02 12:42:18.940113592 +0000 UTC m=+0.696160921 container init 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:42:18 compute-1 podman[274544]: 2025-10-02 12:42:18.949180588 +0000 UTC m=+0.705227907 container start 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 12:42:18 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : New worker (274566) forked
Oct 02 12:42:18 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : Loading success.
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.091 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.097 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408938.6313567, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Resumed (Lifecycle Event)
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.272 2 INFO nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 7.90 seconds to spawn the instance on the hypervisor.
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.273 2 DEBUG nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.288 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.356 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.412 2 INFO nova.compute.manager [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 9.02 seconds to build instance.
Oct 02 12:42:19 compute-1 nova_compute[230518]: 2025-10-02 12:42:19.441 2 DEBUG oslo_concurrency.lockutils [None req-787b91d1-5306-4948-9fe7-5e68c55b18ec ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:19.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.229 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.229 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.230 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.231 2 INFO nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Terminating instance
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.231 2 DEBUG nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:42:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2044087468' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 kernel: tap76dd1a78-6a (unregistering): left promiscuous mode
Oct 02 12:42:20 compute-1 NetworkManager[44960]: <info>  [1759408940.5579] device (tap76dd1a78-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:20 compute-1 ovn_controller[129257]: 2025-10-02T12:42:20Z|00474|binding|INFO|Releasing lport 76dd1a78-6a32-43c7-8633-51573580bfc9 from this chassis (sb_readonly=0)
Oct 02 12:42:20 compute-1 ovn_controller[129257]: 2025-10-02T12:42:20Z|00475|binding|INFO|Setting lport 76dd1a78-6a32-43c7-8633-51573580bfc9 down in Southbound
Oct 02 12:42:20 compute-1 ovn_controller[129257]: 2025-10-02T12:42:20Z|00476|binding|INFO|Removing iface tap76dd1a78-6a ovn-installed in OVS
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.603 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:d1:89 10.100.0.10'], port_security=['fa:16:3e:53:d1:89 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2d140186-e66c-4d55-b8df-5bb4214206d7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=76dd1a78-6a32-43c7-8633-51573580bfc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.604 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 76dd1a78-6a32-43c7-8633-51573580bfc9 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:42:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.606 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.607 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e1802ada-ce38-42a1-bdd1-0c22419a6c5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:20.607 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:42:20 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct 02 12:42:20 compute-1 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000071.scope: Consumed 2.507s CPU time.
Oct 02 12:42:20 compute-1 systemd-machined[188247]: Machine qemu-56-instance-00000071 terminated.
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.667 2 INFO nova.virt.libvirt.driver [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Instance destroyed successfully.
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.668 2 DEBUG nova.objects.instance [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'resources' on Instance uuid 2d140186-e66c-4d55-b8df-5bb4214206d7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.722 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.723 2 DEBUG oslo_concurrency.lockutils [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.723 2 DEBUG nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.723 2 WARNING nova.compute.manager [req-0ce15241-1cb6-47ae-9e71-7fbb90cd47c4 req-97dca8b8-6fb7-4c21-beb2-378052ee3ae9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received unexpected event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with vm_state active and task_state deleting.
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.724 2 DEBUG nova.virt.libvirt.vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:42:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-225030278',display_name='tempest-DeleteServersTestJSON-server-225030278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-225030278',id=113,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:42:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-ao62a00r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:19Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=2d140186-e66c-4d55-b8df-5bb4214206d7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG nova.network.os_vif_util [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "76dd1a78-6a32-43c7-8633-51573580bfc9", "address": "fa:16:3e:53:d1:89", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76dd1a78-6a", "ovs_interfaceid": "76dd1a78-6a32-43c7-8633-51573580bfc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG nova.network.os_vif_util [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.725 2 DEBUG os_vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76dd1a78-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:20 compute-1 nova_compute[230518]: 2025-10-02 12:42:20.733 2 INFO os_vif [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:d1:89,bridge_name='br-int',has_traffic_filtering=True,id=76dd1a78-6a32-43c7-8633-51573580bfc9,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76dd1a78-6a')
Oct 02 12:42:20 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:20 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [NOTICE]   (274564) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:20 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [WARNING]  (274564) : Exiting Master process...
Oct 02 12:42:20 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [ALERT]    (274564) : Current worker (274566) exited with code 143 (Terminated)
Oct 02 12:42:20 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[274559]: [WARNING]  (274564) : All workers exited. Exiting... (0)
Oct 02 12:42:20 compute-1 systemd[1]: libpod-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope: Deactivated successfully.
Oct 02 12:42:20 compute-1 podman[274605]: 2025-10-02 12:42:20.82748971 +0000 UTC m=+0.135450820 container died 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:42:21 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-ac4e0803a26e8010599196fbef18d640c314c793171bc446bcaa2ed335a9319d-merged.mount: Deactivated successfully.
Oct 02 12:42:21 compute-1 podman[274605]: 2025-10-02 12:42:21.116044844 +0000 UTC m=+0.424005964 container cleanup 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:42:21 compute-1 systemd[1]: libpod-conmon-145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7.scope: Deactivated successfully.
Oct 02 12:42:21 compute-1 podman[274654]: 2025-10-02 12:42:21.268042303 +0000 UTC m=+0.126040264 container remove 145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.275 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef7e5ba-306c-4f06-b583-230d3f2f41ce]: (4, ('Thu Oct  2 12:42:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7)\n145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7\nThu Oct  2 12:42:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7)\n145c819a456b7a95c26b7c165f7c7ff9599ab85c5a7c40918d228fd8a7d020f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.277 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9cfc55-91a2-4e94-ab75-078c16ec55a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.278 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:21 compute-1 nova_compute[230518]: 2025-10-02 12:42:21.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:21 compute-1 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:42:21 compute-1 nova_compute[230518]: 2025-10-02 12:42:21.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2fec6b61-a647-459a-9e95-b7047fb41398]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 nova_compute[230518]: 2025-10-02 12:42:21.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.364 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3baa60e5-98d1-4f0f-961f-2a56a3c4742f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.365 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c24be682-42a2-4414-8d4b-d7103da11619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93f21ff8-c96a-4672-aa7e-f04ced098281]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680120, 'reachable_time': 34262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274669, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.382 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:21.382 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[22ba385a-a07c-4d59-94d5-b13e8ebe6b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:21 compute-1 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:42:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:21 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:21 compute-1 ceph-mon[80926]: pgmap v1995: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Oct 02 12:42:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:22 compute-1 ceph-mon[80926]: pgmap v1996: 305 pgs: 305 active+clean; 816 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Oct 02 12:42:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e268 e268: 3 total, 3 up, 3 in
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.339 2 INFO nova.virt.libvirt.driver [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deleting instance files /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7_del
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.339 2 INFO nova.virt.libvirt.driver [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deletion of /var/lib/nova/instances/2d140186-e66c-4d55-b8df-5bb4214206d7_del complete
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.385 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.385 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.386 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-unplugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG oslo_concurrency.lockutils [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.387 2 DEBUG nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] No waiting events found dispatching network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:23 compute-1 nova_compute[230518]: 2025-10-02 12:42:23.388 2 WARNING nova.compute.manager [req-f2cdefcb-ef0c-4344-842c-9490987cd5ff req-186bfd59-0f1b-45f3-97d6-05fdb7b906e2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received unexpected event network-vif-plugged-76dd1a78-6a32-43c7-8633-51573580bfc9 for instance with vm_state active and task_state deleting.
Oct 02 12:42:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:24 compute-1 ceph-mon[80926]: osdmap e268: 3 total, 3 up, 3 in
Oct 02 12:42:24 compute-1 nova_compute[230518]: 2025-10-02 12:42:24.122 2 INFO nova.compute.manager [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 3.89 seconds to destroy the instance on the hypervisor.
Oct 02 12:42:24 compute-1 nova_compute[230518]: 2025-10-02 12:42:24.123 2 DEBUG oslo.service.loopingcall [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:42:24 compute-1 nova_compute[230518]: 2025-10-02 12:42:24.124 2 DEBUG nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:42:24 compute-1 nova_compute[230518]: 2025-10-02 12:42:24.124 2 DEBUG nova.network.neutron [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:42:25 compute-1 ceph-mon[80926]: pgmap v1998: 305 pgs: 305 active+clean; 808 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 233 op/s
Oct 02 12:42:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4049197200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3133103057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:25 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:25.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:25 compute-1 nova_compute[230518]: 2025-10-02 12:42:25.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:27 compute-1 ceph-mon[80926]: pgmap v1999: 305 pgs: 305 active+clean; 774 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 235 op/s
Oct 02 12:42:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:27 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:27 compute-1 nova_compute[230518]: 2025-10-02 12:42:27.559 2 DEBUG nova.network.neutron [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:27 compute-1 nova_compute[230518]: 2025-10-02 12:42:27.641 2 DEBUG nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Received event network-vif-deleted-76dd1a78-6a32-43c7-8633-51573580bfc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:27 compute-1 nova_compute[230518]: 2025-10-02 12:42:27.641 2 INFO nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Neutron deleted interface 76dd1a78-6a32-43c7-8633-51573580bfc9; detaching it from the instance and deleting it from the info cache
Oct 02 12:42:27 compute-1 nova_compute[230518]: 2025-10-02 12:42:27.641 2 DEBUG nova.network.neutron [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.004 2 INFO nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Took 3.88 seconds to deallocate network for instance.
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.013 2 DEBUG nova.compute.manager [req-a7a91427-6a18-4af0-b537-81398ca96586 req-ce4d644e-6457-47d0-ab7d-02c29df47141 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Detach interface failed, port_id=76dd1a78-6a32-43c7-8633-51573580bfc9, reason: Instance 2d140186-e66c-4d55-b8df-5bb4214206d7 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.238 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.239 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:28 compute-1 nova_compute[230518]: 2025-10-02 12:42:28.756 2 DEBUG oslo_concurrency.processutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/104167137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:29 compute-1 nova_compute[230518]: 2025-10-02 12:42:29.222 2 DEBUG oslo_concurrency.processutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:29 compute-1 nova_compute[230518]: 2025-10-02 12:42:29.231 2 DEBUG nova.compute.provider_tree [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:29 compute-1 ceph-mon[80926]: pgmap v2000: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/104167137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:29 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:29 compute-1 nova_compute[230518]: 2025-10-02 12:42:29.686 2 DEBUG nova.scheduler.client.report [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:30 compute-1 nova_compute[230518]: 2025-10-02 12:42:30.064 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:30 compute-1 sudo[274693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:30 compute-1 sudo[274693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-1 sudo[274693]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-1 sudo[274718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:42:30 compute-1 sudo[274718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-1 sudo[274718]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-1 sudo[274743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:30 compute-1 sudo[274743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-1 sudo[274743]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-1 sudo[274768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:42:30 compute-1 sudo[274768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:30 compute-1 nova_compute[230518]: 2025-10-02 12:42:30.554 2 INFO nova.scheduler.client.report [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocations for instance 2d140186-e66c-4d55-b8df-5bb4214206d7
Oct 02 12:42:30 compute-1 nova_compute[230518]: 2025-10-02 12:42:30.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:30 compute-1 sudo[274768]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:30 compute-1 nova_compute[230518]: 2025-10-02 12:42:30.990 2 DEBUG oslo_concurrency.lockutils [None req-166bd8ba-53ea-40a7-8fd2-4e90d01f3853 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "2d140186-e66c-4d55-b8df-5bb4214206d7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 e269: 3 total, 3 up, 3 in
Oct 02 12:42:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:31 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:31 compute-1 ceph-mon[80926]: pgmap v2001: 305 pgs: 305 active+clean; 662 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 228 op/s
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:42:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:42:31 compute-1 ceph-mon[80926]: osdmap e269: 3 total, 3 up, 3 in
Oct 02 12:42:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:32 compute-1 ceph-mon[80926]: pgmap v2003: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 1.1 MiB/s wr, 141 op/s
Oct 02 12:42:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/437680059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:33 compute-1 nova_compute[230518]: 2025-10-02 12:42:33.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:33 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:34 compute-1 nova_compute[230518]: 2025-10-02 12:42:34.159 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:34 compute-1 nova_compute[230518]: 2025-10-02 12:42:34.159 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:34 compute-1 nova_compute[230518]: 2025-10-02 12:42:34.160 2 INFO nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shelving
Oct 02 12:42:34 compute-1 nova_compute[230518]: 2025-10-02 12:42:34.187 2 DEBUG nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:42:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:42:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:42:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:34 compute-1 ceph-mon[80926]: pgmap v2004: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1011 KiB/s wr, 168 op/s
Oct 02 12:42:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:35 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:35 compute-1 nova_compute[230518]: 2025-10-02 12:42:35.666 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408940.6654503, 2d140186-e66c-4d55-b8df-5bb4214206d7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:35 compute-1 nova_compute[230518]: 2025-10-02 12:42:35.667 2 INFO nova.compute.manager [-] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] VM Stopped (Lifecycle Event)
Oct 02 12:42:35 compute-1 nova_compute[230518]: 2025-10-02 12:42:35.683 2 DEBUG nova.compute.manager [None req-928ec4c7-e91f-40c7-ad2c-1cc810f87b65 - - - - - -] [instance: 2d140186-e66c-4d55-b8df-5bb4214206d7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:35 compute-1 nova_compute[230518]: 2025-10-02 12:42:35.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:35 compute-1 podman[274825]: 2025-10-02 12:42:35.792666843 +0000 UTC m=+0.045775320 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:42:35 compute-1 podman[274824]: 2025-10-02 12:42:35.826099315 +0000 UTC m=+0.080666248 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:42:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:42:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1733491597' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:42:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/207719170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:36 compute-1 kernel: tap8eb9e971-59 (unregistering): left promiscuous mode
Oct 02 12:42:36 compute-1 NetworkManager[44960]: <info>  [1759408956.7228] device (tap8eb9e971-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:42:36 compute-1 nova_compute[230518]: 2025-10-02 12:42:36.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:36 compute-1 ovn_controller[129257]: 2025-10-02T12:42:36Z|00477|binding|INFO|Releasing lport 8eb9e971-5920-4103-9ba9-c0846182952d from this chassis (sb_readonly=0)
Oct 02 12:42:36 compute-1 ovn_controller[129257]: 2025-10-02T12:42:36Z|00478|binding|INFO|Setting lport 8eb9e971-5920-4103-9ba9-c0846182952d down in Southbound
Oct 02 12:42:36 compute-1 ovn_controller[129257]: 2025-10-02T12:42:36Z|00479|binding|INFO|Removing iface tap8eb9e971-59 ovn-installed in OVS
Oct 02 12:42:36 compute-1 nova_compute[230518]: 2025-10-02 12:42:36.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.740 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:40:72 10.100.0.10'], port_security=['fa:16:3e:43:40:72 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1f9101c6-f4d8-46c7-8884-386f9f08e6fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385a384c-5df0-4b04-b928-517a46df04f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7e2edef094b4ba5a56a5ec5ffce911e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '041f6b5e-0e14-4ae5-9597-3a584e6f87e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5110437-1084-431d-86cb-6ad2d219bdc1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8eb9e971-5920-4103-9ba9-c0846182952d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:42:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.741 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8eb9e971-5920-4103-9ba9-c0846182952d in datapath 385a384c-5df0-4b04-b928-517a46df04f4 unbound from our chassis
Oct 02 12:42:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.743 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385a384c-5df0-4b04-b928-517a46df04f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:42:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.745 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba95bf41-5ab7-45b1-b3cb-5d0a0076fd25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:36.746 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 namespace which is not needed anymore
Oct 02 12:42:36 compute-1 nova_compute[230518]: 2025-10-02 12:42:36.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:36 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct 02 12:42:36 compute-1 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000070.scope: Consumed 14.924s CPU time.
Oct 02 12:42:36 compute-1 systemd-machined[188247]: Machine qemu-55-instance-00000070 terminated.
Oct 02 12:42:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : haproxy version is 2.8.14-c23fe91
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [NOTICE]   (273974) : path to executable is /usr/sbin/haproxy
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : Exiting Master process...
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : Exiting Master process...
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [ALERT]    (273974) : Current worker (273976) exited with code 143 (Terminated)
Oct 02 12:42:36 compute-1 neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4[273945]: [WARNING]  (273974) : All workers exited. Exiting... (0)
Oct 02 12:42:36 compute-1 systemd[1]: libpod-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope: Deactivated successfully.
Oct 02 12:42:36 compute-1 podman[274892]: 2025-10-02 12:42:36.894787469 +0000 UTC m=+0.057016354 container died 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:42:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0-userdata-shm.mount: Deactivated successfully.
Oct 02 12:42:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-3f06bb73ff4fd07214bf62b4ab474c808902a8097ac5ccfb1095b1198b7d0666-merged.mount: Deactivated successfully.
Oct 02 12:42:36 compute-1 podman[274892]: 2025-10-02 12:42:36.933060692 +0000 UTC m=+0.095289567 container cleanup 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:42:36 compute-1 systemd[1]: libpod-conmon-1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0.scope: Deactivated successfully.
Oct 02 12:42:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e270 e270: 3 total, 3 up, 3 in
Oct 02 12:42:37 compute-1 ceph-mon[80926]: pgmap v2005: 305 pgs: 305 active+clean; 629 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 367 KiB/s wr, 162 op/s
Oct 02 12:42:37 compute-1 podman[274924]: 2025-10-02 12:42:37.062137271 +0000 UTC m=+0.104072243 container remove 1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.069 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[026bbaf7-b94c-471a-86df-d6f7b72094c3]: (4, ('Thu Oct  2 12:42:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0)\n1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0\nThu Oct  2 12:42:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 (1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0)\n1ad8cc8c7815fea4a6e1e9206536f8d22834d2428546c3e045c4f6cc6893ceb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0df22d95-58e1-4709-92aa-340ecd6ccd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.072 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385a384c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:37 compute-1 kernel: tap385a384c-50: left promiscuous mode
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e24b907-7a25-4693-98ae-ac2661a7d916]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.132 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[188825b2-67ff-4e76-87a7-2ea214a0f0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.135 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[55a7e357-d39c-4c86-aa62-268cca869667]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[410411e5-c3ba-42bd-9504-006628a53cd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677873, 'reachable_time': 21887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274954, 'error': None, 'target': 'ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 systemd[1]: run-netns-ovnmeta\x2d385a384c\x2d5df0\x2d4b04\x2db928\x2d517a46df04f4.mount: Deactivated successfully.
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.159 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385a384c-5df0-4b04-b928-517a46df04f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:42:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:42:37.159 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f955be6c-c4b6-4204-888f-a30e0482a35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.203 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance shutdown successfully after 3 seconds.
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.208 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.208 2 DEBUG nova.objects.instance [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'numa_topology' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.400 2 DEBUG nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.401 2 DEBUG oslo_concurrency.lockutils [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.402 2 DEBUG nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.402 2 WARNING nova.compute.manager [req-210568dd-a45d-4b38-b45c-e18863be4078 req-a9d60b2e-8bfa-4a75-8b3a-a466647c949b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-unplugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state shelving.
Oct 02 12:42:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:37 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.541 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Beginning cold snapshot process
Oct 02 12:42:37 compute-1 sudo[274955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:42:37 compute-1 sudo[274955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:37 compute-1 sudo[274955]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:37 compute-1 sudo[274980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:42:37 compute-1 sudo[274980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:42:37 compute-1 sudo[274980]: pam_unix(sudo:session): session closed for user root
Oct 02 12:42:37 compute-1 nova_compute[230518]: 2025-10-02 12:42:37.741 2 DEBUG nova.virt.libvirt.imagebackend [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 12:42:38 compute-1 nova_compute[230518]: 2025-10-02 12:42:38.041 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(54439c570f6d4146b808add75210b9a8) on rbd image(1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:42:38 compute-1 ceph-mon[80926]: osdmap e270: 3 total, 3 up, 3 in
Oct 02 12:42:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:38 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:42:38 compute-1 nova_compute[230518]: 2025-10-02 12:42:38.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:39 compute-1 ceph-mon[80926]: pgmap v2007: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 214 KiB/s wr, 206 op/s
Oct 02 12:42:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e271 e271: 3 total, 3 up, 3 in
Oct 02 12:42:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.513 2 DEBUG nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.513 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG oslo_concurrency.lockutils [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.514 2 DEBUG nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] No waiting events found dispatching network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.514 2 WARNING nova.compute.manager [req-8c1163d8-a2db-40f6-b51d-968a5254d067 req-5520687f-26c0-49b4-89c9-aae25bfebb1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received unexpected event network-vif-plugged-8eb9e971-5920-4103-9ba9-c0846182952d for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.543 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] cloning vms/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk@54439c570f6d4146b808add75210b9a8 to images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:42:39 compute-1 nova_compute[230518]: 2025-10-02 12:42:39.919 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] flattening images/c37f4151-ac68-47f8-adfa-bd0c85e4c75d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:42:40 compute-1 ceph-mon[80926]: osdmap e271: 3 total, 3 up, 3 in
Oct 02 12:42:40 compute-1 nova_compute[230518]: 2025-10-02 12:42:40.309 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] removing snapshot(54439c570f6d4146b808add75210b9a8) on rbd image(1f9101c6-f4d8-46c7-8884-386f9f08e6fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:42:40 compute-1 nova_compute[230518]: 2025-10-02 12:42:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e272 e272: 3 total, 3 up, 3 in
Oct 02 12:42:41 compute-1 nova_compute[230518]: 2025-10-02 12:42:41.443 2 DEBUG nova.storage.rbd_utils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] creating snapshot(snap) on rbd image(c37f4151-ac68-47f8-adfa-bd0c85e4c75d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 12:42:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:41.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:41 compute-1 ceph-mon[80926]: pgmap v2009: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 640 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 213 KiB/s wr, 146 op/s
Oct 02 12:42:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2744660793' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3813736802' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:42 compute-1 ceph-mon[80926]: osdmap e272: 3 total, 3 up, 3 in
Oct 02 12:42:42 compute-1 ceph-mon[80926]: pgmap v2011: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 587 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.2 MiB/s wr, 197 op/s
Oct 02 12:42:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e273 e273: 3 total, 3 up, 3 in
Oct 02 12:42:43 compute-1 nova_compute[230518]: 2025-10-02 12:42:43.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:43.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:43 compute-1 ceph-mon[80926]: osdmap e273: 3 total, 3 up, 3 in
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.077 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Snapshot image upload complete
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.077 2 DEBUG nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:45 compute-1 ceph-mon[80926]: pgmap v2013: 305 pgs: 305 active+clean; 560 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 9.2 MiB/s wr, 271 op/s
Oct 02 12:42:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4163876452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.329 2 INFO nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Shelve offloading
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.338 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.338 2 DEBUG nova.compute.manager [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.341 2 DEBUG nova.network.neutron [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:42:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:42:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:45 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:45 compute-1 nova_compute[230518]: 2025-10-02 12:42:45.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e274 e274: 3 total, 3 up, 3 in
Oct 02 12:42:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2947320957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:46 compute-1 ceph-mon[80926]: osdmap e274: 3 total, 3 up, 3 in
Oct 02 12:42:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:47 compute-1 nova_compute[230518]: 2025-10-02 12:42:47.231 2 DEBUG nova.network.neutron [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:47 compute-1 nova_compute[230518]: 2025-10-02 12:42:47.293 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:47 compute-1 ceph-mon[80926]: pgmap v2014: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 10 MiB/s wr, 301 op/s
Oct 02 12:42:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e275 e275: 3 total, 3 up, 3 in
Oct 02 12:42:47 compute-1 podman[275147]: 2025-10-02 12:42:47.819523603 +0000 UTC m=+0.063654603 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:42:47 compute-1 podman[275148]: 2025-10-02 12:42:47.82454302 +0000 UTC m=+0.069162396 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-1 ceph-mon[80926]: osdmap e275: 3 total, 3 up, 3 in
Oct 02 12:42:48 compute-1 ceph-mon[80926]: pgmap v2017: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 223 op/s
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.623 2 INFO nova.virt.libvirt.driver [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Instance destroyed successfully.
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.624 2 DEBUG nova.objects.instance [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lazy-loading 'resources' on Instance uuid 1f9101c6-f4d8-46c7-8884-386f9f08e6fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.642 2 DEBUG nova.virt.libvirt.vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1408399936',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1408399936',id=112,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMPES/J98pyyGI+xC972/PJIY+D7X9SqOgh45Z1MPKQ6L1b0LXV7IORHQBCxCHGOsCWQssLDPZp4WJ8irI2AsYuAH5MVzTXEt9QIB2bOJQbGultCK6n77bAruhlsubzH7w==',key_name='tempest-keypair-372158786',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:41:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='f7e2edef094b4ba5a56a5ec5ffce911e',ramdisk_id='',reservation_id='r-wr2knp7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-1943710095',owner_user_name='tempest-AttachVolumeShelveTestJSON-1943710095-project-member',shelved_at='2025-10-02T12:42:45.077638',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c37f4151-ac68-47f8-adfa-bd0c85e4c75d'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:42:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3151966e941f4652ba984616bfa760c7',uuid=1f9101c6-f4d8-46c7-8884-386f9f08e6fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": 
"385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.642 2 DEBUG nova.network.os_vif_util [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converting VIF {"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8eb9e971-59", "ovs_interfaceid": "8eb9e971-5920-4103-9ba9-c0846182952d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.643 2 DEBUG nova.network.os_vif_util [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.643 2 DEBUG os_vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eb9e971-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.651 2 INFO os_vif [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:40:72,bridge_name='br-int',has_traffic_filtering=True,id=8eb9e971-5920-4103-9ba9-c0846182952d,network=Network(385a384c-5df0-4b04-b928-517a46df04f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8eb9e971-59')
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.696 2 DEBUG nova.compute.manager [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Received event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.696 2 DEBUG nova.compute.manager [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing instance network info cache due to event network-changed-8eb9e971-5920-4103-9ba9-c0846182952d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:48 compute-1 nova_compute[230518]: 2025-10-02 12:42:48.697 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Refreshing network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:42:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:50 compute-1 nova_compute[230518]: 2025-10-02 12:42:50.831 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updated VIF entry in instance network info cache for port 8eb9e971-5920-4103-9ba9-c0846182952d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:42:50 compute-1 nova_compute[230518]: 2025-10-02 12:42:50.831 2 DEBUG nova.network.neutron [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Updating instance_info_cache with network_info: [{"id": "8eb9e971-5920-4103-9ba9-c0846182952d", "address": "fa:16:3e:43:40:72", "network": {"id": "385a384c-5df0-4b04-b928-517a46df04f4", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-382753149-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f7e2edef094b4ba5a56a5ec5ffce911e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8eb9e971-59", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:42:50 compute-1 ceph-mon[80926]: pgmap v2018: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Oct 02 12:42:50 compute-1 nova_compute[230518]: 2025-10-02 12:42:50.859 2 DEBUG oslo_concurrency.lockutils [req-86af0136-c9c2-43a5-be1c-a8b0bf1869bc req-d3ba1e94-8a05-4ebe-b3bf-87801c21230c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1f9101c6-f4d8-46c7-8884-386f9f08e6fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:42:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e276 e276: 3 total, 3 up, 3 in
Oct 02 12:42:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:42:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:42:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:51 compute-1 nova_compute[230518]: 2025-10-02 12:42:51.979 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759408956.9787672, 1f9101c6-f4d8-46c7-8884-386f9f08e6fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:42:51 compute-1 nova_compute[230518]: 2025-10-02 12:42:51.980 2 INFO nova.compute.manager [-] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] VM Stopped (Lifecycle Event)
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.013 2 DEBUG nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.017 2 DEBUG nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.040 2 INFO nova.compute.manager [None req-92e945be-637a-4019-a96a-e344103808e8 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.259 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deleting instance files /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.261 2 INFO nova.virt.libvirt.driver [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Deletion of /var/lib/nova/instances/1f9101c6-f4d8-46c7-8884-386f9f08e6fb_del complete
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.343 2 INFO nova.scheduler.client.report [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Deleted allocations for instance 1f9101c6-f4d8-46c7-8884-386f9f08e6fb
Oct 02 12:42:52 compute-1 ceph-mon[80926]: osdmap e276: 3 total, 3 up, 3 in
Oct 02 12:42:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2351080038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.387 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.388 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.425 2 DEBUG oslo_concurrency.processutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3290921538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.859 2 DEBUG oslo_concurrency.processutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.866 2 DEBUG nova.compute.provider_tree [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.890 2 DEBUG nova.scheduler.client.report [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.919 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:52 compute-1 nova_compute[230518]: 2025-10-02 12:42:52.957 2 DEBUG oslo_concurrency.lockutils [None req-613deacf-9310-4f31-9d49-a0798645a889 3151966e941f4652ba984616bfa760c7 f7e2edef094b4ba5a56a5ec5ffce911e - - default default] Lock "1f9101c6-f4d8-46c7-8884-386f9f08e6fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.452 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.453 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.480 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:42:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:53.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:53.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:53 compute-1 ceph-mon[80926]: pgmap v2020: 305 pgs: 305 active+clean; 463 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 KiB/s wr, 252 op/s
Oct 02 12:42:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3290921538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.552 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.552 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.558 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.559 2 INFO nova.compute.claims [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:53 compute-1 nova_compute[230518]: 2025-10-02 12:42:53.675 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/829567821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.179 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.186 2 DEBUG nova.compute.provider_tree [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.204 2 DEBUG nova.scheduler.client.report [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.238 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.240 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.290 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.291 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.310 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.334 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.436 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.438 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.438 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating image(s)
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.466 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.502 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.534 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.539 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.581 2 DEBUG nova.policy [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.638 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.639 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.640 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.640 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.833 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:42:54 compute-1 nova_compute[230518]: 2025-10-02 12:42:54.836 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 26db575f-26df-4e1b-b0d8-38a12df557e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:54 compute-1 ceph-mon[80926]: pgmap v2021: 305 pgs: 305 active+clean; 385 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 KiB/s wr, 231 op/s
Oct 02 12:42:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1315279011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/829567821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:55.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:55 compute-1 nova_compute[230518]: 2025-10-02 12:42:55.942 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 26db575f-26df-4e1b-b0d8-38a12df557e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.072 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] resizing rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.357 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Successfully created port: 3de79762-7d07-45e3-b66d-38b20be62257 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.430 2 DEBUG nova.objects.instance [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:42:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 e277: 3 total, 3 up, 3 in
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.545 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.546 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Ensure instance console log exists: /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.546 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.547 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:56 compute-1 nova_compute[230518]: 2025-10-02 12:42:56.547 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:42:56 compute-1 ceph-mon[80926]: pgmap v2022: 305 pgs: 305 active+clean; 374 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.7 KiB/s wr, 223 op/s
Oct 02 12:42:56 compute-1 ceph-mon[80926]: osdmap e277: 3 total, 3 up, 3 in
Oct 02 12:42:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:57.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:57.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.228 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.229 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.230 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:42:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1302167254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.728 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.944 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:58 compute-1 nova_compute[230518]: 2025-10-02 12:42:58.945 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:42:59 compute-1 ceph-mon[80926]: pgmap v2024: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 267 op/s
Oct 02 12:42:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3834112678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1302167254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.124 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.125 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4234MB free_disk=20.868064880371094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.266 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.267 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.317 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.414 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Successfully updated port: 3de79762-7d07-45e3-b66d-38b20be62257 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.470 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.471 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.472 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:42:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:42:59.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:42:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:42:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:42:59.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.583 2 DEBUG nova.compute.manager [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-changed-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.584 2 DEBUG nova.compute.manager [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing instance network info cache due to event network-changed-3de79762-7d07-45e3-b66d-38b20be62257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.584 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:42:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:42:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2706070081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.767 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.773 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.802 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.884 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.957 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:42:59 compute-1 nova_compute[230518]: 2025-10-02 12:42:59.958 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2706070081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/534810139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/889090069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-1 ceph-mon[80926]: pgmap v2025: 305 pgs: 305 active+clean; 383 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 298 KiB/s rd, 1.3 MiB/s wr, 96 op/s
Oct 02 12:43:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/889090069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3206392360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.120 2 DEBUG nova.network.neutron [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.294 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.295 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance network_info: |[{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.296 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.297 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.303 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start _get_guest_xml network_info=[{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.308 2 WARNING nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.313 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.314 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.316 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.316 2 DEBUG nova.virt.libvirt.host [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.318 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.318 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.319 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.320 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.321 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.322 2 DEBUG nova.virt.hardware [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.325 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:01.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:01.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3097588909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.761 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.789 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:01 compute-1 nova_compute[230518]: 2025-10-02 12:43:01.793 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2835615240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/311222988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3097588909' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1798938763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.493 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.496 2 DEBUG nova.virt.libvirt.vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOthe
rA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.497 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.499 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.502 2 DEBUG nova.objects.instance [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.530 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <uuid>26db575f-26df-4e1b-b0d8-38a12df557e3</uuid>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <name>instance-00000073</name>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestOtherA-server-310646740</nova:name>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:43:01</nova:creationTime>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <nova:port uuid="3de79762-7d07-45e3-b66d-38b20be62257">
Oct 02 12:43:02 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <system>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="serial">26db575f-26df-4e1b-b0d8-38a12df557e3</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="uuid">26db575f-26df-4e1b-b0d8-38a12df557e3</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </system>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <os>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </os>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <features>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </features>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/26db575f-26df-4e1b-b0d8-38a12df557e3_disk">
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config">
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:43:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:bb:bf:18"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <target dev="tap3de79762-7d"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/console.log" append="off"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <video>
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </video>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:43:02 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:43:02 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:43:02 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:43:02 compute-1 nova_compute[230518]: </domain>
Oct 02 12:43:02 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.533 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Preparing to wait for external event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.534 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.534 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.535 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.535 2 DEBUG nova.virt.libvirt.vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:42:54Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.536 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.536 2 DEBUG nova.network.os_vif_util [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.537 2 DEBUG os_vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.538 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.542 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3de79762-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.543 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3de79762-7d, col_values=(('external_ids', {'iface-id': '3de79762-7d07-45e3-b66d-38b20be62257', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:bf:18', 'vm-uuid': '26db575f-26df-4e1b-b0d8-38a12df557e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-1 NetworkManager[44960]: <info>  [1759408982.5458] manager: (tap3de79762-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.552 2 INFO os_vif [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d')
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.800 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.800 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.801 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:bb:bf:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.801 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Using config drive
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.829 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.954 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:02 compute-1 nova_compute[230518]: 2025-10-02 12:43:02.954 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.063 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated VIF entry in instance network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.064 2 DEBUG nova.network.neutron [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.145 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Creating config drive at /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.149 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90qkizeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.282 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp90qkizeg" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.306 2 DEBUG nova.storage.rbd_utils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.309 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.336 2 DEBUG oslo_concurrency.lockutils [req-9192ae62-49a8-4ad3-a15a-8d7ff6c58b24 req-0bdaa22b-2de9-43f6-a8a8-5d8af04f0851 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 ceph-mon[80926]: pgmap v2026: 305 pgs: 305 active+clean; 332 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 02 12:43:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2835615240' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3231430811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2220759351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.487 2 DEBUG oslo_concurrency.processutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config 26db575f-26df-4e1b-b0d8-38a12df557e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.488 2 INFO nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deleting local config drive /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3/disk.config because it was imported into RBD.
Oct 02 12:43:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:03.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:03.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:03 compute-1 kernel: tap3de79762-7d: entered promiscuous mode
Oct 02 12:43:03 compute-1 NetworkManager[44960]: <info>  [1759408983.5530] manager: (tap3de79762-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 ovn_controller[129257]: 2025-10-02T12:43:03Z|00480|binding|INFO|Claiming lport 3de79762-7d07-45e3-b66d-38b20be62257 for this chassis.
Oct 02 12:43:03 compute-1 ovn_controller[129257]: 2025-10-02T12:43:03Z|00481|binding|INFO|3de79762-7d07-45e3-b66d-38b20be62257: Claiming fa:16:3e:bb:bf:18 10.100.0.11
Oct 02 12:43:03 compute-1 ovn_controller[129257]: 2025-10-02T12:43:03Z|00482|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 ovn-installed in OVS
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 systemd-machined[188247]: New machine qemu-57-instance-00000073.
Oct 02 12:43:03 compute-1 systemd[1]: Started Virtual Machine qemu-57-instance-00000073.
Oct 02 12:43:03 compute-1 systemd-udevd[275595]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:03 compute-1 NetworkManager[44960]: <info>  [1759408983.6253] device (tap3de79762-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:03 compute-1 NetworkManager[44960]: <info>  [1759408983.6264] device (tap3de79762-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:03 compute-1 ovn_controller[129257]: 2025-10-02T12:43:03Z|00483|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 up in Southbound
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.809 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:bf:18 10.100.0.11'], port_security=['fa:16:3e:bb:bf:18 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26db575f-26df-4e1b-b0d8-38a12df557e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01dffe06-e9c5-44f7-8e0c-9bbbdc67ec7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3de79762-7d07-45e3-b66d-38b20be62257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.811 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3de79762-7d07-45e3-b66d-38b20be62257 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.813 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.826 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5557a3e9-8c91-4748-a379-755db70a20b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.862 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2661b2-3f08-4ab4-a42f-2a74e0e7ee3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.865 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[72408b39-0423-4f3c-8d8b-a8c128860363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.896 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d78e34b9-a2d0-4961-99d6-ab902d43e1e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.917 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fdbed73c-6787-496a-a84d-706c6f3fe6bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275609, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[27d5c726-a6e4-4c6b-8a48-a216604a2964]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275610, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275610, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.938 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 nova_compute[230518]: 2025-10-02 12:43:03.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.940 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.940 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:03.941 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG nova.compute.manager [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.260 2 DEBUG oslo_concurrency.lockutils [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.261 2 DEBUG nova.compute.manager [req-d519a9fa-5094-4832-b0aa-b57cf04b9809 req-ef410863-81b9-491a-b0d2-830e05701e2f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Processing event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:04.894 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:04 compute-1 nova_compute[230518]: 2025-10-02 12:43:04.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:04.896 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.015 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.017 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.0150387, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.017 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Started (Lifecycle Event)
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.020 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.025 2 INFO nova.virt.libvirt.driver [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance spawned successfully.
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.025 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.287 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.291 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.392 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.392 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.393 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.393 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.394 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.394 2 DEBUG nova.virt.libvirt.driver [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.479 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.479 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.016665, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.479 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Paused (Lifecycle Event)
Oct 02 12:43:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:05.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:05.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:05 compute-1 ceph-mon[80926]: pgmap v2027: 305 pgs: 305 active+clean; 276 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:43:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:43:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/69113505' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.578 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.584 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759408985.019841, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.584 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Resumed (Lifecycle Event)
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.678 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.681 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.863 2 INFO nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 11.43 seconds to spawn the instance on the hypervisor.
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.864 2 DEBUG nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:05 compute-1 nova_compute[230518]: 2025-10-02 12:43:05.897 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.000 2 INFO nova.compute.manager [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 12.47 seconds to build instance.
Oct 02 12:43:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 e278: 3 total, 3 up, 3 in
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.423 2 DEBUG nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.423 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG oslo_concurrency.lockutils [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.424 2 DEBUG nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.424 2 WARNING nova.compute.manager [req-07e620fc-2205-4715-a2de-54ea282787fa req-fa0b9d19-153f-4b8c-a33c-5cf39445db53 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received unexpected event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with vm_state active and task_state None.
Oct 02 12:43:06 compute-1 nova_compute[230518]: 2025-10-02 12:43:06.426 2 DEBUG oslo_concurrency.lockutils [None req-878e45da-77ae-4abe-a0f8-9fbbbfbf7954 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:06 compute-1 podman[275654]: 2025-10-02 12:43:06.803134796 +0000 UTC m=+0.054508275 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:43:06 compute-1 podman[275653]: 2025-10-02 12:43:06.828237875 +0000 UTC m=+0.078763508 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:43:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:07.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:07 compute-1 ceph-mon[80926]: pgmap v2028: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Oct 02 12:43:07 compute-1 ceph-mon[80926]: osdmap e278: 3 total, 3 up, 3 in
Oct 02 12:43:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:07 compute-1 nova_compute[230518]: 2025-10-02 12:43:07.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:07.899 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:08 compute-1 nova_compute[230518]: 2025-10-02 12:43:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:08 compute-1 nova_compute[230518]: 2025-10-02 12:43:08.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:43:08 compute-1 nova_compute[230518]: 2025-10-02 12:43:08.189 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1f9101c6-f4d8-46c7-8884-386f9f08e6fb] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902
Oct 02 12:43:08 compute-1 nova_compute[230518]: 2025-10-02 12:43:08.190 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:43:08 compute-1 nova_compute[230518]: 2025-10-02 12:43:08.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:08 compute-1 ceph-mon[80926]: pgmap v2030: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:09.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3519200454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.534 2 DEBUG nova.compute.manager [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-changed-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.535 2 DEBUG nova.compute.manager [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing instance network info cache due to event network-changed-3de79762-7d07-45e3-b66d-38b20be62257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.536 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Refreshing network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:10 compute-1 ovn_controller[129257]: 2025-10-02T12:43:10Z|00484|binding|INFO|Releasing lport 7b6dc1a1-1a58-45bd-84bb-97328397bf1b from this chassis (sb_readonly=0)
Oct 02 12:43:10 compute-1 nova_compute[230518]: 2025-10-02 12:43:10.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:10 compute-1 ceph-mon[80926]: pgmap v2031: 305 pgs: 305 active+clean; 246 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Oct 02 12:43:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3432120987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/255502040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:11.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:11.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:12 compute-1 nova_compute[230518]: 2025-10-02 12:43:12.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:12 compute-1 ceph-mon[80926]: pgmap v2032: 305 pgs: 305 active+clean; 292 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 12:43:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e279 e279: 3 total, 3 up, 3 in
Oct 02 12:43:13 compute-1 nova_compute[230518]: 2025-10-02 12:43:13.317 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated VIF entry in instance network info cache for port 3de79762-7d07-45e3-b66d-38b20be62257. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:13 compute-1 nova_compute[230518]: 2025-10-02 12:43:13.318 2 DEBUG nova.network.neutron [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:13 compute-1 nova_compute[230518]: 2025-10-02 12:43:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:13 compute-1 nova_compute[230518]: 2025-10-02 12:43:13.498 2 DEBUG oslo_concurrency.lockutils [req-0235a4c3-6f33-411d-a8c8-ec91cf181d8f req-7866f020-b966-4e56-ae69-6b85b1f78b59 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:13.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:13 compute-1 ceph-mon[80926]: osdmap e279: 3 total, 3 up, 3 in
Oct 02 12:43:15 compute-1 ceph-mon[80926]: pgmap v2034: 305 pgs: 305 active+clean; 353 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 7.4 MiB/s wr, 252 op/s
Oct 02 12:43:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:15.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:17 compute-1 ceph-mon[80926]: pgmap v2035: 305 pgs: 305 active+clean; 336 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 6.4 MiB/s wr, 239 op/s
Oct 02 12:43:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2601498084' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:17.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:17.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:17 compute-1 nova_compute[230518]: 2025-10-02 12:43:17.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:18 compute-1 nova_compute[230518]: 2025-10-02 12:43:18.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/738523268' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:18 compute-1 podman[275698]: 2025-10-02 12:43:18.820757445 +0000 UTC m=+0.066686838 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 12:43:18 compute-1 podman[275699]: 2025-10-02 12:43:18.830978766 +0000 UTC m=+0.076753114 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Oct 02 12:43:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:19.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:19 compute-1 ceph-mon[80926]: pgmap v2036: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:21 compute-1 ceph-mon[80926]: pgmap v2037: 305 pgs: 305 active+clean; 300 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.7 MiB/s wr, 290 op/s
Oct 02 12:43:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:21.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 e280: 3 total, 3 up, 3 in
Oct 02 12:43:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:22 compute-1 nova_compute[230518]: 2025-10-02 12:43:22.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:22 compute-1 ceph-mon[80926]: pgmap v2038: 305 pgs: 305 active+clean; 307 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Oct 02 12:43:22 compute-1 ceph-mon[80926]: osdmap e280: 3 total, 3 up, 3 in
Oct 02 12:43:22 compute-1 nova_compute[230518]: 2025-10-02 12:43:22.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-1 nova_compute[230518]: 2025-10-02 12:43:23.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:23 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:23.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:25 compute-1 ceph-mon[80926]: pgmap v2040: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 182 op/s
Oct 02 12:43:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:25 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:25.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.939 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:26 compute-1 nova_compute[230518]: 2025-10-02 12:43:26.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:27 compute-1 ceph-mon[80926]: pgmap v2041: 305 pgs: 305 active+clean; 314 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 167 op/s
Oct 02 12:43:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:27.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:27 compute-1 nova_compute[230518]: 2025-10-02 12:43:27.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:28 compute-1 nova_compute[230518]: 2025-10-02 12:43:28.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:28 compute-1 ceph-mon[80926]: pgmap v2042: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:29 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:29.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:31 compute-1 ceph-mon[80926]: pgmap v2043: 305 pgs: 305 active+clean; 322 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Oct 02 12:43:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:31 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3590946673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:32 compute-1 nova_compute[230518]: 2025-10-02 12:43:32.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:33 compute-1 nova_compute[230518]: 2025-10-02 12:43:33.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:33 compute-1 ceph-mon[80926]: pgmap v2044: 305 pgs: 305 active+clean; 348 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct 02 12:43:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:33.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.581 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.581 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.601 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.677 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.677 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.686 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.687 2 INFO nova.compute.claims [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:43:34 compute-1 nova_compute[230518]: 2025-10-02 12:43:34.841 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3065390548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:35 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:35.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/469806506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.807 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.966s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.816 2 DEBUG nova.compute.provider_tree [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.851 2 DEBUG nova.scheduler.client.report [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.871 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.872 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.926 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.927 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.955 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:43:35 compute-1 nova_compute[230518]: 2025-10-02 12:43:35.982 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.048 2 INFO nova.virt.block_device [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Booting with volume fdc5e1d9-2228-4ec0-a6bb-8605f6207831 at /dev/vda
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.170 2 DEBUG os_brick.utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.171 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.192 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.192 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7340b5d6-2513-434f-8e66-8f909e371570]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.194 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.200 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.201 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d92e4a3c-d257-4edc-871c-a11c41d3ab33]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.203 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.209 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.209 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[42ce9ee4-4ec1-45b4-88d8-df701b2a5ea8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.211 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[84beef6c-ed76-4621-822d-58069b345397]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.212 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.245 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.248 2 DEBUG os_brick.initiator.connectors.lightos [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.249 2 DEBUG os_brick.utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.249 2 DEBUG nova.virt.block_device [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating existing volume attachment record: 01c1187a-4578-4575-a086-d9c200afe7f4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:43:36 compute-1 nova_compute[230518]: 2025-10-02 12:43:36.285 2 DEBUG nova.policy [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '17a0940c9daf48ac8cfa6c3e56d0e39c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '88141e38aa2347299e7ab249431ef68c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:43:36 compute-1 ceph-mon[80926]: pgmap v2045: 305 pgs: 305 active+clean; 372 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.3 MiB/s wr, 175 op/s
Oct 02 12:43:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/469806506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:37 compute-1 nova_compute[230518]: 2025-10-02 12:43:37.427 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Successfully created port: ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:43:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:37 compute-1 nova_compute[230518]: 2025-10-02 12:43:37.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:37 compute-1 podman[275766]: 2025-10-02 12:43:37.836757389 +0000 UTC m=+0.082553057 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:43:37 compute-1 sudo[275791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:37 compute-1 sudo[275791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-1 sudo[275791]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:37 compute-1 podman[275765]: 2025-10-02 12:43:37.876489188 +0000 UTC m=+0.125261289 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Oct 02 12:43:37 compute-1 sudo[275832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:43:37 compute-1 sudo[275832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-1 sudo[275832]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:37 compute-1 ceph-mon[80926]: pgmap v2046: 305 pgs: 305 active+clean; 393 MiB data, 1005 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Oct 02 12:43:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3507850382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:37 compute-1 sudo[275858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:37 compute-1 sudo[275858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:37 compute-1 sudo[275858]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:38 compute-1 sudo[275883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:43:38 compute-1 sudo[275883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.055 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.057 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.057 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating image(s)
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.057 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Ensure instance console log exists: /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.058 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:38 compute-1 sudo[275883]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.573 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Successfully updated port: ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.592 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.592 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.593 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.656 2 DEBUG nova.compute.manager [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.657 2 DEBUG nova.compute.manager [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.657 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:38 compute-1 nova_compute[230518]: 2025-10-02 12:43:38.771 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:43:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:39 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:39.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:39 compute-1 ceph-mon[80926]: pgmap v2047: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/783646978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:43:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4191999098' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.839 2 DEBUG nova.network.neutron [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance network_info: |[{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.878 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.879 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.882 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Start _get_guest_xml network_info=[{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '173830cb-12bb-4e1a-ba80-088da01ad107', 'attached_at': '', 'detached_at': '', 'volume_id': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831', 'serial': 'fdc5e1d9-2228-4ec0-a6bb-8605f6207831'}, 'boot_index': 0, 'attachment_id': '01c1187a-4578-4575-a086-d9c200afe7f4', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.886 2 WARNING nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.891 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.892 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.894 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.894 2 DEBUG nova.virt.libvirt.host [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.895 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.896 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.897 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:43:39 compute-1 nova_compute[230518]: 2025-10-02 12:43:39.898 2 DEBUG nova.virt.hardware [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.032 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.036 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:43:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3663326302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.580 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.603 2 DEBUG nova.virt.libvirt.vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.604 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.605 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.606 2 DEBUG nova.objects.instance [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'pci_devices' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.622 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <uuid>173830cb-12bb-4e1a-ba80-088da01ad107</uuid>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <name>instance-00000077</name>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestOtherA-server-716874932</nova:name>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:43:39</nova:creationTime>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:user uuid="17a0940c9daf48ac8cfa6c3e56d0e39c">tempest-ServerActionsTestOtherA-1849713132-project-member</nova:user>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:project uuid="88141e38aa2347299e7ab249431ef68c">tempest-ServerActionsTestOtherA-1849713132</nova:project>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <nova:port uuid="ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb">
Oct 02 12:43:40 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <system>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="serial">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="uuid">173830cb-12bb-4e1a-ba80-088da01ad107</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </system>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <os>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </os>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <features>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </features>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/173830cb-12bb-4e1a-ba80-088da01ad107_disk.config">
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </source>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-fdc5e1d9-2228-4ec0-a6bb-8605f6207831">
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </source>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:43:40 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <serial>fdc5e1d9-2228-4ec0-a6bb-8605f6207831</serial>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:d6:35:d6"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <target dev="tapea4a4acf-33"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/console.log" append="off"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <video>
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </video>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:43:40 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:43:40 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:43:40 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:43:40 compute-1 nova_compute[230518]: </domain>
Oct 02 12:43:40 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Preparing to wait for external event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.624 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.625 2 DEBUG nova.virt.libvirt.vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:43:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.625 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.626 2 DEBUG nova.network.os_vif_util [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.626 2 DEBUG os_vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.627 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea4a4acf-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.630 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea4a4acf-33, col_values=(('external_ids', {'iface-id': 'ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:35:d6', 'vm-uuid': '173830cb-12bb-4e1a-ba80-088da01ad107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:40 compute-1 NetworkManager[44960]: <info>  [1759409020.6330] manager: (tapea4a4acf-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.640 2 INFO os_vif [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.845 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.846 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.846 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] No VIF found with MAC fa:16:3e:d6:35:d6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.846 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Using config drive
Oct 02 12:43:40 compute-1 nova_compute[230518]: 2025-10-02 12:43:40.905 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:40 compute-1 ceph-mon[80926]: pgmap v2048: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 902 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct 02 12:43:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3663326302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:41 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:41.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.834218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021834255, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2193, "num_deletes": 263, "total_data_size": 4801876, "memory_usage": 4856784, "flush_reason": "Manual Compaction"}
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021862130, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 3143689, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46825, "largest_seqno": 49013, "table_properties": {"data_size": 3134738, "index_size": 5509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19614, "raw_average_key_size": 20, "raw_value_size": 3116370, "raw_average_value_size": 3290, "num_data_blocks": 239, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759408858, "oldest_key_time": 1759408858, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 27972 microseconds, and 6566 cpu microseconds.
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.862185) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 3143689 bytes OK
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.862209) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.865988) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.866003) EVENT_LOG_v1 {"time_micros": 1759409021865997, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.866022) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 4791890, prev total WAL file size 4791890, number of live WAL files 2.
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.867400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(3070KB)], [90(10MB)]
Oct 02 12:43:41 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409021867450, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 14586912, "oldest_snapshot_seqno": -1}
Oct 02 12:43:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 7541 keys, 14437822 bytes, temperature: kUnknown
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022036469, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 14437822, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14383308, "index_size": 34562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 193517, "raw_average_key_size": 25, "raw_value_size": 14244711, "raw_average_value_size": 1888, "num_data_blocks": 1379, "num_entries": 7541, "num_filter_entries": 7541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.036689) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14437822 bytes
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.080984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.3 rd, 85.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 8081, records dropped: 540 output_compression: NoCompression
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.081027) EVENT_LOG_v1 {"time_micros": 1759409022081010, "job": 56, "event": "compaction_finished", "compaction_time_micros": 169087, "compaction_time_cpu_micros": 54488, "output_level": 6, "num_output_files": 1, "total_output_size": 14437822, "num_input_records": 8081, "num_output_records": 7541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022081929, "job": 56, "event": "table_file_deletion", "file_number": 92}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022084496, "job": 56, "event": "table_file_deletion", "file_number": 90}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:41.867194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084546) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.084624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.178465) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022178528, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28664, "flush_reason": "Manual Compaction"}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022184678, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 13846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49015, "largest_seqno": 49269, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6255 microseconds, and 881 cpu microseconds.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.184730) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 13846 bytes OK
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.184750) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187753) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187769) EVENT_LOG_v1 {"time_micros": 1759409022187764, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.187785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.188127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(13KB)], [93(13MB)]
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022188185, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 14451668, "oldest_snapshot_seqno": -1}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 7292 keys, 10623149 bytes, temperature: kUnknown
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022282197, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 10623149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10575226, "index_size": 28611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188535, "raw_average_key_size": 25, "raw_value_size": 10445844, "raw_average_value_size": 1432, "num_data_blocks": 1130, "num_entries": 7292, "num_filter_entries": 7292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.282593) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10623149 bytes
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.288344) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.5 rd, 112.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(1811.0) write-amplify(767.2) OK, records in: 7796, records dropped: 504 output_compression: NoCompression
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.288377) EVENT_LOG_v1 {"time_micros": 1759409022288363, "job": 58, "event": "compaction_finished", "compaction_time_micros": 94174, "compaction_time_cpu_micros": 44347, "output_level": 6, "num_output_files": 1, "total_output_size": 10623149, "num_input_records": 7796, "num_output_records": 7292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022288542, "job": 58, "event": "table_file_deletion", "file_number": 95}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409022293208, "job": 58, "event": "table_file_deletion", "file_number": 93}
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.188057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:43:42.293357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:43:42 compute-1 nova_compute[230518]: 2025-10-02 12:43:42.348 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Creating config drive at /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config
Oct 02 12:43:42 compute-1 nova_compute[230518]: 2025-10-02 12:43:42.356 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz24ci6es execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:42 compute-1 nova_compute[230518]: 2025-10-02 12:43:42.499 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz24ci6es" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:42 compute-1 nova_compute[230518]: 2025-10-02 12:43:42.538 2 DEBUG nova.storage.rbd_utils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:43:42 compute-1 nova_compute[230518]: 2025-10-02 12:43:42.543 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2230281663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1821274135' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.480 2 DEBUG oslo_concurrency.processutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config 173830cb-12bb-4e1a-ba80-088da01ad107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.937s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.480 2 INFO nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Deleting local config drive /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/disk.config because it was imported into RBD.
Oct 02 12:43:43 compute-1 kernel: tapea4a4acf-33: entered promiscuous mode
Oct 02 12:43:43 compute-1 NetworkManager[44960]: <info>  [1759409023.5444] manager: (tapea4a4acf-33): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 ovn_controller[129257]: 2025-10-02T12:43:43Z|00485|binding|INFO|Claiming lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for this chassis.
Oct 02 12:43:43 compute-1 ovn_controller[129257]: 2025-10-02T12:43:43Z|00486|binding|INFO|ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb: Claiming fa:16:3e:d6:35:d6 10.100.0.13
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.556 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.557 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b bound to our chassis
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.559 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:43:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:43 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:43 compute-1 ovn_controller[129257]: 2025-10-02T12:43:43Z|00487|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb ovn-installed in OVS
Oct 02 12:43:43 compute-1 ovn_controller[129257]: 2025-10-02T12:43:43Z|00488|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb up in Southbound
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 systemd-udevd[276053]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.579 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c49a3bef-a3dc-4f77-92b2-76460328680f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 systemd-machined[188247]: New machine qemu-58-instance-00000077.
Oct 02 12:43:43 compute-1 NetworkManager[44960]: <info>  [1759409023.5972] device (tapea4a4acf-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:43:43 compute-1 NetworkManager[44960]: <info>  [1759409023.5980] device (tapea4a4acf-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:43:43 compute-1 systemd[1]: Started Virtual Machine qemu-58-instance-00000077.
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.623 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b1c726-83c7-456e-a4d0-b0f86fd8bb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.626 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a73d6b-cf58-424e-8647-efb5d3f60cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.653 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[85223aae-efae-4cdf-824e-9fa8e532c19c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2e0f93-9c82-48af-bff0-7d48470a8f81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 17721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276066, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6c28ee-88cb-4e9b-be17-652814f230bd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276068, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276068, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.693 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 nova_compute[230518]: 2025-10-02 12:43:43.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.696 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:43:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:43:43.698 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:43:43 compute-1 ceph-mon[80926]: pgmap v2049: 305 pgs: 305 active+clean; 392 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 922 KiB/s rd, 5.3 MiB/s wr, 129 op/s
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.363 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.364 2 DEBUG nova.network.neutron [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.411 2 DEBUG oslo_concurrency.lockutils [req-c41532a1-8d95-4960-a057-28b768001f47 req-e78f8b62-fbde-417e-8d2c-999170bd39bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.967 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409024.9670033, 173830cb-12bb-4e1a-ba80-088da01ad107 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.968 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Started (Lifecycle Event)
Oct 02 12:43:44 compute-1 nova_compute[230518]: 2025-10-02 12:43:44.996 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.001 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409024.9676995, 173830cb-12bb-4e1a-ba80-088da01ad107 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.001 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Paused (Lifecycle Event)
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.020 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.023 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.045 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:45 compute-1 ceph-mon[80926]: pgmap v2050: 305 pgs: 305 active+clean; 339 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 4.6 MiB/s wr, 104 op/s
Oct 02 12:43:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:45.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.631 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.632 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.632 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.633 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.633 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Processing event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.634 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.634 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.635 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.635 2 DEBUG oslo_concurrency.lockutils [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.636 2 DEBUG nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.636 2 WARNING nova.compute.manager [req-933aff05-68ac-4de2-9461-34f15ddf8de0 req-6a096ca7-3f70-4cce-9c35-6bb388c71f93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state building and task_state spawning.
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.638 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.642 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409025.6414685, 173830cb-12bb-4e1a-ba80-088da01ad107 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.643 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Resumed (Lifecycle Event)
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.646 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.649 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance spawned successfully.
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.650 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.669 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.676 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.681 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.681 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.682 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.683 2 DEBUG nova.virt.libvirt.driver [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.713 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.749 2 INFO nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 7.69 seconds to spawn the instance on the hypervisor.
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.750 2 DEBUG nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.832 2 INFO nova.compute.manager [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Took 11.18 seconds to build instance.
Oct 02 12:43:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3103788107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:45 compute-1 nova_compute[230518]: 2025-10-02 12:43:45.854 2 DEBUG oslo_concurrency.lockutils [None req-7123a1cc-680e-4d48-aaa9-f63ba1286f3a 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1636903337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3103788107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:47.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:43:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:43:47 compute-1 ceph-mon[80926]: pgmap v2051: 305 pgs: 305 active+clean; 340 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 314 KiB/s rd, 3.6 MiB/s wr, 138 op/s
Oct 02 12:43:48 compute-1 nova_compute[230518]: 2025-10-02 12:43:48.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:49 compute-1 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG nova.compute.manager [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:49 compute-1 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG nova.compute.manager [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing instance network info cache due to event network-changed-8d9cc17a-7804-4743-925a-496d9fe78c73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:49 compute-1 nova_compute[230518]: 2025-10-02 12:43:49.432 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:49 compute-1 nova_compute[230518]: 2025-10-02 12:43:49.433 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:49 compute-1 nova_compute[230518]: 2025-10-02 12:43:49.433 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Refreshing network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:49 compute-1 ceph-mon[80926]: pgmap v2052: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 250 op/s
Oct 02 12:43:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:49 compute-1 podman[276112]: 2025-10-02 12:43:49.823307876 +0000 UTC m=+0.071831629 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:43:49 compute-1 podman[276111]: 2025-10-02 12:43:49.836406079 +0000 UTC m=+0.088821645 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:43:50 compute-1 nova_compute[230518]: 2025-10-02 12:43:50.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:50 compute-1 ceph-mon[80926]: pgmap v2053: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 223 op/s
Oct 02 12:43:50 compute-1 nova_compute[230518]: 2025-10-02 12:43:50.850 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated VIF entry in instance network info cache for port 8d9cc17a-7804-4743-925a-496d9fe78c73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:50 compute-1 nova_compute[230518]: 2025-10-02 12:43:50.851 2 DEBUG nova.network.neutron [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:50 compute-1 nova_compute[230518]: 2025-10-02 12:43:50.877 2 DEBUG oslo_concurrency.lockutils [req-17d09909-5efc-4533-b2c3-c2fb78a37e65 req-40be3e54-1161-4add-b885-c22f957ecfe2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:51 compute-1 nova_compute[230518]: 2025-10-02 12:43:51.708 2 DEBUG nova.compute.manager [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:43:51 compute-1 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG nova.compute.manager [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:43:51 compute-1 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:51 compute-1 nova_compute[230518]: 2025-10-02 12:43:51.709 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:51 compute-1 nova_compute[230518]: 2025-10-02 12:43:51.710 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:43:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3030946270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:53 compute-1 nova_compute[230518]: 2025-10-02 12:43:53.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:53.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:53 compute-1 ceph-mon[80926]: pgmap v2054: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.6 MiB/s wr, 294 op/s
Oct 02 12:43:54 compute-1 sudo[276150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:43:54 compute-1 sudo[276150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:54 compute-1 sudo[276150]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:54 compute-1 sudo[276175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:43:54 compute-1 sudo[276175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:43:54 compute-1 sudo[276175]: pam_unix(sudo:session): session closed for user root
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.277 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.277 2 DEBUG nova.network.neutron [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.350 2 DEBUG oslo_concurrency.lockutils [req-9cef4459-e67b-440f-ad7e-7af8119a4a7e req-f6decb2f-f8e0-469b-9d70-8a821384fdb7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:55 compute-1 ceph-mon[80926]: pgmap v2055: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 109 KiB/s wr, 263 op/s
Oct 02 12:43:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:43:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:43:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:55 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:55.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.904 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.905 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:43:55 compute-1 nova_compute[230518]: 2025-10-02 12:43:55.905 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:43:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3386052185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:43:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:43:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/707952787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.537 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.624 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.625 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.630 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.631 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.636 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.637 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:43:58 compute-1 ceph-mon[80926]: pgmap v2056: 305 pgs: 305 active+clean; 356 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 689 KiB/s wr, 234 op/s
Oct 02 12:43:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1784953652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.819 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.820 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3845MB free_disk=20.82009506225586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.820 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.821 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.881 2 INFO nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating resource usage from migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.913 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.914 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.915 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:43:58 compute-1 nova_compute[230518]: 2025-10-02 12:43:58.997 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.305 2 DEBUG nova.network.neutron [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.339 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:43:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:43:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1830993686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.471 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.472 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Creating file /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.472 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.510 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.530 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.559 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.560 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:43:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:43:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:43:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:43:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:43:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:43:59 compute-1 ceph-mon[80926]: pgmap v2057: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 245 op/s
Oct 02 12:43:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/707952787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3154617667' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1830993686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.949 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp" returned: 1 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107/897b806aef5f49019f206824fe6c29cb.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Creating directory /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:43:59 compute-1 nova_compute[230518]: 2025-10-02 12:43:59.950 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:00 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct 02 12:44:00 compute-1 nova_compute[230518]: 2025-10-02 12:44:00.185 2 DEBUG oslo_concurrency.processutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/173830cb-12bb-4e1a-ba80-088da01ad107" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:00 compute-1 nova_compute[230518]: 2025-10-02 12:44:00.188 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:44:00 compute-1 nova_compute[230518]: 2025-10-02 12:44:00.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:00 compute-1 ovn_controller[129257]: 2025-10-02T12:44:00Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:35:d6 10.100.0.13
Oct 02 12:44:00 compute-1 ovn_controller[129257]: 2025-10-02T12:44:00Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:35:d6 10.100.0.13
Oct 02 12:44:01 compute-1 ceph-mon[80926]: pgmap v2058: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 126 op/s
Oct 02 12:44:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3350584005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:01 compute-1 nova_compute[230518]: 2025-10-02 12:44:01.561 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:01 compute-1 nova_compute[230518]: 2025-10-02 12:44:01.562 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:01.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:03 compute-1 nova_compute[230518]: 2025-10-02 12:44:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:03 compute-1 ceph-mon[80926]: pgmap v2059: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.8 MiB/s wr, 221 op/s
Oct 02 12:44:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1387513762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/771119567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:03 compute-1 nova_compute[230518]: 2025-10-02 12:44:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:03 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:03.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:04 compute-1 nova_compute[230518]: 2025-10-02 12:44:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/908961569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2152615919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3452370904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/164478413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:05 compute-1 nova_compute[230518]: 2025-10-02 12:44:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:44:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:44:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:05 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:05.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:05.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:05 compute-1 ceph-mon[80926]: pgmap v2060: 305 pgs: 305 active+clean; 508 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 679 KiB/s rd, 9.4 MiB/s wr, 201 op/s
Oct 02 12:44:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3194892722' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:44:05 compute-1 nova_compute[230518]: 2025-10-02 12:44:05.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:06 compute-1 nova_compute[230518]: 2025-10-02 12:44:06.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:06 compute-1 nova_compute[230518]: 2025-10-02 12:44:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:06 compute-1 nova_compute[230518]: 2025-10-02 12:44:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:44:06 compute-1 ceph-mon[80926]: pgmap v2061: 305 pgs: 305 active+clean; 528 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 9.9 MiB/s wr, 262 op/s
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.902190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046902225, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 530, "num_deletes": 251, "total_data_size": 755363, "memory_usage": 766512, "flush_reason": "Manual Compaction"}
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046934107, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 498458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49274, "largest_seqno": 49799, "table_properties": {"data_size": 495592, "index_size": 838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6983, "raw_average_key_size": 19, "raw_value_size": 489860, "raw_average_value_size": 1356, "num_data_blocks": 36, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409022, "oldest_key_time": 1759409022, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 31972 microseconds, and 2332 cpu microseconds.
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.934157) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 498458 bytes OK
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.934178) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966205) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966248) EVENT_LOG_v1 {"time_micros": 1759409046966239, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 752206, prev total WAL file size 752206, number of live WAL files 2.
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966932) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(486KB)], [96(10MB)]
Oct 02 12:44:06 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409046967044, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 11121607, "oldest_snapshot_seqno": -1}
Oct 02 12:44:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 7140 keys, 9260658 bytes, temperature: kUnknown
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047084894, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 9260658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9214996, "index_size": 26734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 186131, "raw_average_key_size": 26, "raw_value_size": 9089465, "raw_average_value_size": 1273, "num_data_blocks": 1045, "num_entries": 7140, "num_filter_entries": 7140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.085653) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9260658 bytes
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.104326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.3 rd, 78.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(40.9) write-amplify(18.6) OK, records in: 7653, records dropped: 513 output_compression: NoCompression
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.104360) EVENT_LOG_v1 {"time_micros": 1759409047104347, "job": 60, "event": "compaction_finished", "compaction_time_micros": 117911, "compaction_time_cpu_micros": 44188, "output_level": 6, "num_output_files": 1, "total_output_size": 9260658, "num_input_records": 7653, "num_output_records": 7140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047104615, "job": 60, "event": "table_file_deletion", "file_number": 98}
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409047106240, "job": 60, "event": "table_file_deletion", "file_number": 96}
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:06.966768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:44:07.106318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:44:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:07.556 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:07 compute-1 nova_compute[230518]: 2025-10-02 12:44:07.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:07 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:07.558 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:44:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:07 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:07.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:08 compute-1 nova_compute[230518]: 2025-10-02 12:44:08.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:08 compute-1 podman[276250]: 2025-10-02 12:44:08.82702012 +0000 UTC m=+0.074411991 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 12:44:08 compute-1 podman[276249]: 2025-10-02 12:44:08.834057541 +0000 UTC m=+0.084018682 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:44:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:09.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:09 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:09.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:09 compute-1 ceph-mon[80926]: pgmap v2062: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 9.3 MiB/s wr, 318 op/s
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.231 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.252 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.253 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:10 compute-1 nova_compute[230518]: 2025-10-02 12:44:10.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:10 compute-1 ceph-mon[80926]: pgmap v2063: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.0 MiB/s wr, 268 op/s
Oct 02 12:44:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:11.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:11.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 kernel: tapea4a4acf-33 (unregistering): left promiscuous mode
Oct 02 12:44:13 compute-1 NetworkManager[44960]: <info>  [1759409053.5297] device (tapea4a4acf-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 ovn_controller[129257]: 2025-10-02T12:44:13Z|00489|binding|INFO|Releasing lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb from this chassis (sb_readonly=0)
Oct 02 12:44:13 compute-1 ovn_controller[129257]: 2025-10-02T12:44:13Z|00490|binding|INFO|Setting lport ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb down in Southbound
Oct 02 12:44:13 compute-1 ovn_controller[129257]: 2025-10-02T12:44:13Z|00491|binding|INFO|Removing iface tapea4a4acf-33 ovn-installed in OVS
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct 02 12:44:13 compute-1 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000077.scope: Consumed 13.964s CPU time.
Oct 02 12:44:13 compute-1 systemd-machined[188247]: Machine qemu-58-instance-00000077 terminated.
Oct 02 12:44:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:13.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:13 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:13.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.813 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:35:d6 10.100.0.13'], port_security=['fa:16:3e:d6:35:d6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '173830cb-12bb-4e1a-ba80-088da01ad107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.815 138374 INFO neutron.agent.ovn.metadata.agent [-] Port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.818 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6bfe5a-ede3-4d65-905f-62d38817ee88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.890 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3169ff-f1b6-4c42-bed8-e6e29d6884fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.893 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ec0a2e93-a7ce-4c24-8642-0266ab903021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.919 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[887f96b9-91e5-45e6-8830-f18615a6f3b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.937 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c9b21c-d055-4a53-a5e4-b3d353e9a633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 1000, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 31164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276315, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ceph-mon[80926]: pgmap v2064: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 310 op/s
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.955 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bef14b5d-10e8-4174-874f-82c43848b416]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276316, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.957 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.965 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.965 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:13 compute-1 nova_compute[230518]: 2025-10-02 12:44:13.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.966 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:13.966 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.252 2 INFO nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance shutdown successfully after 14 seconds.
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.263 2 INFO nova.virt.libvirt.driver [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Instance destroyed successfully.
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.264 2 DEBUG nova.virt.libvirt.vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.265 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-497044539-network", "vif_mac": "fa:16:3e:d6:35:d6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.266 2 DEBUG nova.network.os_vif_util [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.267 2 DEBUG os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea4a4acf-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.278 2 INFO os_vif [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.286 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.287 2 DEBUG nova.virt.libvirt.driver [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.586 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.586 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG oslo_concurrency.lockutils [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.587 2 DEBUG nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.587 2 WARNING nova.compute.manager [req-1708148c-1c63-4514-b09f-c62c2512680e req-388e1cc3-df0c-45cd-9e75-5ca55d0e9a04 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-unplugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrating.
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.671 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [{"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.760 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7621a774-e0bc-4f4f-b900-c3608dd6835a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:14 compute-1 nova_compute[230518]: 2025-10-02 12:44:14.760 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:44:14 compute-1 ceph-mon[80926]: pgmap v2065: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 245 op/s
Oct 02 12:44:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:15.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:15 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:15.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:16 compute-1 nova_compute[230518]: 2025-10-02 12:44:16.260 2 DEBUG neutronclient.v2_0.client [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:44:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:16.562 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:16 compute-1 nova_compute[230518]: 2025-10-02 12:44:16.800 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:16 compute-1 nova_compute[230518]: 2025-10-02 12:44:16.801 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:16 compute-1 nova_compute[230518]: 2025-10-02 12:44:16.801 2 DEBUG oslo_concurrency.lockutils [None req-352f2eb7-b883-4c28-9577-ef6f74c5d8ec 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:17 compute-1 ceph-mon[80926]: pgmap v2066: 305 pgs: 305 active+clean; 547 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 207 op/s
Oct 02 12:44:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:17.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:17 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:17.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.624 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.625 2 DEBUG oslo_concurrency.lockutils [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.626 2 DEBUG nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:17 compute-1 nova_compute[230518]: 2025-10-02 12:44:17.626 2 WARNING nova.compute.manager [req-dcf08827-691c-40c9-a605-27d44e50076a req-c07bd461-0b36-46f4-8cfa-4d8aa52d0406 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_migrated.
Oct 02 12:44:18 compute-1 nova_compute[230518]: 2025-10-02 12:44:18.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:19 compute-1 nova_compute[230518]: 2025-10-02 12:44:19.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:19 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:19.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:19 compute-1 nova_compute[230518]: 2025-10-02 12:44:19.754 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:44:19 compute-1 ceph-mon[80926]: pgmap v2067: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 175 op/s
Oct 02 12:44:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3215505735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:20 compute-1 podman[276319]: 2025-10-02 12:44:20.827533102 +0000 UTC m=+0.079351607 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:44:20 compute-1 podman[276320]: 2025-10-02 12:44:20.827649976 +0000 UTC m=+0.074958249 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:44:21 compute-1 ceph-mon[80926]: pgmap v2068: 305 pgs: 305 active+clean; 571 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 115 op/s
Oct 02 12:44:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1001607952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1558284503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:21 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:21.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:21.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:23 compute-1 nova_compute[230518]: 2025-10-02 12:44:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:23 compute-1 ceph-mon[80926]: pgmap v2069: 305 pgs: 305 active+clean; 609 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 155 op/s
Oct 02 12:44:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:23 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.536 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.536 2 DEBUG nova.compute.manager [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing instance network info cache due to event network-changed-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:24 compute-1 nova_compute[230518]: 2025-10-02 12:44:24.537 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Refreshing network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:44:24 compute-1 ceph-mon[80926]: pgmap v2070: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.4 MiB/s wr, 121 op/s
Oct 02 12:44:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:25 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.940 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:44:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:27 compute-1 ceph-mon[80926]: pgmap v2071: 305 pgs: 305 active+clean; 625 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 685 KiB/s rd, 5.5 MiB/s wr, 125 op/s
Oct 02 12:44:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:27 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:28 compute-1 nova_compute[230518]: 2025-10-02 12:44:28.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:28 compute-1 nova_compute[230518]: 2025-10-02 12:44:28.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409053.7767065, 173830cb-12bb-4e1a-ba80-088da01ad107 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:44:28 compute-1 nova_compute[230518]: 2025-10-02 12:44:28.778 2 INFO nova.compute.manager [-] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] VM Stopped (Lifecycle Event)
Oct 02 12:44:29 compute-1 nova_compute[230518]: 2025-10-02 12:44:29.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:29 compute-1 nova_compute[230518]: 2025-10-02 12:44:29.317 2 DEBUG nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:44:29 compute-1 nova_compute[230518]: 2025-10-02 12:44:29.323 2 DEBUG nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_migrated, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:44:29 compute-1 nova_compute[230518]: 2025-10-02 12:44:29.379 2 INFO nova.compute.manager [None req-a2bc63e7-9cd8-43ad-bdb7-07bca1305961 - - - - - -] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:44:29 compute-1 ceph-mon[80926]: pgmap v2072: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 705 KiB/s rd, 4.7 MiB/s wr, 138 op/s
Oct 02 12:44:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:44:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:29 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:29.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:31 compute-1 ceph-mon[80926]: pgmap v2073: 305 pgs: 305 active+clean; 636 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 586 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:44:31 compute-1 nova_compute[230518]: 2025-10-02 12:44:31.621 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updated VIF entry in instance network info cache for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:44:31 compute-1 nova_compute[230518]: 2025-10-02 12:44:31.622 2 DEBUG nova.network.neutron [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:31.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:31 compute-1 nova_compute[230518]: 2025-10-02 12:44:31.676 2 DEBUG oslo_concurrency.lockutils [req-588cd311-b21d-4937-98a0-dd4568bfb5b2 req-16616f60-f998-4eb6-b6db-c8534b5b3477 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e281 e281: 3 total, 3 up, 3 in
Oct 02 12:44:32 compute-1 ceph-mon[80926]: pgmap v2074: 305 pgs: 305 active+clean; 642 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 176 op/s
Oct 02 12:44:33 compute-1 nova_compute[230518]: 2025-10-02 12:44:33.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:33.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:34 compute-1 ceph-mon[80926]: osdmap e281: 3 total, 3 up, 3 in
Oct 02 12:44:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2114306074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2191314356' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:34 compute-1 nova_compute[230518]: 2025-10-02 12:44:34.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:35 compute-1 ceph-mon[80926]: pgmap v2076: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 12:44:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:35.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.067 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.067 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG oslo_concurrency.lockutils [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.068 2 DEBUG nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:36 compute-1 nova_compute[230518]: 2025-10-02 12:44:36.068 2 WARNING nova.compute.manager [req-71240dea-0c89-4550-91d2-c210035b2590 req-761bcde2-2734-446c-92a2-04ace8c11327 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state active and task_state resize_finish.
Oct 02 12:44:36 compute-1 ceph-mon[80926]: pgmap v2077: 305 pgs: 305 active+clean; 647 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.1 MiB/s wr, 142 op/s
Oct 02 12:44:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:37.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:38 compute-1 nova_compute[230518]: 2025-10-02 12:44:38.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-1 ceph-mon[80926]: pgmap v2078: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 4.6 MiB/s wr, 190 op/s
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e282 e282: 3 total, 3 up, 3 in
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.604 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.605 2 DEBUG oslo_concurrency.lockutils [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.605 2 DEBUG nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] No waiting events found dispatching network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.605 2 WARNING nova.compute.manager [req-446b5620-a1ae-46b2-966c-2ff0b116299d req-fa49d899-fa1a-4803-9a81-ea94cfb90621 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Received unexpected event network-vif-plugged-ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for instance with vm_state resized and task_state None.
Oct 02 12:44:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:39.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:39 compute-1 podman[276359]: 2025-10-02 12:44:39.819146344 +0000 UTC m=+0.065215512 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:44:39 compute-1 podman[276358]: 2025-10-02 12:44:39.852373379 +0000 UTC m=+0.101959077 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.916 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "173830cb-12bb-4e1a-ba80-088da01ad107" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.916 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:39 compute-1 nova_compute[230518]: 2025-10-02 12:44:39.917 2 DEBUG nova.compute.manager [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Going to confirm migration 15 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:44:40 compute-1 ceph-mon[80926]: osdmap e282: 3 total, 3 up, 3 in
Oct 02 12:44:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 e283: 3 total, 3 up, 3 in
Oct 02 12:44:41 compute-1 nova_compute[230518]: 2025-10-02 12:44:41.300 2 DEBUG neutronclient.v2_0.client [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:44:41 compute-1 nova_compute[230518]: 2025-10-02 12:44:41.301 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:44:41 compute-1 nova_compute[230518]: 2025-10-02 12:44:41.301 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquired lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:44:41 compute-1 nova_compute[230518]: 2025-10-02 12:44:41.302 2 DEBUG nova.network.neutron [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:44:41 compute-1 nova_compute[230518]: 2025-10-02 12:44:41.302 2 DEBUG nova.objects.instance [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'info_cache' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:44:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:41.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:44:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:41.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:41 compute-1 ceph-mon[80926]: pgmap v2080: 305 pgs: 305 active+clean; 708 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.7 MiB/s wr, 137 op/s
Oct 02 12:44:41 compute-1 ceph-mon[80926]: osdmap e283: 3 total, 3 up, 3 in
Oct 02 12:44:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:42 compute-1 nova_compute[230518]: 2025-10-02 12:44:42.949 2 DEBUG nova.network.neutron [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 173830cb-12bb-4e1a-ba80-088da01ad107] Updating instance_info_cache with network_info: [{"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.045 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Releasing lock "refresh_cache-173830cb-12bb-4e1a-ba80-088da01ad107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.046 2 DEBUG nova.objects.instance [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'migration_context' on Instance uuid 173830cb-12bb-4e1a-ba80-088da01ad107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.145 2 DEBUG nova.storage.rbd_utils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] rbd image 173830cb-12bb-4e1a-ba80-088da01ad107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.159 2 DEBUG nova.virt.libvirt.vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:43:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-716874932',display_name='tempest-ServerActionsTestOtherA-server-716874932',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-716874932',id=119,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:44:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-cekdvg44',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:44:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=173830cb-12bb-4e1a-ba80-088da01ad107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.159 2 DEBUG nova.network.os_vif_util [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "address": "fa:16:3e:d6:35:d6", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea4a4acf-33", "ovs_interfaceid": "ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.160 2 DEBUG nova.network.os_vif_util [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.161 2 DEBUG os_vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.162 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea4a4acf-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.163 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.166 2 INFO os_vif [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:35:d6,bridge_name='br-int',has_traffic_filtering=True,id=ea4a4acf-33d3-4e16-bd39-8ccf662c4bcb,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea4a4acf-33')
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.167 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.167 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:43 compute-1 nova_compute[230518]: 2025-10-02 12:44:43.479 2 DEBUG oslo_concurrency.processutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:44:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:43.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:43 compute-1 ceph-mon[80926]: pgmap v2082: 305 pgs: 305 active+clean; 719 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 5.8 MiB/s wr, 223 op/s
Oct 02 12:44:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:44:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1418788370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.062 2 DEBUG oslo_concurrency.processutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.070 2 DEBUG nova.compute.provider_tree [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.110 2 DEBUG nova.scheduler.client.report [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.234 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.517 2 INFO nova.scheduler.client.report [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocation for migration db4a41c6-8a3c-43b7-b0e8-4bf46490cc1d
Oct 02 12:44:44 compute-1 nova_compute[230518]: 2025-10-02 12:44:44.672 2 DEBUG oslo_concurrency.lockutils [None req-5e7340e2-38dd-44e9-a14d-5b8c79cae3b1 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "173830cb-12bb-4e1a-ba80-088da01ad107" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:44:45 compute-1 ceph-mon[80926]: pgmap v2083: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.4 MiB/s wr, 294 op/s
Oct 02 12:44:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1418788370' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:44:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:45.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e284 e284: 3 total, 3 up, 3 in
Oct 02 12:44:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:47.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:47 compute-1 ceph-mon[80926]: pgmap v2084: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 158 KiB/s wr, 210 op/s
Oct 02 12:44:48 compute-1 ovn_controller[129257]: 2025-10-02T12:44:48Z|00492|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 02 12:44:48 compute-1 nova_compute[230518]: 2025-10-02 12:44:48.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:49 compute-1 nova_compute[230518]: 2025-10-02 12:44:49.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:49 compute-1 ceph-mon[80926]: osdmap e284: 3 total, 3 up, 3 in
Oct 02 12:44:49 compute-1 ceph-mon[80926]: pgmap v2086: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 163 KiB/s wr, 225 op/s
Oct 02 12:44:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:44:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:44:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e285 e285: 3 total, 3 up, 3 in
Oct 02 12:44:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:51.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:51 compute-1 ceph-mon[80926]: pgmap v2087: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 138 KiB/s wr, 146 op/s
Oct 02 12:44:51 compute-1 ceph-mon[80926]: osdmap e285: 3 total, 3 up, 3 in
Oct 02 12:44:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:44:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:44:51 compute-1 podman[276442]: 2025-10-02 12:44:51.825315674 +0000 UTC m=+0.069885618 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:44:51 compute-1 podman[276443]: 2025-10-02 12:44:51.83599951 +0000 UTC m=+0.077084115 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:44:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:53 compute-1 nova_compute[230518]: 2025-10-02 12:44:53.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:53 compute-1 ceph-mon[80926]: pgmap v2089: 305 pgs: 305 active+clean; 724 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 52 KiB/s wr, 63 op/s
Oct 02 12:44:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:54 compute-1 nova_compute[230518]: 2025-10-02 12:44:54.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:54 compute-1 ceph-mon[80926]: pgmap v2090: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 202 KiB/s wr, 100 op/s
Oct 02 12:44:55 compute-1 sudo[276484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:55 compute-1 sudo[276484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-1 sudo[276484]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-1 sudo[276509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:44:55 compute-1 sudo[276509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-1 sudo[276509]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-1 sudo[276534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:44:55 compute-1 sudo[276534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-1 sudo[276534]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:55 compute-1 sudo[276559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:44:55 compute-1 sudo[276559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:44:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e286 e286: 3 total, 3 up, 3 in
Oct 02 12:44:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:55.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:56 compute-1 sudo[276559]: pam_unix(sudo:session): session closed for user root
Oct 02 12:44:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 e287: 3 total, 3 up, 3 in
Oct 02 12:44:56 compute-1 ceph-mon[80926]: osdmap e286: 3 total, 3 up, 3 in
Oct 02 12:44:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3079862474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:44:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:57.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:58 compute-1 ceph-mon[80926]: pgmap v2092: 305 pgs: 305 active+clean; 767 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.0 MiB/s wr, 148 op/s
Oct 02 12:44:58 compute-1 ceph-mon[80926]: osdmap e287: 3 total, 3 up, 3 in
Oct 02 12:44:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2297640210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/865932368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:44:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:58 compute-1 nova_compute[230518]: 2025-10-02 12:44:58.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:59 compute-1 nova_compute[230518]: 2025-10-02 12:44:59.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:44:59 compute-1 ceph-mon[80926]: pgmap v2094: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 8.6 MiB/s wr, 235 op/s
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:44:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:44:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:44:59.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:44:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:44:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:44:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:44:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2698531315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.700 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:00 compute-1 ceph-mon[80926]: pgmap v2095: 305 pgs: 305 active+clean; 815 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.5 MiB/s wr, 203 op/s
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.990 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.990 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.994 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:00 compute-1 nova_compute[230518]: 2025-10-02 12:45:00.994 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.166 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4045MB free_disk=20.695758819580078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7621a774-e0bc-4f4f-b900-c3608dd6835a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 26db575f-26df-4e1b-b0d8-38a12df557e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:45:01 compute-1 nova_compute[230518]: 2025-10-02 12:45:01.774 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2698531315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3557431853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e288 e288: 3 total, 3 up, 3 in
Oct 02 12:45:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1186594829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:02 compute-1 nova_compute[230518]: 2025-10-02 12:45:02.402 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:02 compute-1 nova_compute[230518]: 2025-10-02 12:45:02.407 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:02 compute-1 nova_compute[230518]: 2025-10-02 12:45:02.532 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:02 compute-1 nova_compute[230518]: 2025-10-02 12:45:02.590 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:45:02 compute-1 nova_compute[230518]: 2025-10-02 12:45:02.591 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:03 compute-1 ceph-mon[80926]: pgmap v2096: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 8.4 MiB/s wr, 170 op/s
Oct 02 12:45:03 compute-1 ceph-mon[80926]: osdmap e288: 3 total, 3 up, 3 in
Oct 02 12:45:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1186594829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3417613539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2053349566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1132442729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:03 compute-1 nova_compute[230518]: 2025-10-02 12:45:03.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:03.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:04 compute-1 nova_compute[230518]: 2025-10-02 12:45:04.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1376223628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:05 compute-1 ceph-mon[80926]: pgmap v2098: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.6 MiB/s wr, 161 op/s
Oct 02 12:45:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3038036194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:05 compute-1 nova_compute[230518]: 2025-10-02 12:45:05.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:05 compute-1 nova_compute[230518]: 2025-10-02 12:45:05.588 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:05 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:05.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:05 compute-1 sudo[276661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:45:05 compute-1 sudo[276661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:05 compute-1 sudo[276661]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:05 compute-1 sudo[276686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:45:05 compute-1 sudo[276686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:45:05 compute-1 sudo[276686]: pam_unix(sudo:session): session closed for user root
Oct 02 12:45:06 compute-1 nova_compute[230518]: 2025-10-02 12:45:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-1 nova_compute[230518]: 2025-10-02 12:45:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-1 nova_compute[230518]: 2025-10-02 12:45:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:06 compute-1 nova_compute[230518]: 2025-10-02 12:45:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:45:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:45:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 e289: 3 total, 3 up, 3 in
Oct 02 12:45:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:07 compute-1 ceph-mon[80926]: pgmap v2099: 305 pgs: 305 active+clean; 772 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Oct 02 12:45:07 compute-1 ceph-mon[80926]: osdmap e289: 3 total, 3 up, 3 in
Oct 02 12:45:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:45:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:07.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:45:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:08 compute-1 nova_compute[230518]: 2025-10-02 12:45:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:08 compute-1 nova_compute[230518]: 2025-10-02 12:45:08.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:08.335 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:08.338 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:45:08 compute-1 nova_compute[230518]: 2025-10-02 12:45:08.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:08 compute-1 ceph-mon[80926]: pgmap v2101: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 185 op/s
Oct 02 12:45:09 compute-1 nova_compute[230518]: 2025-10-02 12:45:09.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:09.341 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:09 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/99679887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:10 compute-1 nova_compute[230518]: 2025-10-02 12:45:10.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:10 compute-1 nova_compute[230518]: 2025-10-02 12:45:10.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:45:10 compute-1 nova_compute[230518]: 2025-10-02 12:45:10.339 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:10 compute-1 nova_compute[230518]: 2025-10-02 12:45:10.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:10 compute-1 nova_compute[230518]: 2025-10-02 12:45:10.340 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:45:10 compute-1 podman[276712]: 2025-10-02 12:45:10.845942921 +0000 UTC m=+0.090585860 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:45:10 compute-1 podman[276711]: 2025-10-02 12:45:10.875315944 +0000 UTC m=+0.120678536 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:45:10 compute-1 ceph-mon[80926]: pgmap v2102: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 164 op/s
Oct 02 12:45:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/140909361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/378219883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.097 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [{"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.118 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-26db575f-26df-4e1b-b0d8-38a12df557e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.119 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:45:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 e290: 3 total, 3 up, 3 in
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.257 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.257 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.258 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.259 2 INFO nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Terminating instance
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.260 2 DEBUG nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:45:12 compute-1 kernel: tap3de79762-7d (unregistering): left promiscuous mode
Oct 02 12:45:12 compute-1 NetworkManager[44960]: <info>  [1759409112.3359] device (tap3de79762-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:45:12 compute-1 ovn_controller[129257]: 2025-10-02T12:45:12Z|00493|binding|INFO|Releasing lport 3de79762-7d07-45e3-b66d-38b20be62257 from this chassis (sb_readonly=0)
Oct 02 12:45:12 compute-1 ovn_controller[129257]: 2025-10-02T12:45:12Z|00494|binding|INFO|Setting lport 3de79762-7d07-45e3-b66d-38b20be62257 down in Southbound
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 ovn_controller[129257]: 2025-10-02T12:45:12Z|00495|binding|INFO|Removing iface tap3de79762-7d ovn-installed in OVS
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.357 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:bf:18 10.100.0.11'], port_security=['fa:16:3e:bb:bf:18 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '26db575f-26df-4e1b-b0d8-38a12df557e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3de79762-7d07-45e3-b66d-38b20be62257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3de79762-7d07-45e3-b66d-38b20be62257 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.360 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3643647-7cd9-4c43-8aaa-9b0f3160274b
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.379 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8b1234-79d2-4c7c-bd41-01ba0c19219a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.416 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2ac38691-d430-409b-8c9e-6a082723bc1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.419 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[917e4912-84e8-4f42-b179-d3dcbe426cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct 02 12:45:12 compute-1 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000073.scope: Consumed 17.960s CPU time.
Oct 02 12:45:12 compute-1 systemd-machined[188247]: Machine qemu-57-instance-00000073 terminated.
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.446 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8292939d-31ba-42d0-9608-2c884553198c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.468 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dca59b37-d78d-4189-be2c-b6f9eda175c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3643647-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:ed:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662992, 'reachable_time': 31164, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276767, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8564c3-75c4-4ca7-858e-835fa94c0316]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663002, 'tstamp': 663002}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276770, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf3643647-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663005, 'tstamp': 663005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276770, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.494 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.499 2 INFO nova.virt.libvirt.driver [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Instance destroyed successfully.
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.499 2 DEBUG nova.objects.instance [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 26db575f-26df-4e1b-b0d8-38a12df557e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.502 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3643647-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.502 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3643647-70, col_values=(('external_ids', {'iface-id': '7b6dc1a1-1a58-45bd-84bb-97328397bf1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:12.503 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.536 2 DEBUG nova.virt.libvirt.vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:42:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-310646740',display_name='tempest-ServerActionsTestOtherA-server-310646740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-310646740',id=115,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:43:05Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-8ebo56vt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:43:05Z,user_data=None,user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=26db575f-26df-4e1b-b0d8-38a12df557e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.537 2 DEBUG nova.network.os_vif_util [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "3de79762-7d07-45e3-b66d-38b20be62257", "address": "fa:16:3e:bb:bf:18", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de79762-7d", "ovs_interfaceid": "3de79762-7d07-45e3-b66d-38b20be62257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.537 2 DEBUG nova.network.os_vif_util [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.538 2 DEBUG os_vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3de79762-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.547 2 INFO os_vif [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:bf:18,bridge_name='br-int',has_traffic_filtering=True,id=3de79762-7d07-45e3-b66d-38b20be62257,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de79762-7d')
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.665 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.666 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.666 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG oslo_concurrency.lockutils [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:12 compute-1 nova_compute[230518]: 2025-10-02 12:45:12.667 2 DEBUG nova.compute.manager [req-97a98caf-9d03-4eaf-bf20-27b19b4738b0 req-cd0a24c0-be38-4cf5-8ecb-3a97b1f2fc63 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-unplugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:45:13 compute-1 ceph-mon[80926]: pgmap v2103: 305 pgs: 305 active+clean; 602 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 243 op/s
Oct 02 12:45:13 compute-1 ceph-mon[80926]: osdmap e290: 3 total, 3 up, 3 in
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.673 2 INFO nova.virt.libvirt.driver [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deleting instance files /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3_del
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.675 2 INFO nova.virt.libvirt.driver [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deletion of /var/lib/nova/instances/26db575f-26df-4e1b-b0d8-38a12df557e3_del complete
Oct 02 12:45:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.735 2 INFO nova.compute.manager [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 1.47 seconds to destroy the instance on the hypervisor.
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.735 2 DEBUG oslo.service.loopingcall [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.736 2 DEBUG nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:45:13 compute-1 nova_compute[230518]: 2025-10-02 12:45:13.736 2 DEBUG nova.network.neutron [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:45:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2672774457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.866 2 DEBUG nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.866 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.867 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.867 2 DEBUG oslo_concurrency.lockutils [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.868 2 DEBUG nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] No waiting events found dispatching network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:14 compute-1 nova_compute[230518]: 2025-10-02 12:45:14.868 2 WARNING nova.compute.manager [req-d626f981-517b-4dd8-8a17-1425ad2599ce req-0b68d903-6e67-4e1b-b22c-8389081909b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received unexpected event network-vif-plugged-3de79762-7d07-45e3-b66d-38b20be62257 for instance with vm_state active and task_state deleting.
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.249 2 DEBUG nova.network.neutron [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.294 2 INFO nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Took 1.56 seconds to deallocate network for instance.
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.340 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.340 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.410 2 DEBUG oslo_concurrency.processutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.552 2 DEBUG nova.compute.manager [req-0c331114-ae09-4cbf-ba69-749ad88c4e48 req-3e101f11-99f7-4d51-b89b-08772969a05e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Received event network-vif-deleted-3de79762-7d07-45e3-b66d-38b20be62257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:15 compute-1 ceph-mon[80926]: pgmap v2105: 305 pgs: 305 active+clean; 509 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.4 KiB/s wr, 303 op/s
Oct 02 12:45:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/889393379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.869 2 DEBUG oslo_concurrency.processutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.876 2 DEBUG nova.compute.provider_tree [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.900 2 DEBUG nova.scheduler.client.report [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.944 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:15 compute-1 nova_compute[230518]: 2025-10-02 12:45:15.988 2 INFO nova.scheduler.client.report [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 26db575f-26df-4e1b-b0d8-38a12df557e3
Oct 02 12:45:16 compute-1 nova_compute[230518]: 2025-10-02 12:45:16.083 2 DEBUG oslo_concurrency.lockutils [None req-a1ad389b-807e-4d63-8085-8705a2c1df06 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "26db575f-26df-4e1b-b0d8-38a12df557e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:16 compute-1 ceph-mon[80926]: pgmap v2106: 305 pgs: 305 active+clean; 484 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 249 op/s
Oct 02 12:45:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3501430886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/889393379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:17 compute-1 nova_compute[230518]: 2025-10-02 12:45:17.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:17 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.337 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.338 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.339 2 INFO nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Terminating instance
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.340 2 DEBUG nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.509 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.509 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.541 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.638 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.639 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.648 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.648 2 INFO nova.compute.claims [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.866 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:18 compute-1 kernel: tap8d9cc17a-78 (unregistering): left promiscuous mode
Oct 02 12:45:18 compute-1 NetworkManager[44960]: <info>  [1759409118.9012] device (tap8d9cc17a-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:45:18 compute-1 ovn_controller[129257]: 2025-10-02T12:45:18Z|00496|binding|INFO|Releasing lport 8d9cc17a-7804-4743-925a-496d9fe78c73 from this chassis (sb_readonly=0)
Oct 02 12:45:18 compute-1 ovn_controller[129257]: 2025-10-02T12:45:18Z|00497|binding|INFO|Setting lport 8d9cc17a-7804-4743-925a-496d9fe78c73 down in Southbound
Oct 02 12:45:18 compute-1 ovn_controller[129257]: 2025-10-02T12:45:18Z|00498|binding|INFO|Removing iface tap8d9cc17a-78 ovn-installed in OVS
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.923 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:d9:d3 10.100.0.14'], port_security=['fa:16:3e:c4:d9:d3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7621a774-e0bc-4f4f-b900-c3608dd6835a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '88141e38aa2347299e7ab249431ef68c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'da6daf73-7b18-4ff6-8a16-e2a94d642e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59a86c9d-a113-4a7c-af97-5ea11dfa8c7c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d9cc17a-7804-4743-925a-496d9fe78c73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.925 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d9cc17a-7804-4743-925a-496d9fe78c73 in datapath f3643647-7cd9-4c43-8aaa-9b0f3160274b unbound from our chassis
Oct 02 12:45:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.927 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3643647-7cd9-4c43-8aaa-9b0f3160274b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:45:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.928 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7efece6d-e10b-40b1-9acc-adada4cdfdd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:18.928 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b namespace which is not needed anymore
Oct 02 12:45:18 compute-1 nova_compute[230518]: 2025-10-02 12:45:18.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:18 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct 02 12:45:18 compute-1 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d00000069.scope: Consumed 28.734s CPU time.
Oct 02 12:45:18 compute-1 systemd-machined[188247]: Machine qemu-52-instance-00000069 terminated.
Oct 02 12:45:19 compute-1 ceph-mon[80926]: pgmap v2107: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/855172294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1873613455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.180 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG oslo_concurrency.lockutils [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.181 2 DEBUG nova.compute.manager [req-c8eeff4b-7e4f-4566-9bba-e93a71ec9e54 req-a1646599-990c-4c13-92ae-4d26cce858cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-unplugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.184 2 INFO nova.virt.libvirt.driver [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Instance destroyed successfully.
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.184 2 DEBUG nova.objects.instance [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lazy-loading 'resources' on Instance uuid 7621a774-e0bc-4f4f-b900-c3608dd6835a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.207 2 DEBUG nova.virt.libvirt.vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:39:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1525238782',display_name='tempest-ServerActionsTestOtherA-server-1525238782',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1525238782',id=105,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJfI3E6popMNkSBH55JIIn+lxst+AgI5WbB+1D21g23xZC45mHZNKzJ1YzOQWfrILexv9zpuq5SLJQ8J6YEjTv4RhaLBgROGziYLwwgHom1wen0CDri217As6wNRpnqZsg==',key_name='tempest-keypair-1292637923',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:39:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='88141e38aa2347299e7ab249431ef68c',ramdisk_id='',reservation_id='r-uk3eghdh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1849713132',owner_user_name='tempest-ServerActionsTestOtherA-1849713132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:39:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='17a0940c9daf48ac8cfa6c3e56d0e39c',uuid=7621a774-e0bc-4f4f-b900-c3608dd6835a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.207 2 DEBUG nova.network.os_vif_util [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converting VIF {"id": "8d9cc17a-7804-4743-925a-496d9fe78c73", "address": "fa:16:3e:c4:d9:d3", "network": {"id": "f3643647-7cd9-4c43-8aaa-9b0f3160274b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-497044539-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "88141e38aa2347299e7ab249431ef68c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d9cc17a-78", "ovs_interfaceid": "8d9cc17a-7804-4743-925a-496d9fe78c73", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.208 2 DEBUG nova.network.os_vif_util [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.208 2 DEBUG os_vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9cc17a-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.216 2 INFO os_vif [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:d9:d3,bridge_name='br-int',has_traffic_filtering=True,id=8d9cc17a-7804-4743-925a-496d9fe78c73,network=Network(f3643647-7cd9-4c43-8aaa-9b0f3160274b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d9cc17a-78')
Oct 02 12:45:19 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : haproxy version is 2.8.14-c23fe91
Oct 02 12:45:19 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [NOTICE]   (271019) : path to executable is /usr/sbin/haproxy
Oct 02 12:45:19 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [WARNING]  (271019) : Exiting Master process...
Oct 02 12:45:19 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [ALERT]    (271019) : Current worker (271021) exited with code 143 (Terminated)
Oct 02 12:45:19 compute-1 neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b[271015]: [WARNING]  (271019) : All workers exited. Exiting... (0)
Oct 02 12:45:19 compute-1 systemd[1]: libpod-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope: Deactivated successfully.
Oct 02 12:45:19 compute-1 podman[276848]: 2025-10-02 12:45:19.398136293 +0000 UTC m=+0.367280881 container died 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:45:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3012165453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.575 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.581 2 DEBUG nova.compute.provider_tree [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.603 2 DEBUG nova.scheduler.client.report [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.627 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.628 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.674 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.675 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.695 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.714 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:45:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:19 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-e6ac23ed22d2d8032f0cbc3084da7e2a93ca5dafb47c834f98bbf243d6c598e7-merged.mount: Deactivated successfully.
Oct 02 12:45:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:45:19 compute-1 podman[276848]: 2025-10-02 12:45:19.793002958 +0000 UTC m=+0.762147596 container cleanup 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:45:19 compute-1 systemd[1]: libpod-conmon-3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d.scope: Deactivated successfully.
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.816 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.818 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.818 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating image(s)
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.852 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.888 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.919 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:19 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.924 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:19.999 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.000 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.001 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.001 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.028 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.032 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.384 2 DEBUG nova.policy [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b168e90f7c0c414ba26c576fb8706a80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:45:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/738593402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3012165453' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:20 compute-1 podman[276927]: 2025-10-02 12:45:20.646040913 +0000 UTC m=+0.827559344 container remove 3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.652 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5688db-68fd-4dca-97a3-53573dffec9f]: (4, ('Thu Oct  2 12:45:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d)\n3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d\nThu Oct  2 12:45:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b (3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d)\n3949a474b032a55f43dcd6459cc53a9429ab87c06d77bed27f8e8426cf75272d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.654 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11de863f-d61d-45df-8e07-59d6e660f186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.655 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3643647-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:20 compute-1 kernel: tapf3643647-70: left promiscuous mode
Oct 02 12:45:20 compute-1 nova_compute[230518]: 2025-10-02 12:45:20.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.676 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e98c51c7-a3c9-43ee-a93b-385e30919554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.707 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[905d93d8-ff7b-4bb7-a2ed-722015207ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.709 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91267aeb-ec27-4eac-8d6c-05d50ddb5287]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.724 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c86fac-dba6-4cd8-bcd1-5f47ab7a7f83]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662984, 'reachable_time': 34933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277032, 'error': None, 'target': 'ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:20 compute-1 systemd[1]: run-netns-ovnmeta\x2df3643647\x2d7cd9\x2d4c43\x2d8aaa\x2d9b0f3160274b.mount: Deactivated successfully.
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.729 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3643647-7cd9-4c43-8aaa-9b0f3160274b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:45:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:20.729 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[99ae6db7-29c6-4fdd-9d23-0a11ac01f4e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.273 2 DEBUG nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.274 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.274 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.275 2 DEBUG oslo_concurrency.lockutils [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.275 2 DEBUG nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] No waiting events found dispatching network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.275 2 WARNING nova.compute.manager [req-08ae3044-aeb0-458d-a594-b16f6034fd46 req-47170fd7-7ea3-4cf0-bb53-c25cbaa0e4fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received unexpected event network-vif-plugged-8d9cc17a-7804-4743-925a-496d9fe78c73 for instance with vm_state active and task_state deleting.
Oct 02 12:45:21 compute-1 nova_compute[230518]: 2025-10-02 12:45:21.415 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Successfully created port: 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:45:21 compute-1 ceph-mon[80926]: pgmap v2108: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 255 op/s
Oct 02 12:45:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:21 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.111 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.183 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] resizing rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.280 2 DEBUG nova.objects.instance [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.297 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.297 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Ensure instance console log exists: /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:22 compute-1 nova_compute[230518]: 2025-10-02 12:45:22.298 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2690838680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:22 compute-1 podman[277109]: 2025-10-02 12:45:22.806075655 +0000 UTC m=+0.057245811 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 02 12:45:22 compute-1 podman[277110]: 2025-10-02 12:45:22.810315369 +0000 UTC m=+0.055120565 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.088 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Successfully updated port: 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.104 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.104 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.105 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.164 2 DEBUG nova.compute.manager [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.164 2 DEBUG nova.compute.manager [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.165 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.239 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:45:23 compute-1 nova_compute[230518]: 2025-10-02 12:45:23.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:23 compute-1 ceph-mon[80926]: pgmap v2109: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 223 op/s
Oct 02 12:45:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/96683948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:23.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:24 compute-1 nova_compute[230518]: 2025-10-02 12:45:24.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:25 compute-1 ceph-mon[80926]: pgmap v2110: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.1 MiB/s wr, 251 op/s
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.353 2 DEBUG nova.network.neutron [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.373 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance network_info: |[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.374 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.378 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start _get_guest_xml network_info=[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.384 2 WARNING nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.390 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.392 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.399 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.400 2 DEBUG nova.virt.libvirt.host [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.401 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.402 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.403 2 DEBUG nova.virt.hardware [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.405 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:45:25 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:45:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3855621067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.856 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.881 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:25 compute-1 nova_compute[230518]: 2025-10-02 12:45:25.885 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.941 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.054 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.092 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.112 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.112 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Image id 423b8b5f-aab8-418b-8fad-d82c90818bdd yields fingerprint 472c3cad2e339908bc4a8cea12fc22c04fcd93b6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.113 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): checking
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.113 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] image 423b8b5f-aab8-418b-8fad-d82c90818bdd at (/var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.114 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.115 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] 7621a774-e0bc-4f4f-b900-c3608dd6835a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.115 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.115 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.115 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Active base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.115 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Removable base files: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.116 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.116 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.116 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 12:45:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3855621067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.945 2 INFO nova.virt.libvirt.driver [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deleting instance files /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a_del
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.946 2 INFO nova.virt.libvirt.driver [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deletion of /var/lib/nova/instances/7621a774-e0bc-4f4f-b900-c3608dd6835a_del complete
Oct 02 12:45:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:45:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3442325023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.984 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.986 2 DEBUG nova.virt.libvirt.vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.987 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.988 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:26 compute-1 nova_compute[230518]: 2025-10-02 12:45:26.989 2 DEBUG nova.objects.instance [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.041 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <uuid>c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</uuid>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <name>instance-0000007b</name>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1940292897</nova:name>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:45:25</nova:creationTime>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <nova:port uuid="241d570e-8eb4-4d2a-986b-b37fbcb780a9">
Oct 02 12:45:27 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <system>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="serial">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="uuid">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </system>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <os>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </os>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <features>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </features>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk">
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </source>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config">
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </source>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:45:27 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:2d:9b:0c"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <target dev="tap241d570e-8e"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log" append="off"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <video>
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </video>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:45:27 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:45:27 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:45:27 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:45:27 compute-1 nova_compute[230518]: </domain>
Oct 02 12:45:27 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Preparing to wait for external event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.042 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.043 2 DEBUG nova.virt.libvirt.vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.043 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.044 2 DEBUG nova.network.os_vif_util [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.044 2 DEBUG os_vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.045 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap241d570e-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap241d570e-8e, col_values=(('external_ids', {'iface-id': '241d570e-8eb4-4d2a-986b-b37fbcb780a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:9b:0c', 'vm-uuid': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:27 compute-1 NetworkManager[44960]: <info>  [1759409127.0515] manager: (tap241d570e-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.058 2 INFO os_vif [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e')
Oct 02 12:45:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:27 compute-1 ceph-mon[80926]: pgmap v2111: 305 pgs: 305 active+clean; 496 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 7.0 MiB/s wr, 279 op/s
Oct 02 12:45:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3442325023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.092 2 INFO nova.compute.manager [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 8.75 seconds to destroy the instance on the hypervisor.
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.093 2 DEBUG oslo.service.loopingcall [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.093 2 DEBUG nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.094 2 DEBUG nova.network.neutron [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.276 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.277 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Using config drive
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.297 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.498 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409112.4961956, 26db575f-26df-4e1b-b0d8-38a12df557e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.499 2 INFO nova.compute.manager [-] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] VM Stopped (Lifecycle Event)
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.621 2 DEBUG nova.compute.manager [None req-d8345a56-da49-44d6-94c8-0be86c203185 - - - - - -] [instance: 26db575f-26df-4e1b-b0d8-38a12df557e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.678 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating config drive at /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.683 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptw3gwjbt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.709 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.709 2 DEBUG nova.network.neutron [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:27.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.793 2 DEBUG oslo_concurrency.lockutils [req-0ba674e5-aab7-4fff-a1a0-fe02ed9f0b8a req-a9f045ec-4ff3-4189-a701-21e84d14ceb2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.815 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptw3gwjbt" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.840 2 DEBUG nova.storage.rbd_utils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:27 compute-1 nova_compute[230518]: 2025-10-02 12:45:27.844 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 e291: 3 total, 3 up, 3 in
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.484 2 DEBUG nova.network.neutron [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.519 2 INFO nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Took 1.43 seconds to deallocate network for instance.
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.567 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.568 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.633 2 DEBUG oslo_concurrency.processutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.875 2 DEBUG oslo_concurrency.processutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.877 2 INFO nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting local config drive /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config because it was imported into RBD.
Oct 02 12:45:28 compute-1 kernel: tap241d570e-8e: entered promiscuous mode
Oct 02 12:45:28 compute-1 NetworkManager[44960]: <info>  [1759409128.9330] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Oct 02 12:45:28 compute-1 ovn_controller[129257]: 2025-10-02T12:45:28Z|00499|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct 02 12:45:28 compute-1 ovn_controller[129257]: 2025-10-02T12:45:28Z|00500|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.945 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.946 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.947 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:45:28 compute-1 ovn_controller[129257]: 2025-10-02T12:45:28Z|00501|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct 02 12:45:28 compute-1 ovn_controller[129257]: 2025-10-02T12:45:28Z|00502|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:28 compute-1 nova_compute[230518]: 2025-10-02 12:45:28.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79133cd9-539d-4cd3-81d5-ca5e3f88a372]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.970 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:45:28 compute-1 systemd-udevd[277306]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.973 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.973 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c59dfd8b-d351-4504-8457-21e574e5edfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.974 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb686e15-190b-4e80-99e1-d25ddc7bc50d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:28 compute-1 systemd-machined[188247]: New machine qemu-59-instance-0000007b.
Oct 02 12:45:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:28.987 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e63eeda1-4f3c-4945-af0d-4d230208c31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:28 compute-1 NetworkManager[44960]: <info>  [1759409128.9910] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:45:28 compute-1 NetworkManager[44960]: <info>  [1759409128.9925] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:45:28 compute-1 systemd[1]: Started Virtual Machine qemu-59-instance-0000007b.
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.014 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80229201-eafb-4983-ac7f-cd0d0797256a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.056 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d172a3-ad1b-40b6-bab8-614b15e4401b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 NetworkManager[44960]: <info>  [1759409129.0680] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.070 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d12cee5-c072-46cd-acf5-6c62256df4d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.112 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca7b83d-4a05-47ee-b3ce-b29498a261cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.115 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a52b4197-74b5-4a1f-a8cf-a4e1035538fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ceph-mon[80926]: pgmap v2112: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 6.2 MiB/s wr, 357 op/s
Oct 02 12:45:29 compute-1 ceph-mon[80926]: osdmap e291: 3 total, 3 up, 3 in
Oct 02 12:45:29 compute-1 NetworkManager[44960]: <info>  [1759409129.1440] device (tapf3934261-b0): carrier: link connected
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.153 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9425eca5-4c12-4a23-bf0f-c8fdb8cdd153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:45:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675489831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.176 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b779b26d-9a0c-43ab-9d74-effa826084c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699270, 'reachable_time': 29481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277342, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.191 2 DEBUG nova.compute.manager [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.192 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.193 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.193 2 DEBUG oslo_concurrency.lockutils [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.194 2 DEBUG nova.compute.manager [req-ccf56dc8-a984-4c0c-9cd6-f682bc50d9cf req-049511d6-60a4-4426-a9ed-aeed17f70e09 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Processing event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11a1c233-5a4c-4f1c-b128-a4b06c27b970]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699270, 'tstamp': 699270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277357, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.207 2 DEBUG oslo_concurrency.processutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.213 2 DEBUG nova.compute.provider_tree [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3897a1a5-65bf-4ca2-93e1-70e0daf9a2f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699270, 'reachable_time': 29481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277359, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.253 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1e3026-080d-4461-b880-86dc32567c30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.273 2 DEBUG nova.scheduler.client.report [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[07f43dc9-7b53-49d0-8273-37be13cc0799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.308 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.309 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:29 compute-1 NetworkManager[44960]: <info>  [1759409129.3112] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Oct 02 12:45:29 compute-1 kernel: tapf3934261-b0: entered promiscuous mode
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.318 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.320 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:45:29 compute-1 ovn_controller[129257]: 2025-10-02T12:45:29Z|00503|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.322 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7066670-db1d-4a12-825c-d2b417d47fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.323 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:45:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:29.324 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.356 2 INFO nova.scheduler.client.report [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Deleted allocations for instance 7621a774-e0bc-4f4f-b900-c3608dd6835a
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.498 2 DEBUG nova.compute.manager [req-f8302356-0969-4975-ad45-cc8df68875f1 req-66c9cf76-c79c-4ea8-bf83-e6afbee318dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Received event network-vif-deleted-8d9cc17a-7804-4743-925a-496d9fe78c73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.506 2 DEBUG oslo_concurrency.lockutils [None req-7459ddf2-c4d2-443e-8a01-860d45b45959 17a0940c9daf48ac8cfa6c3e56d0e39c 88141e38aa2347299e7ab249431ef68c - - default default] Lock "7621a774-e0bc-4f4f-b900-c3608dd6835a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:29 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.789 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.7892838, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.790 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.791 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.794 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:45:29 compute-1 podman[277416]: 2025-10-02 12:45:29.70786606 +0000 UTC m=+0.029881981 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.798 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance spawned successfully.
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.798 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.817 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.823 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.827 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.828 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.829 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.829 2 DEBUG nova.virt.libvirt.driver [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.871 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.871 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.7894123, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.871 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Paused (Lifecycle Event)
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.900 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409129.793948, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.904 2 INFO nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 10.09 seconds to spawn the instance on the hypervisor.
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.904 2 DEBUG nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.933 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.955 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.970 2 INFO nova.compute.manager [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 11.36 seconds to build instance.
Oct 02 12:45:29 compute-1 nova_compute[230518]: 2025-10-02 12:45:29.998 2 DEBUG oslo_concurrency.lockutils [None req-849d8899-81a3-42fd-8b69-82befac1b10d b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:30 compute-1 podman[277416]: 2025-10-02 12:45:30.275451077 +0000 UTC m=+0.597467008 container create 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:45:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3675489831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:30 compute-1 systemd[1]: Started libpod-conmon-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope.
Oct 02 12:45:30 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:45:30 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0cc289efa4928f4df71ab8c55274a5960f97d161eaafb88ba0444aad755767/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:45:30 compute-1 podman[277416]: 2025-10-02 12:45:30.631234764 +0000 UTC m=+0.953250695 container init 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:45:30 compute-1 podman[277416]: 2025-10-02 12:45:30.637721599 +0000 UTC m=+0.959737500 container start 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:45:30 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : New worker (277437) forked
Oct 02 12:45:30 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : Loading success.
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.293 2 DEBUG nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.294 2 DEBUG oslo_concurrency.lockutils [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.295 2 DEBUG nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:31 compute-1 nova_compute[230518]: 2025-10-02 12:45:31.295 2 WARNING nova.compute.manager [req-6af84cec-0b13-428b-9211-cf792d5bba4f req-1c962b6a-e893-4981-9c52-65ddccb7b9fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state None.
Oct 02 12:45:31 compute-1 ceph-mon[80926]: pgmap v2114: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 6.8 MiB/s wr, 368 op/s
Oct 02 12:45:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:31 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:32 compute-1 nova_compute[230518]: 2025-10-02 12:45:32.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:32 compute-1 ceph-mon[80926]: pgmap v2115: 305 pgs: 305 active+clean; 449 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.3 MiB/s wr, 378 op/s
Oct 02 12:45:33 compute-1 nova_compute[230518]: 2025-10-02 12:45:33.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:33 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:33.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.182 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409119.1768508, 7621a774-e0bc-4f4f-b900-c3608dd6835a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.183 2 INFO nova.compute.manager [-] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] VM Stopped (Lifecycle Event)
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.211 2 DEBUG nova.compute.manager [None req-71181f9b-aca8-4b1f-b9f3-9bc5b9968ea5 - - - - - -] [instance: 7621a774-e0bc-4f4f-b900-c3608dd6835a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG nova.compute.manager [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG nova.compute.manager [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.262 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.263 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.263 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:45:34 compute-1 ovn_controller[129257]: 2025-10-02T12:45:34Z|00504|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:45:34 compute-1 nova_compute[230518]: 2025-10-02 12:45:34.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:35 compute-1 ceph-mon[80926]: pgmap v2116: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.1 MiB/s wr, 360 op/s
Oct 02 12:45:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:35 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:35 compute-1 nova_compute[230518]: 2025-10-02 12:45:35.873 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:45:35 compute-1 nova_compute[230518]: 2025-10-02 12:45:35.874 2 DEBUG nova.network.neutron [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:35 compute-1 nova_compute[230518]: 2025-10-02 12:45:35.894 2 DEBUG oslo_concurrency.lockutils [req-574db175-3788-4744-98e8-75fe4f6bded2 req-477a27ce-4d13-4245-b89f-25eaef86c1af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:37 compute-1 nova_compute[230518]: 2025-10-02 12:45:37.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:37 compute-1 ceph-mon[80926]: pgmap v2117: 305 pgs: 305 active+clean; 422 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 623 KiB/s wr, 254 op/s
Oct 02 12:45:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 e292: 3 total, 3 up, 3 in
Oct 02 12:45:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:37 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:38 compute-1 nova_compute[230518]: 2025-10-02 12:45:38.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:38 compute-1 ceph-mon[80926]: osdmap e292: 3 total, 3 up, 3 in
Oct 02 12:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 39K writes, 154K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.81 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9106 writes, 36K keys, 9106 commit groups, 1.0 writes per commit group, ingest: 36.49 MB, 0.06 MB/s
                                           Interval WAL: 9105 writes, 3449 syncs, 2.64 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:45:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:39.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:39 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:39.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:39 compute-1 ceph-mon[80926]: pgmap v2119: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Oct 02 12:45:41 compute-1 ceph-mon[80926]: pgmap v2120: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Oct 02 12:45:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:45:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:41.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:45:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:41 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:41.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:41 compute-1 podman[277448]: 2025-10-02 12:45:41.884656646 +0000 UTC m=+0.115173523 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:45:41 compute-1 podman[277447]: 2025-10-02 12:45:41.893247696 +0000 UTC m=+0.135028206 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 02 12:45:42 compute-1 nova_compute[230518]: 2025-10-02 12:45:42.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:42 compute-1 ceph-mon[80926]: pgmap v2121: 305 pgs: 305 active+clean; 444 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 185 op/s
Oct 02 12:45:43 compute-1 nova_compute[230518]: 2025-10-02 12:45:43.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:43 compute-1 nova_compute[230518]: 2025-10-02 12:45:43.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:43 compute-1 ovn_controller[129257]: 2025-10-02T12:45:43Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:45:43 compute-1 ovn_controller[129257]: 2025-10-02T12:45:43Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:45:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:43.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:45 compute-1 ceph-mon[80926]: pgmap v2122: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.9 MiB/s wr, 141 op/s
Oct 02 12:45:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:45 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:45.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:47 compute-1 nova_compute[230518]: 2025-10-02 12:45:47.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:47 compute-1 ceph-mon[80926]: pgmap v2123: 305 pgs: 305 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 150 op/s
Oct 02 12:45:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:47 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:48 compute-1 nova_compute[230518]: 2025-10-02 12:45:48.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:48 compute-1 nova_compute[230518]: 2025-10-02 12:45:48.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:49 compute-1 ceph-mon[80926]: pgmap v2124: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.6 MiB/s wr, 173 op/s
Oct 02 12:45:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:49.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:49 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:49 compute-1 nova_compute[230518]: 2025-10-02 12:45:49.984 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:49 compute-1 nova_compute[230518]: 2025-10-02 12:45:49.985 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.005 2 DEBUG nova.objects.instance [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.048 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.232 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.233 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.234 2 INFO nova.compute.manager [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attaching volume 9405efbb-874d-467a-93c3-bbd76870d422 to /dev/vdb
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.359 2 DEBUG os_brick.utils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.362 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.379 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.379 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[2275a5af-1526-4dfe-91cb-27374444f029]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.380 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.392 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.392 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6c413a-5869-43c6-9064-c85ea98f1942]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.396 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.407 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.407 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5d080bf8-d121-4c44-bdd1-53060063c600]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.409 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[614bccfe-ef19-4ea5-9f6f-3f8fe7389d52]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.411 2 DEBUG oslo_concurrency.processutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.464 2 DEBUG oslo_concurrency.processutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "nvme version" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.469 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.470 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.471 2 DEBUG os_brick.initiator.connectors.lightos [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.471 2 DEBUG os_brick.utils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] <== get_connector_properties: return (111ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:45:50 compute-1 nova_compute[230518]: 2025-10-02 12:45:50.472 2 DEBUG nova.virt.block_device [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating existing volume attachment record: 8795165d-376d-4129-82b5-208bafc1acc5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:45:50 compute-1 ceph-mon[80926]: pgmap v2125: 305 pgs: 305 active+clean; 489 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 3.0 MiB/s wr, 129 op/s
Oct 02 12:45:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3365978155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.235 2 DEBUG nova.objects.instance [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.256 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to attach volume 9405efbb-874d-467a-93c3-bbd76870d422 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.260 2 DEBUG nova.virt.libvirt.guest [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:45:51 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct 02 12:45:51 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   </source>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:45:51 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:45:51 compute-1 nova_compute[230518]:   <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct 02 12:45:51 compute-1 nova_compute[230518]: </disk>
Oct 02 12:45:51 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.463 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.464 2 DEBUG nova.virt.libvirt.driver [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:45:51 compute-1 nova_compute[230518]: 2025-10-02 12:45:51.664 2 DEBUG oslo_concurrency.lockutils [None req-9f2209ec-f6e8-4866-a344-2f17c4f8f2e2 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:51.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/368511100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:45:52 compute-1 nova_compute[230518]: 2025-10-02 12:45:52.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:52 compute-1 nova_compute[230518]: 2025-10-02 12:45:52.918 2 INFO nova.compute.manager [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Rescuing
Oct 02 12:45:52 compute-1 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:45:52 compute-1 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:45:52 compute-1 nova_compute[230518]: 2025-10-02 12:45:52.918 2 DEBUG nova.network.neutron [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:45:52 compute-1 ceph-mon[80926]: pgmap v2126: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 961 KiB/s rd, 3.1 MiB/s wr, 168 op/s
Oct 02 12:45:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2094384174' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:53 compute-1 nova_compute[230518]: 2025-10-02 12:45:53.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:53 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:53.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:53.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:53 compute-1 podman[277519]: 2025-10-02 12:45:53.831310972 +0000 UTC m=+0.065300625 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:45:53 compute-1 podman[277518]: 2025-10-02 12:45:53.840136699 +0000 UTC m=+0.073409759 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:45:54 compute-1 nova_compute[230518]: 2025-10-02 12:45:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:45:54 compute-1 nova_compute[230518]: 2025-10-02 12:45:54.863 2 DEBUG nova.network.neutron [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:45:54 compute-1 nova_compute[230518]: 2025-10-02 12:45:54.880 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:45:55 compute-1 nova_compute[230518]: 2025-10-02 12:45:55.132 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:45:55 compute-1 ceph-mon[80926]: pgmap v2127: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Oct 02 12:45:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:55.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:55 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:55.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/328033079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:45:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:45:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1202705781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:45:57 compute-1 nova_compute[230518]: 2025-10-02 12:45:57.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:45:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:57 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:45:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:45:57 compute-1 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct 02 12:45:57 compute-1 NetworkManager[44960]: <info>  [1759409157.8590] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:45:57 compute-1 ovn_controller[129257]: 2025-10-02T12:45:57Z|00505|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct 02 12:45:57 compute-1 ovn_controller[129257]: 2025-10-02T12:45:57Z|00506|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct 02 12:45:57 compute-1 ovn_controller[129257]: 2025-10-02T12:45:57Z|00507|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct 02 12:45:57 compute-1 nova_compute[230518]: 2025-10-02 12:45:57.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:57 compute-1 nova_compute[230518]: 2025-10-02 12:45:57.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.879 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:45:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.881 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:45:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.884 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:45:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.886 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fcad731b-9944-4bf3-a099-2c924be1e448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:45:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:45:57.887 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore
Oct 02 12:45:57 compute-1 nova_compute[230518]: 2025-10-02 12:45:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:57 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 02 12:45:57 compute-1 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007b.scope: Consumed 14.416s CPU time.
Oct 02 12:45:57 compute-1 systemd-machined[188247]: Machine qemu-59-instance-0000007b terminated.
Oct 02 12:45:58 compute-1 ceph-mon[80926]: pgmap v2128: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 1.9 MiB/s wr, 104 op/s
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.148 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance shutdown successfully after 3 seconds.
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.155 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.156 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.176 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting rescue
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.177 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.182 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.183 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating image(s)
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : haproxy version is 2.8.14-c23fe91
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [NOTICE]   (277435) : path to executable is /usr/sbin/haproxy
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : Exiting Master process...
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : Exiting Master process...
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [ALERT]    (277435) : Current worker (277437) exited with code 143 (Terminated)
Oct 02 12:45:58 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[277431]: [WARNING]  (277435) : All workers exited. Exiting... (0)
Oct 02 12:45:58 compute-1 systemd[1]: libpod-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope: Deactivated successfully.
Oct 02 12:45:58 compute-1 podman[277580]: 2025-10-02 12:45:58.536698771 +0000 UTC m=+0.556609174 container died 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.866 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.871 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.881 2 DEBUG nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.881 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.882 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.882 2 DEBUG oslo_concurrency.lockutils [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.883 2 DEBUG nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:45:58 compute-1 nova_compute[230518]: 2025-10-02 12:45:58.883 2 WARNING nova.compute.manager [req-99232f9d-0a3d-438a-a00c-0b08a49babc0 req-b6aec4cb-625c-465c-82b5-0024e8ee8265 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.088 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.112 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.116 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:59 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3-userdata-shm.mount: Deactivated successfully.
Oct 02 12:45:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-df0cc289efa4928f4df71ab8c55274a5960f97d161eaafb88ba0444aad755767-merged.mount: Deactivated successfully.
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.197 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.199 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.200 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.200 2 DEBUG oslo_concurrency.lockutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:45:59 compute-1 ceph-mon[80926]: pgmap v2129: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.5 MiB/s wr, 108 op/s
Oct 02 12:45:59 compute-1 podman[277580]: 2025-10-02 12:45:59.326084933 +0000 UTC m=+1.345995346 container cleanup 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:45:59 compute-1 systemd[1]: libpod-conmon-95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3.scope: Deactivated successfully.
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.359 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:45:59 compute-1 nova_compute[230518]: 2025-10-02 12:45:59.365 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:45:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:45:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:45:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:45:59.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:45:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:45:59 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:45:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:00 compute-1 podman[277690]: 2025-10-02 12:46:00.206329782 +0000 UTC m=+0.837962231 container remove 95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.217 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dae729a-a293-44bf-a8bd-de0f7303109a]: (4, ('Thu Oct  2 12:45:57 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3)\n95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3\nThu Oct  2 12:45:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3)\n95fdf9808758ea229924549ee290b7685ecaf3c3f7af930d80a1ea572cfec4a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.219 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[50eca9af-2f5d-4701-b3fb-a34f94314950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.221 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:00 compute-1 kernel: tapf3934261-b0: left promiscuous mode
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.257 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b0d2d6-5d08-4134-9d8e-2ae798dce630]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[75da7018-e895-43a7-aee8-7b662084aded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.283 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d98974af-7fc7-4aca-a4f7-2e6fc15e02f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.300 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ce9866-5207-4c87-a1ca-2374fe09ef54]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699259, 'reachable_time': 35117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277733, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.302 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:00.303 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28ea92-085d-4673-b465-a354184aea58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:00 compute-1 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.542 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.543 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'migration_context' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.559 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.560 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start _get_guest_xml network_info=[{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.560 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.579 2 DEBUG nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG oslo_concurrency.lockutils [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.580 2 DEBUG nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.581 2 WARNING nova.compute.manager [req-f1ae5818-4dfa-4f00-aeba-fac24e358f16 req-ca24ca8e-66d6-458b-94f3-e61486100e67 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.581 2 WARNING nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.586 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.587 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.589 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.590 2 DEBUG nova.virt.libvirt.host [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.590 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.591 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.virt.hardware [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.592 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:00 compute-1 nova_compute[230518]: 2025-10-02 12:46:00.612 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/199098813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.135 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.137 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.167 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.168 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.168 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.169 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.169 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:01 compute-1 ceph-mon[80926]: pgmap v2130: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 43 KiB/s wr, 72 op/s
Oct 02 12:46:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/199098813' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3014393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.596 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.597 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/459826867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.630 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.697 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:46:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:01 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:01.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.858 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.860 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4363MB free_disk=20.781219482421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.860 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.861 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:46:01 compute-1 nova_compute[230518]: 2025-10-02 12:46:01.982 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:46:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:46:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/626934885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.130 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.132 2 DEBUG nova.virt.libvirt.vif [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:45:29Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:45:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.132 2 DEBUG nova.network.os_vif_util [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "vif_mac": "fa:16:3e:2d:9b:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.133 2 DEBUG nova.network.os_vif_util [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.134 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'pci_devices' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.135 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:46:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.165 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <uuid>c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</uuid>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <name>instance-0000007b</name>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1940292897</nova:name>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:46:00</nova:creationTime>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:user uuid="b168e90f7c0c414ba26c576fb8706a80">tempest-ServerRescueNegativeTestJSON-488939839-project-member</nova:user>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:project uuid="c87621e5c0ba4f13abfff528143c1c00">tempest-ServerRescueNegativeTestJSON-488939839</nova:project>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <nova:port uuid="241d570e-8eb4-4d2a-986b-b37fbcb780a9">
Oct 02 12:46:02 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <system>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="serial">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="uuid">c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </system>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <os>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </os>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <features>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </features>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.rescue">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:46:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:2d:9b:0c"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <target dev="tap241d570e-8e"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/console.log" append="off"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <video>
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </video>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:46:02 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:46:02 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:46:02 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:46:02 compute-1 nova_compute[230518]: </domain>
Oct 02 12:46:02 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.172 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.185 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.185 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.199 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.222 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.245 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.245 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.246 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.246 2 DEBUG nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] No VIF found with MAC fa:16:3e:2d:9b:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.246 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Using config drive
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.269 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.314 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.347 2 DEBUG nova.objects.instance [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'keypairs' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3014393884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/459826867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3153811059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/626934885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811187702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.783 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.789 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.818 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.859 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.860 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.918 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Creating config drive at /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue
Oct 02 12:46:02 compute-1 nova_compute[230518]: 2025-10-02 12:46:02.931 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylgsn0hh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.094 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpylgsn0hh" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.126 2 DEBUG nova.storage.rbd_utils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] rbd image c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.130 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:03 compute-1 ceph-mon[80926]: pgmap v2131: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct 02 12:46:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2267448461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1811187702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:03.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.960 2 DEBUG oslo_concurrency.processutils [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.830s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:03 compute-1 nova_compute[230518]: 2025-10-02 12:46:03.961 2 INFO nova.virt.libvirt.driver [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting local config drive /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f/disk.config.rescue because it was imported into RBD.
Oct 02 12:46:04 compute-1 kernel: tap241d570e-8e: entered promiscuous mode
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.0131] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Oct 02 12:46:04 compute-1 ovn_controller[129257]: 2025-10-02T12:46:04Z|00508|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct 02 12:46:04 compute-1 ovn_controller[129257]: 2025-10-02T12:46:04Z|00509|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 systemd-machined[188247]: New machine qemu-60-instance-0000007b.
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.0443] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.0447] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.045 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.046 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 bound to our chassis
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.047 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:04 compute-1 systemd[1]: Started Virtual Machine qemu-60-instance-0000007b.
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.058 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ceac6211-c60d-4153-8d73-c1db091e3a23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.059 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.063 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.063 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c439f6f2-d4b7-43c8-99f4-3c705256ead5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 systemd-udevd[277918]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.066 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2165e13e-3279-40d8-82fd-2d137c2e246b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.0786] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.077 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[026e9c50-05f0-4cc1-be68-0949fabc473b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.0803] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.104 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b71b21-814a-4c34-9509-f5bdabaabc3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.136 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48946bf0-7434-4bb9-a692-3aaca26ba811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.142 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcc12a2-c051-45e3-8f5b-ae0149626abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.1438] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/239)
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.174 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bfa173-4350-4c31-b73e-fe3aeb30a2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.177 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[86b32976-8c44-49e6-af7c-6443d50fa501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.2069] device (tapf3934261-b0): carrier: link connected
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.212 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8b0d0-1a05-44a0-bd01-1a4b8a55beac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.235 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[baf8e688-ad7c-405b-8d83-6028700623a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702776, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277950, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2f0354-844e-4d49-b14b-17cd20d39531]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702776, 'tstamp': 702776}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277951, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_controller[129257]: 2025-10-02T12:46:04Z|00510|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct 02 12:46:04 compute-1 ovn_controller[129257]: 2025-10-02T12:46:04Z|00511|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.274 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13d31c08-afa8-4b3f-a4c3-90fd175a3b20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 156], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702776, 'reachable_time': 30594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 277952, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.310 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84455169-edae-4a0d-9b22-716765a95926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.372 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f10dcb16-08f1-4e7d-90f7-38a11169c81e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.374 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.374 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.375 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 kernel: tapf3934261-b0: entered promiscuous mode
Oct 02 12:46:04 compute-1 NetworkManager[44960]: <info>  [1759409164.3773] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.379 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 ovn_controller[129257]: 2025-10-02T12:46:04Z|00512|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.393 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.394 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12a2ae3a-a795-44c3-ae71-b2bb5a78ea2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.395 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:46:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:04.396 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.532 2 DEBUG nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.533 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.533 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.534 2 DEBUG oslo_concurrency.lockutils [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.534 2 DEBUG nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.534 2 WARNING nova.compute.manager [req-2ca67147-0219-43ef-8e83-23ec0f3f8eaa req-00bb7f1e-accb-4736-b6c4-804eeb57a8c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state rescuing.
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.842 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:04 compute-1 nova_compute[230518]: 2025-10-02 12:46:04.843 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:04 compute-1 podman[277991]: 2025-10-02 12:46:04.819771779 +0000 UTC m=+0.034020120 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:46:05 compute-1 ceph-mon[80926]: pgmap v2132: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 3.6 MiB/s wr, 75 op/s
Oct 02 12:46:05 compute-1 podman[277991]: 2025-10-02 12:46:05.506497653 +0000 UTC m=+0.720746004 container create eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.755 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.756 2 INFO nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:46:05 compute-1 nova_compute[230518]: 2025-10-02 12:46:05.756 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:05.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:05 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:05.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:05 compute-1 systemd[1]: Started libpod-conmon-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope.
Oct 02 12:46:05 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:46:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0fe032c883f01cbe2c0fc7a6c92461065f29d5017d7d4b7bb4ad0ea07ca86a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:05 compute-1 podman[277991]: 2025-10-02 12:46:05.977635438 +0000 UTC m=+1.191883809 container init eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:46:05 compute-1 podman[277991]: 2025-10-02 12:46:05.983129221 +0000 UTC m=+1.197377562 container start eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 12:46:06 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : New worker (278065) forked
Oct 02 12:46:06 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : Loading success.
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:46:06 compute-1 sudo[278074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:06 compute-1 sudo[278074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-1 sudo[278074]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.181 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.181 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409166.1806834, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.182 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.187 2 DEBUG nova.compute.manager [None req-e5d7a99d-b07f-4fe9-8bc1-73a2ae255dae b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:06 compute-1 sudo[278099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:06 compute-1 sudo[278099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-1 sudo[278099]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.210 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.213 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:06 compute-1 sudo[278124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:06 compute-1 sudo[278124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-1 sudo[278124]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1982039176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1696941197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3523870997' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/626095779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.306 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.306 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409166.1821876, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.307 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)
Oct 02 12:46:06 compute-1 sudo[278149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:46:06 compute-1 sudo[278149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.359 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.362 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.752 2 DEBUG nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.753 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.753 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.754 2 DEBUG oslo_concurrency.lockutils [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.754 2 DEBUG nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:06 compute-1 nova_compute[230518]: 2025-10-02 12:46:06.755 2 WARNING nova.compute.manager [req-90fd006f-45c0-4eeb-839d-13b3bed4d735 req-6a545d27-bf49-48fc-81aa-e3e77beb2fe7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state None.
Oct 02 12:46:06 compute-1 sudo[278149]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-1 sudo[278206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:07 compute-1 sudo[278206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-1 sudo[278206]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-1 nova_compute[230518]: 2025-10-02 12:46:07.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:07 compute-1 sudo[278231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:46:07 compute-1 sudo[278231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-1 sudo[278231]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:07 compute-1 sudo[278256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:07 compute-1 sudo[278256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-1 sudo[278256]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-1 sudo[278281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 12:46:07 compute-1 sudo[278281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:07 compute-1 sudo[278281]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:07 compute-1 ceph-mon[80926]: pgmap v2133: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Oct 02 12:46:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/221783826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:07.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:07 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:07.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.080 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.081 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-1 ceph-mon[80926]: pgmap v2134: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:46:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.925 2 INFO nova.compute.manager [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Unrescuing
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.926 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.927 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:08 compute-1 nova_compute[230518]: 2025-10-02 12:46:08.927 2 DEBUG nova.network.neutron [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:46:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:09.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:09.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.097 2 DEBUG nova.network.neutron [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.114 2 DEBUG oslo_concurrency.lockutils [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.115 2 DEBUG nova.objects.instance [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:10 compute-1 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.1871] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00513|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00514|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00515|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.205 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.207 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.209 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5de888fa-505f-470e-a06f-b64a794e4504]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.211 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 02 12:46:10 compute-1 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007b.scope: Consumed 5.382s CPU time.
Oct 02 12:46:10 compute-1 systemd-machined[188247]: Machine qemu-60-instance-0000007b terminated.
Oct 02 12:46:10 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:10 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [NOTICE]   (278063) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:10 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [WARNING]  (278063) : Exiting Master process...
Oct 02 12:46:10 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [ALERT]    (278063) : Current worker (278065) exited with code 143 (Terminated)
Oct 02 12:46:10 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278059]: [WARNING]  (278063) : All workers exited. Exiting... (0)
Oct 02 12:46:10 compute-1 systemd[1]: libpod-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope: Deactivated successfully.
Oct 02 12:46:10 compute-1 podman[278348]: 2025-10-02 12:46:10.363331033 +0000 UTC m=+0.050725355 container died eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.381 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.381 2 DEBUG nova.objects.instance [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'numa_topology' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-4b0fe032c883f01cbe2c0fc7a6c92461065f29d5017d7d4b7bb4ad0ea07ca86a-merged.mount: Deactivated successfully.
Oct 02 12:46:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:10 compute-1 podman[278348]: 2025-10-02 12:46:10.419729987 +0000 UTC m=+0.107124309 container cleanup eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 12:46:10 compute-1 systemd[1]: libpod-conmon-eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9.scope: Deactivated successfully.
Oct 02 12:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9959 writes, 51K keys, 9959 commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 9959 writes, 9959 syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1653 writes, 8307 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 16.61 MB, 0.03 MB/s
                                           Interval WAL: 1653 writes, 1653 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.5      0.84              0.17        30    0.028       0      0       0.0       0.0
                                             L6      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.4    131.5    110.2      2.47              0.78        29    0.085    172K    16K       0.0       0.0
                                            Sum      1/0    8.83 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.4     98.0    100.6      3.31              0.96        59    0.056    172K    16K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     97.1     97.1      0.75              0.25        12    0.063     45K   3138       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    131.5    110.2      2.47              0.78        29    0.085    172K    16K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.7      0.84              0.17        29    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.060, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.33 GB write, 0.09 MB/s write, 0.32 GB read, 0.09 MB/s read, 3.3 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 35.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000259 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2075,34.45 MB,11.3313%) FilterBlock(59,493.48 KB,0.158526%) IndexBlock(59,849.36 KB,0.272846%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:46:10 compute-1 podman[278389]: 2025-10-02 12:46:10.487970723 +0000 UTC m=+0.039990228 container remove eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:46:10 compute-1 kernel: tap241d570e-8e: entered promiscuous mode
Oct 02 12:46:10 compute-1 systemd-udevd[278328]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.4952] manager: (tap241d570e-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00516|binding|INFO|Claiming lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 for this chassis.
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00517|binding|INFO|241d570e-8eb4-4d2a-986b-b37fbcb780a9: Claiming fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.497 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8440667-92e9-46d6-8efd-cd8a1cbb61e3]: (4, ('Thu Oct  2 12:46:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9)\neb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9\nThu Oct  2 12:46:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (eb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9)\neb74a0b757741244de37a749e3ef34090bd824c1f04bab604eff3daaf44544b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.500 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[439ab5c7-ee61-4959-b5da-c5331ef98e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.501 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.504 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.5129] device (tap241d570e-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.5143] device (tap241d570e-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00518|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 ovn-installed in OVS
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00519|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 up in Southbound
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 kernel: tapf3934261-b0: left promiscuous mode
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a76a03-759d-47fd-8085-ce9793ea2801]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 systemd-machined[188247]: New machine qemu-61-instance-0000007b.
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.550 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d388f3b3-6a0b-44a5-a0bf-7bb4e3df4870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.555 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7877c6-8162-4e5b-bd7e-5609cd870054]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 systemd[1]: Started Virtual Machine qemu-61-instance-0000007b.
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bcb38c-88d9-462a-9084-d2f305517cb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702768, 'reachable_time': 24456, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278415, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.575 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.575 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ac5de-13b1-48f7-9462-b0dcd854bcf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.576 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.577 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.591 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b63b904-6bd6-450e-b409-e10c9349b154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.592 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3934261-b1 in ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.594 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3934261-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.594 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf500269-b555-4736-bb8c-5b64300cc482]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.595 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46b5b39e-1c13-41d3-9771-63263909aacf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.612 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[28ac9eb6-a916-4c9e-b05f-48320d205bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.640 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[87323c76-db44-4c27-a7f0-0e5895ee3fba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.651 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.684 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8110b395-b8e5-4a34-a4e9-8e18a5751182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.691 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe03ed5-0ab5-4769-b12d-cf0b54f3c5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.6940] manager: (tapf3934261-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.726 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad1932d-10b7-401c-a1a9-b52b1c0dd734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.730 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3c109a81-6dec-4f93-b243-bf952f038eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.7624] device (tapf3934261-b0): carrier: link connected
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.771 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[660fec1f-cf41-4d76-9023-d5f30f0071a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.789 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[865b264d-d8db-4198-a1ee-2dc7c578e9e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703432, 'reachable_time': 27439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278446, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.805 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c2405b55-4a55-46b1-8290-048eff0e8a9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:9fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703432, 'tstamp': 703432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278447, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.825 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[acc89527-d6a3-4feb-b359-cc22773f2f39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3934261-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:9f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703432, 'reachable_time': 27439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278448, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.855 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd5c536-7e7c-43f0-a2da-5c9b05756f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.903 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[55bb4263-4c94-46a7-aaa5-13c002185ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ceph-mon[80926]: pgmap v2135: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 539 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.904 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.904 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.905 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3934261-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:10 compute-1 kernel: tapf3934261-b0: entered promiscuous mode
Oct 02 12:46:10 compute-1 NetworkManager[44960]: <info>  [1759409170.9071] manager: (tapf3934261-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3934261-b0, col_values=(('external_ids', {'iface-id': '3890f7a6-6cc9-4237-a2a2-3c43818c1748'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:10 compute-1 ovn_controller[129257]: 2025-10-02T12:46:10Z|00520|binding|INFO|Releasing lport 3890f7a6-6cc9-4237-a2a2-3c43818c1748 from this chassis (sb_readonly=0)
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.913 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.914 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a52853c8-70f1-4e43-9917-0877c73891db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.915 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f3934261-ba19-494f-8d9f-23360c5b30b9.pid.haproxy
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f3934261-ba19-494f-8d9f-23360c5b30b9
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:46:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:10.915 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'env', 'PROCESS_TAG=haproxy-f3934261-ba19-494f-8d9f-23360c5b30b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3934261-ba19-494f-8d9f-23360c5b30b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:46:10 compute-1 nova_compute[230518]: 2025-10-02 12:46:10.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.034 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.035 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.035 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.036 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.036 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.036 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.037 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.038 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.038 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG oslo_concurrency.lockutils [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.039 2 DEBUG nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:11 compute-1 nova_compute[230518]: 2025-10-02 12:46:11.039 2 WARNING nova.compute.manager [req-8614bcb8-e796-4efd-b664-c0e375bf5322 req-582cd6c3-f8b9-4947-badd-a2fc36231944 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:46:11 compute-1 podman[278481]: 2025-10-02 12:46:11.263387926 +0000 UTC m=+0.045202753 container create 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:46:11 compute-1 systemd[1]: Started libpod-conmon-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope.
Oct 02 12:46:11 compute-1 podman[278481]: 2025-10-02 12:46:11.241030293 +0000 UTC m=+0.022845140 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:46:11 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:46:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/382483586f98dc9c7e63f8af29bc5c60af48f2c266cb3884274ad336cebfb902/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:46:11 compute-1 podman[278481]: 2025-10-02 12:46:11.367996135 +0000 UTC m=+0.149810982 container init 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:46:11 compute-1 podman[278481]: 2025-10-02 12:46:11.373104656 +0000 UTC m=+0.154919483 container start 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:46:11 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : New worker (278502) forked
Oct 02 12:46:11 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : Loading success.
Oct 02 12:46:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:11.553 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:46:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:11 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:11.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.270 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.271 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409172.269763, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.271 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Resumed (Lifecycle Event)
Oct 02 12:46:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.342 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.345 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.374 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.374 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409172.2734544, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.375 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Started (Lifecycle Event)
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.405 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.409 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:46:12 compute-1 nova_compute[230518]: 2025-10-02 12:46:12.431 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] During sync_power_state the instance has a pending task (unrescuing). Skip.
Oct 02 12:46:12 compute-1 podman[278590]: 2025-10-02 12:46:12.828729187 +0000 UTC m=+0.077050044 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 12:46:12 compute-1 podman[278589]: 2025-10-02 12:46:12.830479103 +0000 UTC m=+0.083348653 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.115 2 DEBUG nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.116 2 DEBUG oslo_concurrency.lockutils [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.117 2 DEBUG nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.117 2 WARNING nova.compute.manager [req-5e0b1ff4-607d-4291-b160-be1f7a4baf26 req-fe1247b2-7002-4d2d-b5c2-0f2bdcb1edd4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state rescued and task_state unrescuing.
Oct 02 12:46:13 compute-1 ceph-mon[80926]: pgmap v2136: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Oct 02 12:46:13 compute-1 nova_compute[230518]: 2025-10-02 12:46:13.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:14 compute-1 nova_compute[230518]: 2025-10-02 12:46:14.254 2 DEBUG nova.compute.manager [None req-7da7c6bc-27e3-4549-b5dd-d97607b562bd b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:15 compute-1 ceph-mon[80926]: pgmap v2137: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 153 op/s
Oct 02 12:46:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2788448323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-1 sudo[278636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:46:15 compute-1 sudo[278636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:15 compute-1 sudo[278636]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:15 compute-1 sudo[278661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:46:15 compute-1 sudo[278661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:46:15 compute-1 sudo[278661]: pam_unix(sudo:session): session closed for user root
Oct 02 12:46:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:46:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:46:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:15.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:15.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:46:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1099713528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG nova.compute.manager [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG nova.compute.manager [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.043 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.044 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:17 compute-1 nova_compute[230518]: 2025-10-02 12:46:17.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:46:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:46:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:17 compute-1 ceph-mon[80926]: pgmap v2138: 305 pgs: 305 active+clean; 478 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 34 KiB/s wr, 203 op/s
Oct 02 12:46:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1934683106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:17.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:17.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:18 compute-1 nova_compute[230518]: 2025-10-02 12:46:18.065 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:18 compute-1 nova_compute[230518]: 2025-10-02 12:46:18.546 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:18 compute-1 nova_compute[230518]: 2025-10-02 12:46:18.546 2 DEBUG nova.network.neutron [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:18 compute-1 nova_compute[230518]: 2025-10-02 12:46:18.563 2 DEBUG oslo_concurrency.lockutils [req-c854d1e7-061d-457b-ac16-24de344e29f2 req-8bfe2436-6b43-4b8b-bf4b-7f2feb9eeced 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:18 compute-1 nova_compute[230518]: 2025-10-02 12:46:18.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:19 compute-1 nova_compute[230518]: 2025-10-02 12:46:19.151 2 DEBUG nova.compute.manager [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:19 compute-1 nova_compute[230518]: 2025-10-02 12:46:19.151 2 DEBUG nova.compute.manager [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing instance network info cache due to event network-changed-241d570e-8eb4-4d2a-986b-b37fbcb780a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:46:19 compute-1 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:46:19 compute-1 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:46:19 compute-1 nova_compute[230518]: 2025-10-02 12:46:19.152 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Refreshing network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:46:19 compute-1 ceph-mon[80926]: pgmap v2139: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 16 KiB/s wr, 276 op/s
Oct 02 12:46:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:19 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:19.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:20.555 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:20 compute-1 ceph-mon[80926]: pgmap v2140: 305 pgs: 305 active+clean; 455 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 15 KiB/s wr, 243 op/s
Oct 02 12:46:21 compute-1 nova_compute[230518]: 2025-10-02 12:46:21.153 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updated VIF entry in instance network info cache for port 241d570e-8eb4-4d2a-986b-b37fbcb780a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:46:21 compute-1 nova_compute[230518]: 2025-10-02 12:46:21.153 2 DEBUG nova.network.neutron [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [{"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:21 compute-1 nova_compute[230518]: 2025-10-02 12:46:21.168 2 DEBUG oslo_concurrency.lockutils [req-93ee4050-6374-4b37-b695-9725e3745650 req-7da77f16-81e3-4ad7-b7db-8fee73e25f74 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:46:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:21 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:21.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:22 compute-1 nova_compute[230518]: 2025-10-02 12:46:22.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:23 compute-1 ceph-mon[80926]: pgmap v2141: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.1 MiB/s wr, 328 op/s
Oct 02 12:46:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/569026192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:23 compute-1 nova_compute[230518]: 2025-10-02 12:46:23.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:23.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:24 compute-1 podman[278686]: 2025-10-02 12:46:24.808801665 +0000 UTC m=+0.058561282 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:46:24 compute-1 podman[278687]: 2025-10-02 12:46:24.838319864 +0000 UTC m=+0.086662626 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 12:46:25 compute-1 ovn_controller[129257]: 2025-10-02T12:46:25Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:9b:0c 10.100.0.11
Oct 02 12:46:25 compute-1 ceph-mon[80926]: pgmap v2142: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 210 op/s
Oct 02 12:46:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:25.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:27 compute-1 nova_compute[230518]: 2025-10-02 12:46:27.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:27 compute-1 ceph-mon[80926]: pgmap v2143: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Oct 02 12:46:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:27 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:28 compute-1 nova_compute[230518]: 2025-10-02 12:46:28.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e293 e293: 3 total, 3 up, 3 in
Oct 02 12:46:29 compute-1 ceph-mon[80926]: pgmap v2144: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 225 op/s
Oct 02 12:46:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/357741991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:29.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:29 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:29.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:30 compute-1 ceph-mon[80926]: osdmap e293: 3 total, 3 up, 3 in
Oct 02 12:46:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e294 e294: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-1 nova_compute[230518]: 2025-10-02 12:46:31.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e295 e295: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-1 ceph-mon[80926]: pgmap v2146: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 2.6 MiB/s wr, 180 op/s
Oct 02 12:46:31 compute-1 ceph-mon[80926]: osdmap e294: 3 total, 3 up, 3 in
Oct 02 12:46:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:31.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:31.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:32 compute-1 nova_compute[230518]: 2025-10-02 12:46:32.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:32 compute-1 ceph-mon[80926]: osdmap e295: 3 total, 3 up, 3 in
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.362 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.362 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.377 2 INFO nova.compute.manager [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Detaching volume 9405efbb-874d-467a-93c3-bbd76870d422
Oct 02 12:46:33 compute-1 ceph-mon[80926]: pgmap v2149: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 11 MiB/s wr, 269 op/s
Oct 02 12:46:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1083121368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2689995221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.582 2 INFO nova.virt.block_device [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Attempting to driver detach volume 9405efbb-874d-467a-93c3-bbd76870d422 from mountpoint /dev/vdb
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.590 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Attempting to detach device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.591 2 DEBUG nova.virt.libvirt.guest [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   </source>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]: </disk>
Oct 02 12:46:33 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.599 2 INFO nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully detached device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the persistent domain config.
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.600 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.600 2 DEBUG nova.virt.libvirt.guest [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-9405efbb-874d-467a-93c3-bbd76870d422">
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   </source>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <serial>9405efbb-874d-467a-93c3-bbd76870d422</serial>
Oct 02 12:46:33 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:46:33 compute-1 nova_compute[230518]: </disk>
Oct 02 12:46:33 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.707 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409193.7073429, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.709 2 DEBUG nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.710 2 INFO nova.virt.libvirt.driver [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully detached device vdb from instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f from the live domain config.
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:33.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:33 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:33.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:33 compute-1 nova_compute[230518]: 2025-10-02 12:46:33.964 2 DEBUG nova.objects.instance [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'flavor' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:34 compute-1 nova_compute[230518]: 2025-10-02 12:46:34.023 2 DEBUG oslo_concurrency.lockutils [None req-09612838-f044-472d-a969-7937a578349f b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:34 compute-1 ceph-mon[80926]: pgmap v2150: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 207 op/s
Oct 02 12:46:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:35.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:35 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:35.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.079 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.122 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.122 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.123 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.124 2 INFO nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Terminating instance
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.126 2 DEBUG nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:46:36 compute-1 kernel: tap241d570e-8e (unregistering): left promiscuous mode
Oct 02 12:46:36 compute-1 NetworkManager[44960]: <info>  [1759409196.1906] device (tap241d570e-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:46:36 compute-1 ovn_controller[129257]: 2025-10-02T12:46:36Z|00521|binding|INFO|Releasing lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 from this chassis (sb_readonly=0)
Oct 02 12:46:36 compute-1 ovn_controller[129257]: 2025-10-02T12:46:36Z|00522|binding|INFO|Setting lport 241d570e-8eb4-4d2a-986b-b37fbcb780a9 down in Southbound
Oct 02 12:46:36 compute-1 ovn_controller[129257]: 2025-10-02T12:46:36Z|00523|binding|INFO|Removing iface tap241d570e-8e ovn-installed in OVS
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.250 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:9b:0c 10.100.0.11'], port_security=['fa:16:3e:2d:9b:0c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3934261-ba19-494f-8d9f-23360c5b30b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c87621e5c0ba4f13abfff528143c1c00', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e07b5251-78cf-4560-8d41-5dc3daef96ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4887de20-f7d5-4732-a50a-969a38516c82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=241d570e-8eb4-4d2a-986b-b37fbcb780a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.252 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 241d570e-8eb4-4d2a-986b-b37fbcb780a9 in datapath f3934261-ba19-494f-8d9f-23360c5b30b9 unbound from our chassis
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.254 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3934261-ba19-494f-8d9f-23360c5b30b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72d30dd8-47dd-4305-aab1-279e98429396]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.256 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 namespace which is not needed anymore
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct 02 12:46:36 compute-1 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d0000007b.scope: Consumed 14.588s CPU time.
Oct 02 12:46:36 compute-1 systemd-machined[188247]: Machine qemu-61-instance-0000007b terminated.
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.360 2 INFO nova.virt.libvirt.driver [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Instance destroyed successfully.
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.362 2 DEBUG nova.objects.instance [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lazy-loading 'resources' on Instance uuid c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.379 2 DEBUG nova.virt.libvirt.vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1940292897',display_name='tempest-ServerRescueNegativeTestJSON-server-1940292897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1940292897',id=123,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE3MP/X9GW3AkvsXC34rGYeE9R6n5owPQ7q879Zh3KlhVfQDQGNR9SPxUIek8XO1dhRvBM+bbuljUGfLUZmn0JV9ekbocSGkiHwYeMyp832Egmcx2kY1B+audZxDye556Q==',key_name='tempest-keypair-866203463',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:46:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c87621e5c0ba4f13abfff528143c1c00',ramdisk_id='',reservation_id='r-n6jrurpd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-488939839',owner_user_name='tempest-ServerRescueNegativeTestJSON-488939839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:46:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b168e90f7c0c414ba26c576fb8706a80',uuid=c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.380 2 DEBUG nova.network.os_vif_util [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converting VIF {"id": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "address": "fa:16:3e:2d:9b:0c", "network": {"id": "f3934261-ba19-494f-8d9f-23360c5b30b9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2082470523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c87621e5c0ba4f13abfff528143c1c00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap241d570e-8e", "ovs_interfaceid": "241d570e-8eb4-4d2a-986b-b37fbcb780a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.380 2 DEBUG nova.network.os_vif_util [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.381 2 DEBUG os_vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap241d570e-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.390 2 INFO os_vif [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:9b:0c,bridge_name='br-int',has_traffic_filtering=True,id=241d570e-8eb4-4d2a-986b-b37fbcb780a9,network=Network(f3934261-ba19-494f-8d9f-23360c5b30b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap241d570e-8e')
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : haproxy version is 2.8.14-c23fe91
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [NOTICE]   (278500) : path to executable is /usr/sbin/haproxy
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : Exiting Master process...
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : Exiting Master process...
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [ALERT]    (278500) : Current worker (278502) exited with code 143 (Terminated)
Oct 02 12:46:36 compute-1 neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9[278496]: [WARNING]  (278500) : All workers exited. Exiting... (0)
Oct 02 12:46:36 compute-1 systemd[1]: libpod-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope: Deactivated successfully.
Oct 02 12:46:36 compute-1 podman[278754]: 2025-10-02 12:46:36.416438513 +0000 UTC m=+0.060199703 container died 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:46:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d-userdata-shm.mount: Deactivated successfully.
Oct 02 12:46:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-382483586f98dc9c7e63f8af29bc5c60af48f2c266cb3884274ad336cebfb902-merged.mount: Deactivated successfully.
Oct 02 12:46:36 compute-1 podman[278754]: 2025-10-02 12:46:36.463653758 +0000 UTC m=+0.107414938 container cleanup 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:46:36 compute-1 systemd[1]: libpod-conmon-9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d.scope: Deactivated successfully.
Oct 02 12:46:36 compute-1 podman[278811]: 2025-10-02 12:46:36.540144864 +0000 UTC m=+0.056676243 container remove 9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.550 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca3f6a8-558f-4f66-82e8-05544935f211]: (4, ('Thu Oct  2 12:46:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d)\n9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d\nThu Oct  2 12:46:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 (9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d)\n9d47356da694773c0d590d01a8afef9bf7388d6489618145e89c79c998fad31d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.552 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68dde93a-ff10-4dbd-a7e3-8b21c96e809b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.554 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3934261-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 kernel: tapf3934261-b0: left promiscuous mode
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.574 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[908f8615-6d05-4618-a7bc-0b61a917b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.605 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c100fb7-17b4-4c87-a38f-31df9569d29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.606 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44363207-5c8d-41c4-a39a-7c0a92b2d539]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.622 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.622 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd8dbe5-73e4-40e4-9719-2d2916d68c30]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703423, 'reachable_time': 24634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278826, 'error': None, 'target': 'ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.623 2 DEBUG oslo_concurrency.lockutils [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.624 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.624 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3934261-ba19-494f-8d9f-23360c5b30b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:46:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:46:36.624 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f7aeb43b-8b8f-4d1a-8524-76140f32df8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.624 2 DEBUG nova.compute.manager [req-35366a5a-848c-4363-bd4d-ad9b3205d886 req-28cb909c-fe7d-40e5-807a-382e688af1cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-unplugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:46:36 compute-1 systemd[1]: run-netns-ovnmeta\x2df3934261\x2dba19\x2d494f\x2d8d9f\x2d23360c5b30b9.mount: Deactivated successfully.
Oct 02 12:46:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e296 e296: 3 total, 3 up, 3 in
Oct 02 12:46:36 compute-1 ceph-mon[80926]: pgmap v2151: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 206 op/s
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.948 2 INFO nova.virt.libvirt.driver [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deleting instance files /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_del
Oct 02 12:46:36 compute-1 nova_compute[230518]: 2025-10-02 12:46:36.949 2 INFO nova.virt.libvirt.driver [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deletion of /var/lib/nova/instances/c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f_del complete
Oct 02 12:46:37 compute-1 nova_compute[230518]: 2025-10-02 12:46:37.217 2 INFO nova.compute.manager [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 1.09 seconds to destroy the instance on the hypervisor.
Oct 02 12:46:37 compute-1 nova_compute[230518]: 2025-10-02 12:46:37.218 2 DEBUG oslo.service.loopingcall [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:46:37 compute-1 nova_compute[230518]: 2025-10-02 12:46:37.218 2 DEBUG nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:46:37 compute-1 nova_compute[230518]: 2025-10-02 12:46:37.219 2 DEBUG nova.network.neutron [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:46:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e297 e297: 3 total, 3 up, 3 in
Oct 02 12:46:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:37 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:37.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:37 compute-1 ceph-mon[80926]: osdmap e296: 3 total, 3 up, 3 in
Oct 02 12:46:37 compute-1 ceph-mon[80926]: osdmap e297: 3 total, 3 up, 3 in
Oct 02 12:46:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e298 e298: 3 total, 3 up, 3 in
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.828 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.829 2 DEBUG oslo_concurrency.lockutils [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.829 2 DEBUG nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] No waiting events found dispatching network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:46:38 compute-1 nova_compute[230518]: 2025-10-02 12:46:38.829 2 WARNING nova.compute.manager [req-947d5bb9-c416-411a-94af-caeec34fadea req-6320be56-9c62-4a43-8f86-87711f117536 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received unexpected event network-vif-plugged-241d570e-8eb4-4d2a-986b-b37fbcb780a9 for instance with vm_state active and task_state deleting.
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.258 2 DEBUG nova.network.neutron [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:46:39 compute-1 ceph-mon[80926]: pgmap v2154: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 473 KiB/s wr, 47 op/s
Oct 02 12:46:39 compute-1 ceph-mon[80926]: osdmap e298: 3 total, 3 up, 3 in
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.294 2 INFO nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Took 2.08 seconds to deallocate network for instance.
Oct 02 12:46:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e299 e299: 3 total, 3 up, 3 in
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.376 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.377 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.463 2 DEBUG oslo_concurrency.processutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:39 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370722550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.894 2 DEBUG oslo_concurrency.processutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:39 compute-1 nova_compute[230518]: 2025-10-02 12:46:39.903 2 DEBUG nova.compute.provider_tree [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:40 compute-1 ceph-mon[80926]: osdmap e299: 3 total, 3 up, 3 in
Oct 02 12:46:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2370722550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:40 compute-1 nova_compute[230518]: 2025-10-02 12:46:40.653 2 DEBUG nova.scheduler.client.report [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:40 compute-1 nova_compute[230518]: 2025-10-02 12:46:40.710 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:40 compute-1 nova_compute[230518]: 2025-10-02 12:46:40.748 2 INFO nova.scheduler.client.report [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Deleted allocations for instance c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f
Oct 02 12:46:40 compute-1 nova_compute[230518]: 2025-10-02 12:46:40.974 2 DEBUG oslo_concurrency.lockutils [None req-c43b5734-caf6-4e9a-b58f-5f80c4a3e1c0 b168e90f7c0c414ba26c576fb8706a80 c87621e5c0ba4f13abfff528143c1c00 - - default default] Lock "c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:41 compute-1 nova_compute[230518]: 2025-10-02 12:46:41.077 2 DEBUG nova.compute.manager [req-6626f24a-be3e-4581-b096-23a44a6c1a3c req-75992d98-5c77-4b5f-8138-77cf1844033a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Received event network-vif-deleted-241d570e-8eb4-4d2a-986b-b37fbcb780a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:46:41 compute-1 nova_compute[230518]: 2025-10-02 12:46:41.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-1 nova_compute[230518]: 2025-10-02 12:46:41.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:41 compute-1 ceph-mon[80926]: pgmap v2157: 305 pgs: 305 active+clean; 536 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 49 KiB/s wr, 56 op/s
Oct 02 12:46:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:41 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:41.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:42 compute-1 ceph-mon[80926]: pgmap v2158: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 9.6 MiB/s wr, 432 op/s
Oct 02 12:46:43 compute-1 nova_compute[230518]: 2025-10-02 12:46:43.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:43 compute-1 podman[278852]: 2025-10-02 12:46:43.83917759 +0000 UTC m=+0.086447300 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:46:43 compute-1 podman[278851]: 2025-10-02 12:46:43.839265322 +0000 UTC m=+0.093571463 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:46:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:43 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:43.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:44 compute-1 ceph-mon[80926]: pgmap v2159: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 322 op/s
Oct 02 12:46:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 12:46:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:45.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:46:45 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:45.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:46:46 compute-1 nova_compute[230518]: 2025-10-02 12:46:46.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:46 compute-1 ceph-mon[80926]: pgmap v2160: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:46:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e300 e300: 3 total, 3 up, 3 in
Oct 02 12:46:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:47 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e301 e301: 3 total, 3 up, 3 in
Oct 02 12:46:48 compute-1 nova_compute[230518]: 2025-10-02 12:46:48.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:48 compute-1 ceph-mon[80926]: osdmap e300: 3 total, 3 up, 3 in
Oct 02 12:46:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1020510815' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:46:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:46:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-1 ceph-mon[80926]: pgmap v2162: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.5 MiB/s wr, 263 op/s
Oct 02 12:46:49 compute-1 ceph-mon[80926]: osdmap e301: 3 total, 3 up, 3 in
Oct 02 12:46:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1468040698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1577299259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:46:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:49.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:49.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e302 e302: 3 total, 3 up, 3 in
Oct 02 12:46:51 compute-1 nova_compute[230518]: 2025-10-02 12:46:51.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-1 ceph-mon[80926]: pgmap v2164: 305 pgs: 305 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 425 KiB/s wr, 48 op/s
Oct 02 12:46:51 compute-1 nova_compute[230518]: 2025-10-02 12:46:51.360 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409196.358669, c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:46:51 compute-1 nova_compute[230518]: 2025-10-02 12:46:51.360 2 INFO nova.compute.manager [-] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] VM Stopped (Lifecycle Event)
Oct 02 12:46:51 compute-1 nova_compute[230518]: 2025-10-02 12:46:51.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:51 compute-1 nova_compute[230518]: 2025-10-02 12:46:51.409 2 DEBUG nova.compute.manager [None req-9e74bb75-2830-48e7-8edf-840a74090827 - - - - - -] [instance: c2c8068e-6789-48ce-95a5-ecf7cc0e7d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:46:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:51 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:51.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:52 compute-1 ceph-mon[80926]: osdmap e302: 3 total, 3 up, 3 in
Oct 02 12:46:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e303 e303: 3 total, 3 up, 3 in
Oct 02 12:46:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:53 compute-1 ceph-mon[80926]: pgmap v2166: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 6.7 MiB/s wr, 260 op/s
Oct 02 12:46:53 compute-1 ceph-mon[80926]: osdmap e303: 3 total, 3 up, 3 in
Oct 02 12:46:53 compute-1 nova_compute[230518]: 2025-10-02 12:46:53.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:53.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:53 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:54 compute-1 nova_compute[230518]: 2025-10-02 12:46:54.901 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:54 compute-1 nova_compute[230518]: 2025-10-02 12:46:54.902 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:54 compute-1 nova_compute[230518]: 2025-10-02 12:46:54.928 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.080 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.081 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.088 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.088 2 INFO nova.compute.claims [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:46:55 compute-1 ceph-mon[80926]: pgmap v2168: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 555 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 6.7 MiB/s wr, 239 op/s
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.337 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:46:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2657385976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:55 compute-1 podman[278917]: 2025-10-02 12:46:55.806223168 +0000 UTC m=+0.056120006 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.810 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:55 compute-1 podman[278918]: 2025-10-02 12:46:55.815647675 +0000 UTC m=+0.060296637 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.817 2 DEBUG nova.compute.provider_tree [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.839 2 DEBUG nova.scheduler.client.report [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:46:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:55.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.878 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:55 compute-1 nova_compute[230518]: 2025-10-02 12:46:55.879 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.111 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.111 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.133 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.159 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.261 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.263 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.263 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating image(s)
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.291 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.322 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.351 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.354 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.419 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.420 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.420 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.421 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.446 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.450 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1c4025f8-834f-474c-87ee-59600e6ffb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:46:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2657385976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:56 compute-1 nova_compute[230518]: 2025-10-02 12:46:56.536 2 DEBUG nova.policy [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae7bcf1e6a3b4132a7068b0f863ca79c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:46:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 e304: 3 total, 3 up, 3 in
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.185 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 1c4025f8-834f-474c-87ee-59600e6ffb96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.255 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] resizing rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:46:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:46:57 compute-1 ceph-mon[80926]: pgmap v2169: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 518 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.3 MiB/s wr, 246 op/s
Oct 02 12:46:57 compute-1 ceph-mon[80926]: osdmap e304: 3 total, 3 up, 3 in
Oct 02 12:46:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3892251147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:46:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:46:57 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:57.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:57.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.964 2 DEBUG nova.objects.instance [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.996 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.996 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Ensure instance console log exists: /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:46:57 compute-1 nova_compute[230518]: 2025-10-02 12:46:57.997 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:46:58 compute-1 nova_compute[230518]: 2025-10-02 12:46:58.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:58 compute-1 nova_compute[230518]: 2025-10-02 12:46:58.829 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Successfully created port: 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:46:58 compute-1 ceph-mon[80926]: pgmap v2171: 305 pgs: 305 active+clean; 439 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 300 op/s
Oct 02 12:46:59 compute-1 nova_compute[230518]: 2025-10-02 12:46:59.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:46:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:46:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:46:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:59 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:46:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:46:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:46:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:46:59.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.505 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Successfully updated port: 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.533 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.651 2 DEBUG nova.compute.manager [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.652 2 DEBUG nova.compute.manager [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing instance network info cache due to event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.652 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:00 compute-1 nova_compute[230518]: 2025-10-02 12:47:00.768 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:01 compute-1 nova_compute[230518]: 2025-10-02 12:47:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:01 compute-1 ceph-mon[80926]: pgmap v2172: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 128 op/s
Oct 02 12:47:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:01.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.079 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.344 2 DEBUG nova.network.neutron [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.377 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.377 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance network_info: |[{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.378 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.378 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.381 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Start _get_guest_xml network_info=[{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.386 2 WARNING nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.390 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.391 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.394 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.394 2 DEBUG nova.virt.libvirt.host [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.396 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.397 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.398 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.399 2 DEBUG nova.virt.hardware [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:02 compute-1 nova_compute[230518]: 2025-10-02 12:47:02.402 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e305 e305: 3 total, 3 up, 3 in
Oct 02 12:47:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1797470337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/773088728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.131 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.163 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.167 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.231 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.232 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604551742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.599 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.600 2 DEBUG nova.virt.libvirt.vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740
298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:56Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.601 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.602 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.603 2 DEBUG nova.objects.instance [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:03 compute-1 ceph-mon[80926]: pgmap v2173: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 182 op/s
Oct 02 12:47:03 compute-1 ceph-mon[80926]: osdmap e305: 3 total, 3 up, 3 in
Oct 02 12:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/773088728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/604551742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/887990936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e306 e306: 3 total, 3 up, 3 in
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.669 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.810 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <uuid>1c4025f8-834f-474c-87ee-59600e6ffb96</uuid>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <name>instance-0000007e</name>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:name>tempest-DeleteServersTestJSON-server-1232591881</nova:name>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:47:02</nova:creationTime>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:user uuid="ae7bcf1e6a3b4132a7068b0f863ca79c">tempest-DeleteServersTestJSON-1740298646-project-member</nova:user>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:project uuid="58b2fa4ee0cd4b97be1b303c203be14f">tempest-DeleteServersTestJSON-1740298646</nova:project>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <nova:port uuid="8001e1a0-d1c2-49ac-8630-690ed8ac9801">
Oct 02 12:47:03 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <system>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="serial">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="uuid">1c4025f8-834f-474c-87ee-59600e6ffb96</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </system>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <os>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </os>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <features>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </features>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk">
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config">
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:47:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:48:3d:b5"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <target dev="tap8001e1a0-d1"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/console.log" append="off"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <video>
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </video>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:47:03 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:47:03 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:47:03 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:47:03 compute-1 nova_compute[230518]: </domain>
Oct 02 12:47:03 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.812 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Preparing to wait for external event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.812 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.813 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.813 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.814 2 DEBUG nova.virt.libvirt.vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:46:56Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.814 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.815 2 DEBUG nova.network.os_vif_util [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.815 2 DEBUG os_vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8001e1a0-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.823 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8001e1a0-d1, col_values=(('external_ids', {'iface-id': '8001e1a0-d1c2-49ac-8630-690ed8ac9801', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:3d:b5', 'vm-uuid': '1c4025f8-834f-474c-87ee-59600e6ffb96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:03 compute-1 NetworkManager[44960]: <info>  [1759409223.8267] manager: (tap8001e1a0-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.833 2 INFO os_vif [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')
Oct 02 12:47:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:47:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:03 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.887 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4436MB free_disk=20.909137725830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:03 compute-1 nova_compute[230518]: 2025-10-02 12:47:03.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.005 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.005 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.006 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] No VIF found with MAC fa:16:3e:48:3d:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.006 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Using config drive
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.034 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 1c4025f8-834f-474c-87ee-59600e6ffb96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.072 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.124 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1011230374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.574 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.581 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.604 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.641 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.641 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/887990936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:04 compute-1 ceph-mon[80926]: osdmap e306: 3 total, 3 up, 3 in
Oct 02 12:47:04 compute-1 ceph-mon[80926]: pgmap v2176: 305 pgs: 305 active+clean; 357 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Oct 02 12:47:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1011230374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.711 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Creating config drive at /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.716 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdsqtdw7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.755 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated VIF entry in instance network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.756 2 DEBUG nova.network.neutron [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.790 2 DEBUG oslo_concurrency.lockutils [req-be54f3d0-03b8-4f68-ba2a-cc98aae5a48b req-711943b1-0547-4a46-bdda-5c45b82137ec 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.869 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdsqtdw7" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.898 2 DEBUG nova.storage.rbd_utils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] rbd image 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:04 compute-1 nova_compute[230518]: 2025-10-02 12:47:04.901 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:47:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:47:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.247 2 DEBUG oslo_concurrency.processutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config 1c4025f8-834f-474c-87ee-59600e6ffb96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.248 2 INFO nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Deleting local config drive /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/disk.config because it was imported into RBD.
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.3046] manager: (tap8001e1a0-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Oct 02 12:47:05 compute-1 kernel: tap8001e1a0-d1: entered promiscuous mode
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 ovn_controller[129257]: 2025-10-02T12:47:05Z|00524|binding|INFO|Claiming lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for this chassis.
Oct 02 12:47:05 compute-1 ovn_controller[129257]: 2025-10-02T12:47:05Z|00525|binding|INFO|8001e1a0-d1c2-49ac-8630-690ed8ac9801: Claiming fa:16:3e:48:3d:b5 10.100.0.5
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.318 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.319 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 bound to our chassis
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.320 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:05 compute-1 ovn_controller[129257]: 2025-10-02T12:47:05Z|00526|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 ovn-installed in OVS
Oct 02 12:47:05 compute-1 ovn_controller[129257]: 2025-10-02T12:47:05Z|00527|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 up in Southbound
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.336 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c66893-9e12-490f-bb38-89fefdb9d9a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.337 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd4432c5-b1 in ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.341 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd4432c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.341 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b604c94-8f69-47c9-80fb-eb18f96785a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[13be1164-384a-4603-a9ab-7cb219a92dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 systemd-machined[188247]: New machine qemu-62-instance-0000007e.
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.355 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6fec6e-c6e6-469b-9aba-42cf6806f376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 systemd[1]: Started Virtual Machine qemu-62-instance-0000007e.
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.371 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c35842ee-582c-41d4-8ebf-65dbd31c67b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 systemd-udevd[279305]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.3849] device (tap8001e1a0-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.3861] device (tap8001e1a0-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.400 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4780e9a2-c74e-4de4-942e-c2a2c68edd76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 systemd-udevd[279309]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[456d6d76-dcaf-4340-b4bc-ac90e2af84fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.4069] manager: (tapfd4432c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.441 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2675d3-b5ad-494b-b385-54628c39a494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.444 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcd57d1-8476-415d-9088-32a4578f57c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.4703] device (tapfd4432c5-b0): carrier: link connected
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.475 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cd60817c-52f3-4619-b044-15fc840978b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e088b01-8427-4a99-8b2c-f6be1780f992]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708902, 'reachable_time': 34511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279335, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.508 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[134bc215-7dd1-45d7-a4d4-1916c386b951]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:b3ba'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708902, 'tstamp': 708902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279336, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.526 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9524dfe0-6650-462e-8436-ddb5bbc5b6ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd4432c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:b3:ba'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 162], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708902, 'reachable_time': 34511, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279337, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.554 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5841e1-ee72-4454-984d-ffb942c0db6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.608 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fc703a-7ec5-43c9-a11f-f67ba96c01b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.610 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.610 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.611 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd4432c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 kernel: tapfd4432c5-b0: entered promiscuous mode
Oct 02 12:47:05 compute-1 NetworkManager[44960]: <info>  [1759409225.6139] manager: (tapfd4432c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.618 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd4432c5-b0, col_values=(('external_ids', {'iface-id': 'd2e0cd82-7c1f-4194-aaaf-514fe24ec2a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:05 compute-1 ovn_controller[129257]: 2025-10-02T12:47:05Z|00528|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.623 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5796828b-ae5a-42b0-a6cb-a4e99503bf68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.625 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/fd4432c5-b907-49af-a666-2128c4085e24.pid.haproxy
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID fd4432c5-b907-49af-a666-2128c4085e24
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:05.627 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'env', 'PROCESS_TAG=haproxy-fd4432c5-b907-49af-a666-2128c4085e24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd4432c5-b907-49af-a666-2128c4085e24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.642 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.682 2 DEBUG nova.compute.manager [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.683 2 DEBUG oslo_concurrency.lockutils [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:05 compute-1 nova_compute[230518]: 2025-10-02 12:47:05.684 2 DEBUG nova.compute.manager [req-c771aca4-899c-44df-847c-233ecf3001bb req-81ae9dd2-68c5-4c85-a6e2-eea4f0238f73 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Processing event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e307 e307: 3 total, 3 up, 3 in
Oct 02 12:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3728934185' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:47:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:47:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:05 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:05.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:06 compute-1 podman[279369]: 2025-10-02 12:47:06.000727562 +0000 UTC m=+0.060340568 container create 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:47:06 compute-1 systemd[1]: Started libpod-conmon-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope.
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:47:06 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fed3ce3b5bf83c329c5a1cdc2d4894017038ce5e7f2b6cb2909063ada580d1f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:06 compute-1 podman[279369]: 2025-10-02 12:47:05.963556634 +0000 UTC m=+0.023169650 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:06 compute-1 podman[279369]: 2025-10-02 12:47:06.073687337 +0000 UTC m=+0.133300343 container init 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 12:47:06 compute-1 podman[279369]: 2025-10-02 12:47:06.07920862 +0000 UTC m=+0.138821606 container start 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:47:06 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : New worker (279391) forked
Oct 02 12:47:06 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : Loading success.
Oct 02 12:47:06 compute-1 ceph-mon[80926]: osdmap e307: 3 total, 3 up, 3 in
Oct 02 12:47:06 compute-1 ceph-mon[80926]: pgmap v2178: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 3.1 MiB/s wr, 134 op/s
Oct 02 12:47:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2285883654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4084135172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.857 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.858 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.8571339, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.859 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Started (Lifecycle Event)
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.862 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.865 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance spawned successfully.
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.865 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.909 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.910 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.910 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.911 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.911 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.912 2 DEBUG nova.virt.libvirt.driver [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.960 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:06 compute-1 nova_compute[230518]: 2025-10-02 12:47:06.963 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.003 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.004 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.858197, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.004 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Paused (Lifecycle Event)
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.026 2 INFO nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 10.76 seconds to spawn the instance on the hypervisor.
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.027 2 DEBUG nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.041 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.046 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409226.8616016, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.047 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Resumed (Lifecycle Event)
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.086 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.090 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.130 2 INFO nova.compute.manager [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Took 12.14 seconds to build instance.
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.163 2 DEBUG oslo_concurrency.lockutils [None req-5fc8f51f-eb0d-4825-a432-5e274154d5f6 ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e308 e308: 3 total, 3 up, 3 in
Oct 02 12:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.885 2 DEBUG nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG oslo_concurrency.lockutils [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.886 2 DEBUG nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:07 compute-1 nova_compute[230518]: 2025-10-02 12:47:07.886 2 WARNING nova.compute.manager [req-e7be5edc-eebc-4ef3-bcd2-4499027eb084 req-ecc1972c-4c56-42de-b61b-e6643d1388c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state None.
Oct 02 12:47:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200c786f0 =====
Oct 02 12:47:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200c786f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:07 compute-1 radosgw[82922]: beast: 0x7f9200c786f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:08 compute-1 nova_compute[230518]: 2025-10-02 12:47:08.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:08 compute-1 ceph-mon[80926]: osdmap e308: 3 total, 3 up, 3 in
Oct 02 12:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3847758872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2976661906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:08 compute-1 ovn_controller[129257]: 2025-10-02T12:47:08Z|00529|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:47:08 compute-1 nova_compute[230518]: 2025-10-02 12:47:08.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:08 compute-1 ovn_controller[129257]: 2025-10-02T12:47:08Z|00530|binding|INFO|Releasing lport d2e0cd82-7c1f-4194-aaaf-514fe24ec2a7 from this chassis (sb_readonly=0)
Oct 02 12:47:08 compute-1 nova_compute[230518]: 2025-10-02 12:47:08.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:08 compute-1 nova_compute[230518]: 2025-10-02 12:47:08.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:08 compute-1 nova_compute[230518]: 2025-10-02 12:47:08.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:09 compute-1 nova_compute[230518]: 2025-10-02 12:47:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:09 compute-1 ceph-mon[80926]: pgmap v2180: 305 pgs: 305 active+clean; 240 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 108 KiB/s rd, 3.9 MiB/s wr, 162 op/s
Oct 02 12:47:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:09.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.126 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.127 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.127 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:47:10 compute-1 nova_compute[230518]: 2025-10-02 12:47:10.128 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:11 compute-1 ceph-mon[80926]: pgmap v2181: 305 pgs: 305 active+clean; 194 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 626 KiB/s rd, 3.4 MiB/s wr, 171 op/s
Oct 02 12:47:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 e309: 3 total, 3 up, 3 in
Oct 02 12:47:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:13 compute-1 ceph-mon[80926]: pgmap v2182: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Oct 02 12:47:13 compute-1 ceph-mon[80926]: osdmap e309: 3 total, 3 up, 3 in
Oct 02 12:47:13 compute-1 nova_compute[230518]: 2025-10-02 12:47:13.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:13 compute-1 nova_compute[230518]: 2025-10-02 12:47:13.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:13.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:14 compute-1 nova_compute[230518]: 2025-10-02 12:47:14.159 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:14 compute-1 podman[279444]: 2025-10-02 12:47:14.832333069 +0000 UTC m=+0.080200123 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:47:14 compute-1 podman[279443]: 2025-10-02 12:47:14.869420535 +0000 UTC m=+0.116936838 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:47:15 compute-1 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:15 compute-1 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:47:15 compute-1 nova_compute[230518]: 2025-10-02 12:47:15.201 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:47:15 compute-1 ceph-mon[80926]: pgmap v2184: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 210 op/s
Oct 02 12:47:15 compute-1 sudo[279486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:15 compute-1 sudo[279486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:15 compute-1 sudo[279486]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:15.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:15 compute-1 sudo[279511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:15 compute-1 sudo[279511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:15 compute-1 sudo[279511]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:15 compute-1 sudo[279536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:15 compute-1 sudo[279536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-1 sudo[279536]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-1 sudo[279561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:47:16 compute-1 sudo[279561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-1 sudo[279561]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-1 sudo[279616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:16 compute-1 sudo[279616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-1 sudo[279616]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-1 sudo[279641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:47:16 compute-1 sudo[279641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-1 sudo[279641]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-1 sudo[279666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:16 compute-1 sudo[279666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:16 compute-1 sudo[279666]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:16 compute-1 sudo[279691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 12:47:16 compute-1 sudo[279691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:17 compute-1 nova_compute[230518]: 2025-10-02 12:47:17.074 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:17 compute-1 nova_compute[230518]: 2025-10-02 12:47:17.077 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:17 compute-1 nova_compute[230518]: 2025-10-02 12:47:17.078 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.144168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237144209, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2543, "num_deletes": 262, "total_data_size": 5866672, "memory_usage": 5938296, "flush_reason": "Manual Compaction"}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237192499, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 3845464, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49804, "largest_seqno": 52342, "table_properties": {"data_size": 3834771, "index_size": 6931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22643, "raw_average_key_size": 21, "raw_value_size": 3813334, "raw_average_value_size": 3573, "num_data_blocks": 297, "num_entries": 1067, "num_filter_entries": 1067, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409048, "oldest_key_time": 1759409048, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 48381 microseconds, and 8812 cpu microseconds.
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.192546) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 3845464 bytes OK
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.192567) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196648) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196673) EVENT_LOG_v1 {"time_micros": 1759409237196666, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.196697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 5855197, prev total WAL file size 5876256, number of live WAL files 2.
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.198831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(3755KB)], [99(9043KB)]
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237198966, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 13106122, "oldest_snapshot_seqno": -1}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 7671 keys, 11159525 bytes, temperature: kUnknown
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237336306, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 11159525, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11108234, "index_size": 31019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198377, "raw_average_key_size": 25, "raw_value_size": 10971393, "raw_average_value_size": 1430, "num_data_blocks": 1219, "num_entries": 7671, "num_filter_entries": 7671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409237, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.269716001 +0000 UTC m=+0.022540629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.336722) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11159525 bytes
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.387689) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.3 rd, 81.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 8.8 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8207, records dropped: 536 output_compression: NoCompression
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.387735) EVENT_LOG_v1 {"time_micros": 1759409237387719, "job": 62, "event": "compaction_finished", "compaction_time_micros": 137479, "compaction_time_cpu_micros": 29609, "output_level": 6, "num_output_files": 1, "total_output_size": 11159525, "num_input_records": 8207, "num_output_records": 7671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237388856, "job": 62, "event": "table_file_deletion", "file_number": 101}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409237390459, "job": 62, "event": "table_file_deletion", "file_number": 99}
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.198701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:47:17.390580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:47:17 compute-1 ceph-mon[80926]: pgmap v2185: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 135 op/s
Oct 02 12:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3358379371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.402317071 +0000 UTC m=+0.155141669 container create 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 02 12:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:17 compute-1 systemd[1]: Started libpod-conmon-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope.
Oct 02 12:47:17 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.619687716 +0000 UTC m=+0.372512314 container init 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.628786882 +0000 UTC m=+0.381611520 container start 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.666794557 +0000 UTC m=+0.419619155 container attach 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:47:17 compute-1 laughing_bose[279769]: 167 167
Oct 02 12:47:17 compute-1 systemd[1]: libpod-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope: Deactivated successfully.
Oct 02 12:47:17 compute-1 conmon[279769]: conmon 4d56b8966ccf13f819d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope/container/memory.events
Oct 02 12:47:17 compute-1 podman[279752]: 2025-10-02 12:47:17.702316654 +0000 UTC m=+0.455141252 container died 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 12:47:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-f6dbd000f8faa9b53022ec548d908b8619b70c7155b173a2b63c93e12d833389-merged.mount: Deactivated successfully.
Oct 02 12:47:18 compute-1 podman[279752]: 2025-10-02 12:47:18.422624114 +0000 UTC m=+1.175448712 container remove 4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bose, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:18 compute-1 systemd[1]: libpod-conmon-4d56b8966ccf13f819d3c3da15ef2a92e65afea328c587f7104d4b0ef92bf22e.scope: Deactivated successfully.
Oct 02 12:47:18 compute-1 podman[279793]: 2025-10-02 12:47:18.614848449 +0000 UTC m=+0.029390855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 12:47:18 compute-1 podman[279793]: 2025-10-02 12:47:18.779382332 +0000 UTC m=+0.193924738 container create b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 12:47:18 compute-1 nova_compute[230518]: 2025-10-02 12:47:18.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:18 compute-1 nova_compute[230518]: 2025-10-02 12:47:18.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:18 compute-1 systemd[1]: Started libpod-conmon-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope.
Oct 02 12:47:18 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:47:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:19 compute-1 podman[279793]: 2025-10-02 12:47:19.024606024 +0000 UTC m=+0.439148450 container init b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 12:47:19 compute-1 podman[279793]: 2025-10-02 12:47:19.033503234 +0000 UTC m=+0.448045640 container start b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 02 12:47:19 compute-1 podman[279793]: 2025-10-02 12:47:19.128640715 +0000 UTC m=+0.543183281 container attach b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct 02 12:47:19 compute-1 ceph-mon[80926]: pgmap v2186: 305 pgs: 305 active+clean; 188 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 12 KiB/s wr, 187 op/s
Oct 02 12:47:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct 02 12:47:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct 02 12:47:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:19.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:20 compute-1 friendly_sammet[279809]: [
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:     {
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "available": false,
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "ceph_device": false,
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "lsm_data": {},
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "lvs": [],
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "path": "/dev/sr0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "rejected_reasons": [
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "Has a FileSystem",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "Insufficient space (<5GB)"
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         ],
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         "sys_api": {
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "actuators": null,
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "device_nodes": "sr0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "devname": "sr0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "human_readable_size": "482.00 KB",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "id_bus": "ata",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "model": "QEMU DVD-ROM",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "nr_requests": "2",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "parent": "/dev/sr0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "partitions": {},
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "path": "/dev/sr0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "removable": "1",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "rev": "2.5+",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "ro": "0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "rotational": "0",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "sas_address": "",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "sas_device_handle": "",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "scheduler_mode": "mq-deadline",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "sectors": 0,
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "sectorsize": "2048",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "size": 493568.0,
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "support_discard": "2048",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "type": "disk",
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:             "vendor": "QEMU"
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:         }
Oct 02 12:47:20 compute-1 friendly_sammet[279809]:     }
Oct 02 12:47:20 compute-1 friendly_sammet[279809]: ]
Oct 02 12:47:20 compute-1 systemd[1]: libpod-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Deactivated successfully.
Oct 02 12:47:20 compute-1 systemd[1]: libpod-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Consumed 1.160s CPU time.
Oct 02 12:47:20 compute-1 podman[280960]: 2025-10-02 12:47:20.357881838 +0000 UTC m=+0.024459680 container died b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct 02 12:47:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-f16622ef1bfc18d262fc4630a10b16b70cd4bd326de38ff512f224919c15fd8f-merged.mount: Deactivated successfully.
Oct 02 12:47:20 compute-1 podman[280960]: 2025-10-02 12:47:20.418949248 +0000 UTC m=+0.085527080 container remove b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sammet, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 12:47:20 compute-1 systemd[1]: libpod-conmon-b3f14dfd8fe36c732ee02031751c5c41252cd3c7daaffd1da51fdd3fb7efa925.scope: Deactivated successfully.
Oct 02 12:47:20 compute-1 sudo[279691]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:20 compute-1 nova_compute[230518]: 2025-10-02 12:47:20.956 2 DEBUG nova.network.neutron [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.025 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.297 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.298 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Creating file /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.298 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:21 compute-1 ceph-mon[80926]: pgmap v2187: 305 pgs: 305 active+clean; 202 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 220 op/s
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:47:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.758 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp" returned: 1 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96/da8e137c0c114648871cbc0f3f940840.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Creating directory /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96 on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.760 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:21 compute-1 ovn_controller[129257]: 2025-10-02T12:47:21Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:3d:b5 10.100.0.5
Oct 02 12:47:21 compute-1 ovn_controller[129257]: 2025-10-02T12:47:21Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:3d:b5 10.100.0.5
Oct 02 12:47:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:21.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.982 2 DEBUG oslo_concurrency.processutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/1c4025f8-834f-474c-87ee-59600e6ffb96" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:21 compute-1 nova_compute[230518]: 2025-10-02 12:47:21.988 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:47:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:23 compute-1 ceph-mon[80926]: pgmap v2188: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.4 MiB/s wr, 295 op/s
Oct 02 12:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3250697552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:23 compute-1 nova_compute[230518]: 2025-10-02 12:47:23.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-1 nova_compute[230518]: 2025-10-02 12:47:23.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:23.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:24 compute-1 ceph-mon[80926]: pgmap v2189: 305 pgs: 305 active+clean; 213 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Oct 02 12:47:24 compute-1 kernel: tap8001e1a0-d1 (unregistering): left promiscuous mode
Oct 02 12:47:24 compute-1 NetworkManager[44960]: <info>  [1759409244.6846] device (tap8001e1a0-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:47:24 compute-1 ovn_controller[129257]: 2025-10-02T12:47:24Z|00531|binding|INFO|Releasing lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 from this chassis (sb_readonly=0)
Oct 02 12:47:24 compute-1 ovn_controller[129257]: 2025-10-02T12:47:24Z|00532|binding|INFO|Setting lport 8001e1a0-d1c2-49ac-8630-690ed8ac9801 down in Southbound
Oct 02 12:47:24 compute-1 ovn_controller[129257]: 2025-10-02T12:47:24Z|00533|binding|INFO|Removing iface tap8001e1a0-d1 ovn-installed in OVS
Oct 02 12:47:24 compute-1 nova_compute[230518]: 2025-10-02 12:47:24.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:24 compute-1 nova_compute[230518]: 2025-10-02 12:47:24.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:24 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct 02 12:47:24 compute-1 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000007e.scope: Consumed 14.626s CPU time.
Oct 02 12:47:24 compute-1 systemd-machined[188247]: Machine qemu-62-instance-0000007e terminated.
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.004 2 INFO nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance shutdown successfully after 3 seconds.
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.008 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:3d:b5 10.100.0.5'], port_security=['fa:16:3e:48:3d:b5 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c4025f8-834f-474c-87ee-59600e6ffb96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd4432c5-b907-49af-a666-2128c4085e24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58b2fa4ee0cd4b97be1b303c203be14f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9c4b6dce-bc96-4e53-8c8b-5ae3df39cbb4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2f2b4343-0afb-453d-9cae-4eb33f3ee50c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8001e1a0-d1c2-49ac-8630-690ed8ac9801) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.009 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 in datapath fd4432c5-b907-49af-a666-2128c4085e24 unbound from our chassis
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.011 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd4432c5-b907-49af-a666-2128c4085e24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.013 2 INFO nova.virt.libvirt.driver [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Instance destroyed successfully.
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.012 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[03604aa0-cf85-42e1-9297-e12d10b5b6c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.013 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 namespace which is not needed anymore
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.014 2 DEBUG nova.virt.libvirt.vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:13Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.015 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-541864340-network", "vif_mac": "fa:16:3e:48:3d:b5"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.015 2 DEBUG nova.network.os_vif_util [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.016 2 DEBUG os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8001e1a0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.024 2 INFO os_vif [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.028 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.029 2 DEBUG nova.virt.libvirt.driver [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : haproxy version is 2.8.14-c23fe91
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [NOTICE]   (279389) : path to executable is /usr/sbin/haproxy
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : Exiting Master process...
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : Exiting Master process...
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [ALERT]    (279389) : Current worker (279391) exited with code 143 (Terminated)
Oct 02 12:47:25 compute-1 neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24[279385]: [WARNING]  (279389) : All workers exited. Exiting... (0)
Oct 02 12:47:25 compute-1 systemd[1]: libpod-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope: Deactivated successfully.
Oct 02 12:47:25 compute-1 podman[281012]: 2025-10-02 12:47:25.162664706 +0000 UTC m=+0.050186560 container died 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:47:25 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5-userdata-shm.mount: Deactivated successfully.
Oct 02 12:47:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-2fed3ce3b5bf83c329c5a1cdc2d4894017038ce5e7f2b6cb2909063ada580d1f-merged.mount: Deactivated successfully.
Oct 02 12:47:25 compute-1 podman[281012]: 2025-10-02 12:47:25.252275783 +0000 UTC m=+0.139797637 container cleanup 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:47:25 compute-1 systemd[1]: libpod-conmon-8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5.scope: Deactivated successfully.
Oct 02 12:47:25 compute-1 podman[281043]: 2025-10-02 12:47:25.30908496 +0000 UTC m=+0.036921372 container remove 8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.315 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4696785-5d60-45f9-b43f-f3ab39a0a18e]: (4, ('Thu Oct  2 12:47:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5)\n8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5\nThu Oct  2 12:47:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 (8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5)\n8584f105e7ef62496f2728e644d93b0c59545ac2e3c004be64d7cb6b746334b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[452f866f-819c-4c46-b451-66584dc4e04b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.318 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd4432c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:25 compute-1 kernel: tapfd4432c5-b0: left promiscuous mode
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.338 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[24a4c221-69c0-41da-b788-7f4c9854cfac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6ffbd8-e6f6-465c-8496-22f05aa86664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.370 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64483c0c-9001-42ce-8c39-4c40a074183f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.388 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4ce390-ea81-4314-b6a9-c2ccc782289e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708895, 'reachable_time': 18278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281059, 'error': None, 'target': 'ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.392 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd4432c5-b907-49af-a666-2128c4085e24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.392 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[fecdbb6a-0a22-40a3-aee2-f90bfb0f7967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:25 compute-1 systemd[1]: run-netns-ovnmeta\x2dfd4432c5\x2db907\x2d49af\x2da666\x2d2128c4085e24.mount: Deactivated successfully.
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.671 2 DEBUG neutronclient.v2_0.client [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2809155184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.882 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.883 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:25 compute-1 nova_compute[230518]: 2025-10-02 12:47:25.883 2 DEBUG oslo_concurrency.lockutils [None req-848ed91d-11f9-46fd-9b72-35378ede897f ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.942 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:25.943 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.566 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.567 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG oslo_concurrency.lockutils [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.568 2 DEBUG nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:26 compute-1 nova_compute[230518]: 2025-10-02 12:47:26.569 2 WARNING nova.compute.manager [req-fddb0fa3-cb03-45f1-8f88-8ec5dddd74cb req-29713bd3-5274-4f4b-baf4-2a1810d18169 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-unplugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:47:26 compute-1 podman[281060]: 2025-10-02 12:47:26.807428686 +0000 UTC m=+0.056757796 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 02 12:47:26 compute-1 podman[281061]: 2025-10-02 12:47:26.811394091 +0000 UTC m=+0.059420860 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 12:47:26 compute-1 ceph-mon[80926]: pgmap v2190: 305 pgs: 305 active+clean; 214 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:47:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:27 compute-1 sudo[281104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:47:27 compute-1 sudo[281104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:27 compute-1 sudo[281104]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:27 compute-1 sudo[281129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:47:27 compute-1 sudo[281129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:47:27 compute-1 sudo[281129]: pam_unix(sudo:session): session closed for user root
Oct 02 12:47:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:27.867 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:27 compute-1 nova_compute[230518]: 2025-10-02 12:47:27.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:27.868 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:47:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:27.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:47:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1989105194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.812 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.812 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG oslo_concurrency.lockutils [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.813 2 DEBUG nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.813 2 WARNING nova.compute.manager [req-698cae7b-99e1-421d-80fe-13fb7001682b req-141990b3-5501-4b73-839d-24de879197b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.842 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.842 2 DEBUG nova.compute.manager [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing instance network info cache due to event network-changed-8001e1a0-d1c2-49ac-8630-690ed8ac9801. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:28 compute-1 nova_compute[230518]: 2025-10-02 12:47:28.843 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Refreshing network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:29 compute-1 ceph-mon[80926]: pgmap v2191: 305 pgs: 305 active+clean; 239 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 272 op/s
Oct 02 12:47:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2696433292' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3889467188' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:29.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:30 compute-1 nova_compute[230518]: 2025-10-02 12:47:30.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:30 compute-1 nova_compute[230518]: 2025-10-02 12:47:30.934 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updated VIF entry in instance network info cache for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:30 compute-1 nova_compute[230518]: 2025-10-02 12:47:30.935 2 DEBUG nova.network.neutron [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:31 compute-1 nova_compute[230518]: 2025-10-02 12:47:31.415 2 DEBUG oslo_concurrency.lockutils [req-2440c514-ed3c-40de-9980-d1dad38ed1fd req-efbc3221-b83e-4718-b73c-7db7ef5c17c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:31 compute-1 ceph-mon[80926]: pgmap v2192: 305 pgs: 305 active+clean; 309 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.6 MiB/s wr, 255 op/s
Oct 02 12:47:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4205247774' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/991247951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:33 compute-1 ceph-mon[80926]: pgmap v2193: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 9.5 MiB/s wr, 317 op/s
Oct 02 12:47:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2906456310' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 e310: 3 total, 3 up, 3 in
Oct 02 12:47:33 compute-1 nova_compute[230518]: 2025-10-02 12:47:33.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1188429841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:34 compute-1 ceph-mon[80926]: osdmap e310: 3 total, 3 up, 3 in
Oct 02 12:47:35 compute-1 nova_compute[230518]: 2025-10-02 12:47:35.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:35 compute-1 nova_compute[230518]: 2025-10-02 12:47:35.112 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:35 compute-1 nova_compute[230518]: 2025-10-02 12:47:35.113 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:35 compute-1 ceph-mon[80926]: pgmap v2195: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 MiB/s wr, 231 op/s
Oct 02 12:47:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/232072933' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3088756800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:35.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:37 compute-1 nova_compute[230518]: 2025-10-02 12:47:37.227 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:37 compute-1 ceph-mon[80926]: pgmap v2196: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 11 MiB/s wr, 251 op/s
Oct 02 12:47:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:37.870 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:38 compute-1 ceph-mon[80926]: pgmap v2197: 305 pgs: 305 active+clean; 452 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.8 MiB/s wr, 340 op/s
Oct 02 12:47:38 compute-1 nova_compute[230518]: 2025-10-02 12:47:38.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:39 compute-1 ovn_controller[129257]: 2025-10-02T12:47:39Z|00534|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 12:47:39 compute-1 nova_compute[230518]: 2025-10-02 12:47:39.929 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409244.927992, 1c4025f8-834f-474c-87ee-59600e6ffb96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:39 compute-1 nova_compute[230518]: 2025-10-02 12:47:39.929 2 INFO nova.compute.manager [-] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] VM Stopped (Lifecycle Event)
Oct 02 12:47:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:40 compute-1 nova_compute[230518]: 2025-10-02 12:47:40.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:40 compute-1 nova_compute[230518]: 2025-10-02 12:47:40.122 2 DEBUG nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:40 compute-1 nova_compute[230518]: 2025-10-02 12:47:40.126 2 DEBUG nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:40 compute-1 nova_compute[230518]: 2025-10-02 12:47:40.229 2 INFO nova.compute.manager [None req-43c5a35c-a3dd-4a94-887e-b5e2a49cdf3f - - - - - -] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 12:47:40 compute-1 ceph-mon[80926]: pgmap v2198: 305 pgs: 305 active+clean; 464 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.9 MiB/s wr, 339 op/s
Oct 02 12:47:41 compute-1 nova_compute[230518]: 2025-10-02 12:47:41.098 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:41 compute-1 nova_compute[230518]: 2025-10-02 12:47:41.099 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:41 compute-1 nova_compute[230518]: 2025-10-02 12:47:41.109 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:41 compute-1 nova_compute[230518]: 2025-10-02 12:47:41.109 2 INFO nova.compute.claims [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:47:41 compute-1 nova_compute[230518]: 2025-10-02 12:47:41.660 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2583393535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.080 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.086 2 DEBUG nova.compute.provider_tree [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.185 2 DEBUG nova.scheduler.client.report [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.324 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.325 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.448 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.449 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:47:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.504 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.635 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:42 compute-1 nova_compute[230518]: 2025-10-02 12:47:42.812 2 DEBUG nova.policy [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3cd62a3208649c183d3fc2edc1c0f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:47:43 compute-1 ceph-mon[80926]: pgmap v2199: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.6 MiB/s wr, 277 op/s
Oct 02 12:47:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2583393535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.654 2 INFO nova.virt.block_device [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Booting with volume aee976fe-a491-4491-adf3-8d226e48711d at /dev/vda
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.816 2 DEBUG os_brick.utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.817 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.827 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.827 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7594c18e-f7d0-4f14-b48a-e834dbf3fa20]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.829 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.836 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.837 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[61e9ff9f-bd1f-4168-920f-8abe84966b1a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.838 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.845 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.846 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3b7462-30e2-4c2e-86f4-b16d8c617d09]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.847 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf8bfd4-23a7-46e2-b328-a361eff1a93f]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.847 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.880 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.883 2 DEBUG os_brick.initiator.connectors.lightos [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.884 2 DEBUG os_brick.utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:47:43 compute-1 nova_compute[230518]: 2025-10-02 12:47:43.884 2 DEBUG nova.virt.block_device [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: 9d9b2116-d013-4f8a-99a9-6ebdcb06d0df _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:47:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:43.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:45 compute-1 nova_compute[230518]: 2025-10-02 12:47:45.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:45 compute-1 nova_compute[230518]: 2025-10-02 12:47:45.175 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:45 compute-1 nova_compute[230518]: 2025-10-02 12:47:45.175 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:45 compute-1 nova_compute[230518]: 2025-10-02 12:47:45.176 2 DEBUG nova.compute.manager [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 12:47:45 compute-1 ceph-mon[80926]: pgmap v2200: 305 pgs: 305 active+clean; 499 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.5 MiB/s wr, 275 op/s
Oct 02 12:47:45 compute-1 podman[281185]: 2025-10-02 12:47:45.816770878 +0000 UTC m=+0.060604997 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 12:47:45 compute-1 podman[281184]: 2025-10-02 12:47:45.84675632 +0000 UTC m=+0.091762446 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 12:47:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2446061113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.535 2 DEBUG neutronclient.v2_0.client [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8001e1a0-d1c2-49ac-8630-690ed8ac9801 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquired lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG nova.network.neutron [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.536 2 DEBUG nova.objects.instance [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'info_cache' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:46 compute-1 nova_compute[230518]: 2025-10-02 12:47:46.755 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Successfully created port: bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.001 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.003 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.003 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating image(s)
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.003 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Ensure instance console log exists: /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:47 compute-1 nova_compute[230518]: 2025-10-02 12:47:47.004 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:47 compute-1 ceph-mon[80926]: pgmap v2201: 305 pgs: 305 active+clean; 507 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 304 op/s
Oct 02 12:47:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3155259324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:47.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:48 compute-1 nova_compute[230518]: 2025-10-02 12:47:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:49 compute-1 ceph-mon[80926]: pgmap v2202: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.8 MiB/s wr, 355 op/s
Oct 02 12:47:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2476830693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:49 compute-1 nova_compute[230518]: 2025-10-02 12:47:49.990 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:49 compute-1 nova_compute[230518]: 2025-10-02 12:47:49.991 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.025 2 DEBUG nova.network.neutron [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Updating instance_info_cache with network_info: [{"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.091 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Releasing lock "refresh_cache-1c4025f8-834f-474c-87ee-59600e6ffb96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.091 2 DEBUG nova.objects.instance [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4025f8-834f-474c-87ee-59600e6ffb96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.093 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.204 2 DEBUG nova.storage.rbd_utils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] removing snapshot(nova-resize) on rbd image(1c4025f8-834f-474c-87ee-59600e6ffb96_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.228 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.229 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.238 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.239 2 INFO nova.compute.claims [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.347 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Successfully updated port: bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.393 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.394 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.394 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.466 2 DEBUG nova.compute.manager [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.466 2 DEBUG nova.compute.manager [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.467 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.478 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e311 e311: 3 total, 3 up, 3 in
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.567 2 DEBUG nova.virt.libvirt.vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:46:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1232591881',display_name='tempest-DeleteServersTestJSON-server-1232591881',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1232591881',id=126,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58b2fa4ee0cd4b97be1b303c203be14f',ramdisk_id='',reservation_id='r-fwmxnmnm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1740298646',owner_user_name='tempest-DeleteServersTestJSON-1740298646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='ae7bcf1e6a3b4132a7068b0f863ca79c',uuid=1c4025f8-834f-474c-87ee-59600e6ffb96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.568 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converting VIF {"id": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "address": "fa:16:3e:48:3d:b5", "network": {"id": "fd4432c5-b907-49af-a666-2128c4085e24", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-541864340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58b2fa4ee0cd4b97be1b303c203be14f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8001e1a0-d1", "ovs_interfaceid": "8001e1a0-d1c2-49ac-8630-690ed8ac9801", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.569 2 DEBUG nova.network.os_vif_util [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.569 2 DEBUG os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8001e1a0-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.575 2 INFO os_vif [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:3d:b5,bridge_name='br-int',has_traffic_filtering=True,id=8001e1a0-d1c2-49ac-8630-690ed8ac9801,network=Network(fd4432c5-b907-49af-a666-2128c4085e24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8001e1a0-d1')
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.575 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.644 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2675420944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.988 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:50 compute-1 nova_compute[230518]: 2025-10-02 12:47:50.993 2 DEBUG nova.compute.provider_tree [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.025 2 DEBUG nova.scheduler.client.report [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.070 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.071 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.074 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.165 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.165 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.216 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.251 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.297 2 DEBUG oslo_concurrency.processutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.345 2 INFO nova.virt.block_device [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Booting with volume 83afd020-c3e2-4cb0-a15d-83739807079d at /dev/vda
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.472 2 DEBUG nova.policy [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e3cd62a3208649c183d3fc2edc1c0f18', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:47:51 compute-1 ceph-mon[80926]: pgmap v2203: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 304 op/s
Oct 02 12:47:51 compute-1 ceph-mon[80926]: osdmap e311: 3 total, 3 up, 3 in
Oct 02 12:47:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2675420944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.510 2 DEBUG os_brick.utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.512 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.521 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.522 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e19801-9c94-427b-8f83-0edba981702a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.523 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.530 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.530 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[76aee0c4-1e79-46d4-9802-0e5cc810fcc8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.532 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.539 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.540 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f032d607-7d5d-4b02-ad26-b90c4a5c1168]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.542 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed1bd55-59ba-4f14-ab36-4791b97a9aef]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.543 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.577 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.580 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.581 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.581 2 DEBUG os_brick.initiator.connectors.lightos [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.582 2 DEBUG os_brick.utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.582 2 DEBUG nova.virt.block_device [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: cd97b6b4-f57d-4d9e-b3b1-f6b4e1f9df0e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:47:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:47:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/560336633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.748 2 DEBUG oslo_concurrency.processutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:51 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.754 2 DEBUG nova.compute.provider_tree [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:47:51 compute-1 nova_compute[230518]: 2025-10-02 12:47:51.780 2 DEBUG nova.scheduler.client.report [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:47:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:51.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.054 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.068 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.069 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.069 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.069 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.070 2 DEBUG oslo_concurrency.lockutils [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.071 2 DEBUG nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] No waiting events found dispatching network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.071 2 WARNING nova.compute.manager [req-7370dcbc-53a4-45a8-bd00-249b8e1ce0c3 req-c2e856cf-6a72-458a-aadc-d349607ad890 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 1c4025f8-834f-474c-87ee-59600e6ffb96] Received unexpected event network-vif-plugged-8001e1a0-d1c2-49ac-8630-690ed8ac9801 for instance with vm_state resized and task_state deleting.
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.221 2 INFO nova.scheduler.client.report [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Deleted allocation for migration 31253cf4-d4b6-4114-ba91-912bf75a32d5
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.345 2 DEBUG oslo_concurrency.lockutils [None req-d136e0ac-2599-4d09-b491-5ae42ca9c34c ae7bcf1e6a3b4132a7068b0f863ca79c 58b2fa4ee0cd4b97be1b303c203be14f - - default default] Lock "1c4025f8-834f-474c-87ee-59600e6ffb96" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/560336633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:47:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2696604845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.721 2 DEBUG nova.network.neutron [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.769 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.770 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance network_info: |[{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.770 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.771 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.776 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start _get_guest_xml network_info=[{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-aee976fe-a491-4491-adf3-8d226e48711d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'aee976fe-a491-4491-adf3-8d226e48711d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'attached_at': '', 'detached_at': '', 'volume_id': 'aee976fe-a491-4491-adf3-8d226e48711d', 'serial': 'aee976fe-a491-4491-adf3-8d226e48711d'}, 'boot_index': 0, 'attachment_id': '9d9b2116-d013-4f8a-99a9-6ebdcb06d0df', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.781 2 WARNING nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.787 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.788 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.795 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.796 2 DEBUG nova.virt.libvirt.host [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.797 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.797 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.798 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.798 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.799 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.800 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.801 2 DEBUG nova.virt.hardware [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.834 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:52 compute-1 nova_compute[230518]: 2025-10-02 12:47:52.839 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:47:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1678523138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:53 compute-1 nova_compute[230518]: 2025-10-02 12:47:53.298 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:53 compute-1 ceph-mon[80926]: pgmap v2205: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2696604845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1678523138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:53 compute-1 nova_compute[230518]: 2025-10-02 12:47:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.270 2 DEBUG nova.virt.libvirt.vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.270 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.271 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.272 2 DEBUG nova.objects.instance [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.531 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Successfully created port: a568d61d-6863-474f-83f4-ba38b88de19a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <uuid>4b2aefbb-92cb-4a24-9ad2-884a12fa514c</uuid>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <name>instance-00000082</name>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1530906949</nova:name>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:47:52</nova:creationTime>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:user uuid="e3cd62a3208649c183d3fc2edc1c0f18">tempest-TestInstancesWithCinderVolumes-621751307-project-member</nova:user>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:project uuid="d3e0300f3cf5493d8a9e62e2c4a95767">tempest-TestInstancesWithCinderVolumes-621751307</nova:project>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <nova:port uuid="bf58273a-e5f6-4e36-bb1e-7ca0c2462d54">
Oct 02 12:47:54 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <system>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="serial">4b2aefbb-92cb-4a24-9ad2-884a12fa514c</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="uuid">4b2aefbb-92cb-4a24-9ad2-884a12fa514c</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </system>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <os>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </os>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <features>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </features>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config">
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </source>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-aee976fe-a491-4491-adf3-8d226e48711d">
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </source>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:47:54 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <serial>aee976fe-a491-4491-adf3-8d226e48711d</serial>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:41:04:35"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <target dev="tapbf58273a-e5"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/console.log" append="off"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <video>
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </video>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:47:54 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:47:54 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:47:54 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:47:54 compute-1 nova_compute[230518]: </domain>
Oct 02 12:47:54 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Preparing to wait for external event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.827 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.828 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.828 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.virt.libvirt.vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner
_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:42Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.829 2 DEBUG nova.network.os_vif_util [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.830 2 DEBUG os_vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf58273a-e5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf58273a-e5, col_values=(('external_ids', {'iface-id': 'bf58273a-e5f6-4e36-bb1e-7ca0c2462d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:04:35', 'vm-uuid': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-1 NetworkManager[44960]: <info>  [1759409274.8380] manager: (tapbf58273a-e5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.843 2 INFO os_vif [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5')
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.985 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.986 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.986 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:47:54 compute-1 nova_compute[230518]: 2025-10-02 12:47:54.986 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Using config drive
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.017 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.024 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.025 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.026 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating image(s)
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.026 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.026 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Ensure instance console log exists: /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:55 compute-1 nova_compute[230518]: 2025-10-02 12:47:55.027 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:55 compute-1 ceph-mon[80926]: pgmap v2206: 305 pgs: 305 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Oct 02 12:47:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2330067773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.084 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Creating config drive at /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.091 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf99kfo4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.222 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf99kfo4" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.354 2 DEBUG nova.storage.rbd_utils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.358 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:47:56 compute-1 ceph-mon[80926]: pgmap v2207: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.9 MiB/s wr, 218 op/s
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.680 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.681 2 DEBUG nova.network.neutron [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.731 2 DEBUG oslo_concurrency.lockutils [req-c47b3e18-4ef3-4415-9d92-9d3f47f264d2 req-3877ae26-0321-4878-a447-17ae271b14c1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.974 2 DEBUG oslo_concurrency.processutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config 4b2aefbb-92cb-4a24-9ad2-884a12fa514c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:47:56 compute-1 nova_compute[230518]: 2025-10-02 12:47:56.975 2 INFO nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deleting local config drive /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c/disk.config because it was imported into RBD.
Oct 02 12:47:57 compute-1 kernel: tapbf58273a-e5: entered promiscuous mode
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.0498] manager: (tapbf58273a-e5): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-1 ovn_controller[129257]: 2025-10-02T12:47:57Z|00535|binding|INFO|Claiming lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for this chassis.
Oct 02 12:47:57 compute-1 ovn_controller[129257]: 2025-10-02T12:47:57Z|00536|binding|INFO|bf58273a-e5f6-4e36-bb1e-7ca0c2462d54: Claiming fa:16:3e:41:04:35 10.100.0.9
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.065 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:04:35 10.100.0.9'], port_security=['fa:16:3e:41:04:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.066 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa bound to our chassis
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.068 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-1 systemd-machined[188247]: New machine qemu-63-instance-00000082.
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41701a26-afa5-46bb-983f-f3f9370a3c0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.091 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaa3b4df3-61 in ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.093 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaa3b4df3-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.093 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[523b2c98-233f-4b3d-ab41-4817a75e7ac7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5ade1-87cd-4dca-b76d-5d1ba425462d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.107 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[51011997-f717-48b9-bbce-1d58b7d0cffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 systemd[1]: Started Virtual Machine qemu-63-instance-00000082.
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.121 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2de02f7-fd6e-4e75-ab9a-7631b5cd5c8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 systemd-udevd[281467]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:57 compute-1 podman[281415]: 2025-10-02 12:47:57.144409188 +0000 UTC m=+0.116230326 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.1550] device (tapbf58273a-e5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.1569] device (tapbf58273a-e5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:47:57 compute-1 ovn_controller[129257]: 2025-10-02T12:47:57Z|00537|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 ovn-installed in OVS
Oct 02 12:47:57 compute-1 ovn_controller[129257]: 2025-10-02T12:47:57Z|00538|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 up in Southbound
Oct 02 12:47:57 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-1 podman[281420]: 2025-10-02 12:47:57.196588429 +0000 UTC m=+0.169627355 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.200 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f377e-e81b-48a0-ba3c-9902e3f63dce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.205 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e77ded6f-5498-42d5-bc18-6ed727cbd199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.2067] manager: (tapaa3b4df3-60): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Oct 02 12:47:57 compute-1 systemd-udevd[281472]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.240 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f00bfc00-2984-40e0-ae26-fa6491173ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.242 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0fde2cf3-8568-420c-93f3-9836daf83f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.2636] device (tapaa3b4df3-60): carrier: link connected
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.269 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a88ca0dd-08e9-44c1-9a95-706aad4933f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 e312: 3 total, 3 up, 3 in
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.284 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45adc37e-d753-4869-b6dc-44e97ac7eaab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281500, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.298 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db3cb08d-80a7-43a7-9ab7-1ca354c5197b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:817f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714082, 'tstamp': 714082}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281501, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.313 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[052d8370-de65-42c1-bdbf-a4b6183ebb22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281502, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[66a0cd25-330e-43b6-b7d3-05f727d60260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[451c080d-b367-4c3e-9a4c-54fe786e916c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.425 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-1 NetworkManager[44960]: <info>  [1759409277.4276] manager: (tapaa3b4df3-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct 02 12:47:57 compute-1 kernel: tapaa3b4df3-60: entered promiscuous mode
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.433 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:47:57 compute-1 ovn_controller[129257]: 2025-10-02T12:47:57Z|00539|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.437 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.438 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[79c48e7a-9db9-43f9-a852-39b43695ff0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.439 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/aa3b4df3-6044-4a53-8039-c9a5c05725aa.pid.haproxy
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:47:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:47:57.440 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'env', 'PROCESS_TAG=haproxy-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aa3b4df3-6044-4a53-8039-c9a5c05725aa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.476 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Successfully updated port: a568d61d-6863-474f-83f4-ba38b88de19a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:47:57 compute-1 nova_compute[230518]: 2025-10-02 12:47:57.559 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:47:57 compute-1 podman[281576]: 2025-10-02 12:47:57.803199994 +0000 UTC m=+0.052926576 container create b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:47:57 compute-1 systemd[1]: Started libpod-conmon-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope.
Oct 02 12:47:57 compute-1 podman[281576]: 2025-10-02 12:47:57.772660614 +0000 UTC m=+0.022387176 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:47:57 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:47:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c561c2b937b63684f4fa80d6a77b8b706c94b8f7ce5c4ada629e45417aa2f7f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:47:57 compute-1 podman[281576]: 2025-10-02 12:47:57.891123318 +0000 UTC m=+0.140849880 container init b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:47:57 compute-1 podman[281576]: 2025-10-02 12:47:57.896155416 +0000 UTC m=+0.145881958 container start b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:47:57 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : New worker (281597) forked
Oct 02 12:47:57 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : Loading success.
Oct 02 12:47:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:47:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:47:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:47:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.166 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.1663046, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.167 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Started (Lifecycle Event)
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.235 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.240 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.166569, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.241 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Paused (Lifecycle Event)
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.281 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.302 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:58 compute-1 ceph-mon[80926]: osdmap e312: 3 total, 3 up, 3 in
Oct 02 12:47:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2389259163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/573455435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.467 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.558 2 DEBUG nova.compute.manager [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.558 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG oslo_concurrency.lockutils [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.559 2 DEBUG nova.compute.manager [req-13e88cb5-bfda-4f81-b8c0-c622e8c2e4c2 req-5db2dcfc-9815-49ea-a01f-05694f76b253 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Processing event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.560 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.569 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409278.5633876, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.570 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Resumed (Lifecycle Event)
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.573 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.576 2 INFO nova.virt.libvirt.driver [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance spawned successfully.
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.576 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.594 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.597 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.622 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.622 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.623 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.624 2 DEBUG nova.virt.libvirt.driver [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.628 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.702 2 INFO nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 11.70 seconds to spawn the instance on the hypervisor.
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.702 2 DEBUG nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:47:58 compute-1 nova_compute[230518]: 2025-10-02 12:47:58.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:59 compute-1 nova_compute[230518]: 2025-10-02 12:47:59.147 2 INFO nova.compute.manager [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 18.97 seconds to build instance.
Oct 02 12:47:59 compute-1 ceph-mon[80926]: pgmap v2209: 305 pgs: 305 active+clean; 469 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 808 KiB/s rd, 5.0 MiB/s wr, 222 op/s
Oct 02 12:47:59 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct 02 12:47:59 compute-1 nova_compute[230518]: 2025-10-02 12:47:59.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:47:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:47:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:47:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:47:59.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:00 compute-1 nova_compute[230518]: 2025-10-02 12:48:00.849 2 DEBUG nova.network.neutron [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG nova.compute.manager [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG nova.compute.manager [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.510 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:01 compute-1 ceph-mon[80926]: pgmap v2210: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 6.7 MiB/s wr, 256 op/s
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.636 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.637 2 DEBUG oslo_concurrency.lockutils [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.637 2 DEBUG nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.637 2 WARNING nova.compute.manager [req-88beff1d-2969-4c03-b10a-2f410935ab79 req-c0e9d660-3d04-4922-90a0-7b8de521b7d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received unexpected event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with vm_state active and task_state None.
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.700 2 DEBUG oslo_concurrency.lockutils [None req-c5418397-6142-4405-8aba-6448ecd8221c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.754 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.755 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance network_info: |[{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.755 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.756 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.760 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start _get_guest_xml network_info=[{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-83afd020-c3e2-4cb0-a15d-83739807079d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '83afd020-c3e2-4cb0-a15d-83739807079d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3b348c58-f179-41db-bd79-1fdea0ade389', 'attached_at': '', 'detached_at': '', 'volume_id': '83afd020-c3e2-4cb0-a15d-83739807079d', 'serial': '83afd020-c3e2-4cb0-a15d-83739807079d'}, 'boot_index': 0, 'attachment_id': 'cd97b6b4-f57d-4d9e-b3b1-f6b4e1f9df0e', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.765 2 WARNING nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.773 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.774 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.779 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.780 2 DEBUG nova.virt.libvirt.host [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.782 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.783 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.784 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.785 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.785 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.786 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.786 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.787 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.788 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.789 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.789 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.790 2 DEBUG nova.virt.hardware [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.826 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:01 compute-1 nova_compute[230518]: 2025-10-02 12:48:01.831 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:01.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:48:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3094106259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.278 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.493 2 DEBUG nova.virt.libvirt.vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_n
ame='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:51Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.494 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.495 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.496 2 DEBUG nova.objects.instance [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3094106259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.638 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <uuid>3b348c58-f179-41db-bd79-1fdea0ade389</uuid>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <name>instance-00000084</name>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:name>tempest-TestInstancesWithCinderVolumes-server-1396980789</nova:name>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:48:01</nova:creationTime>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:user uuid="e3cd62a3208649c183d3fc2edc1c0f18">tempest-TestInstancesWithCinderVolumes-621751307-project-member</nova:user>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:project uuid="d3e0300f3cf5493d8a9e62e2c4a95767">tempest-TestInstancesWithCinderVolumes-621751307</nova:project>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <nova:port uuid="a568d61d-6863-474f-83f4-ba38b88de19a">
Oct 02 12:48:02 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <system>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="serial">3b348c58-f179-41db-bd79-1fdea0ade389</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="uuid">3b348c58-f179-41db-bd79-1fdea0ade389</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </system>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <os>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </os>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <features>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </features>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3b348c58-f179-41db-bd79-1fdea0ade389_disk.config">
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-83afd020-c3e2-4cb0-a15d-83739807079d">
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </source>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:48:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <serial>83afd020-c3e2-4cb0-a15d-83739807079d</serial>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:fa:2f:46"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <target dev="tapa568d61d-68"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/console.log" append="off"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <video>
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </video>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:48:02 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:48:02 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:48:02 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:48:02 compute-1 nova_compute[230518]: </domain>
Oct 02 12:48:02 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Preparing to wait for external event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.639 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.640 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.640 2 DEBUG nova.virt.libvirt.vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner
_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:47:51Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.641 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.641 2 DEBUG nova.network.os_vif_util [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.642 2 DEBUG os_vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.645 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa568d61d-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa568d61d-68, col_values=(('external_ids', {'iface-id': 'a568d61d-6863-474f-83f4-ba38b88de19a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:2f:46', 'vm-uuid': '3b348c58-f179-41db-bd79-1fdea0ade389'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:02 compute-1 NetworkManager[44960]: <info>  [1759409282.6486] manager: (tapa568d61d-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:02 compute-1 nova_compute[230518]: 2025-10-02 12:48:02.655 2 INFO os_vif [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68')
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.010 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.010 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.011 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.011 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Using config drive
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.031 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.418 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.419 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.578 2 DEBUG nova.objects.instance [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.609 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Creating config drive at /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.614 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperbori22 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:03 compute-1 ceph-mon[80926]: pgmap v2211: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.644 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.645 2 DEBUG nova.network.neutron [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.748 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmperbori22" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.773 2 DEBUG nova.storage.rbd_utils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] rbd image 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.777 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.809 2 DEBUG oslo_concurrency.lockutils [req-4f91877e-30f7-41c5-983f-a5beeb79c7d2 req-6867a648-8cfa-4be9-8a31-fea607973ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:03 compute-1 nova_compute[230518]: 2025-10-02 12:48:03.922 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.138 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.139 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.139 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.140 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.140 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.406 2 DEBUG oslo_concurrency.processutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config 3b348c58-f179-41db-bd79-1fdea0ade389_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.407 2 INFO nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deleting local config drive /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389/disk.config because it was imported into RBD.
Oct 02 12:48:04 compute-1 kernel: tapa568d61d-68: entered promiscuous mode
Oct 02 12:48:04 compute-1 NetworkManager[44960]: <info>  [1759409284.4747] manager: (tapa568d61d-68): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct 02 12:48:04 compute-1 ovn_controller[129257]: 2025-10-02T12:48:04Z|00540|binding|INFO|Claiming lport a568d61d-6863-474f-83f4-ba38b88de19a for this chassis.
Oct 02 12:48:04 compute-1 ovn_controller[129257]: 2025-10-02T12:48:04Z|00541|binding|INFO|a568d61d-6863-474f-83f4-ba38b88de19a: Claiming fa:16:3e:fa:2f:46 10.100.0.7
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:04 compute-1 ovn_controller[129257]: 2025-10-02T12:48:04Z|00542|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a ovn-installed in OVS
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:04 compute-1 systemd-machined[188247]: New machine qemu-64-instance-00000084.
Oct 02 12:48:04 compute-1 systemd-udevd[281740]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:48:04 compute-1 systemd[1]: Started Virtual Machine qemu-64-instance-00000084.
Oct 02 12:48:04 compute-1 NetworkManager[44960]: <info>  [1759409284.5416] device (tapa568d61d-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:48:04 compute-1 NetworkManager[44960]: <info>  [1759409284.5429] device (tapa568d61d-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:48:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078617232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:04 compute-1 nova_compute[230518]: 2025-10-02 12:48:04.644 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:04 compute-1 ceph-mon[80926]: pgmap v2212: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 367 op/s
Oct 02 12:48:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2977171546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1389528664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1078617232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:48:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:48:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:05 compute-1 nova_compute[230518]: 2025-10-02 12:48:05.712 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409285.7104182, 3b348c58-f179-41db-bd79-1fdea0ade389 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:05 compute-1 nova_compute[230518]: 2025-10-02 12:48:05.714 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Started (Lifecycle Event)
Oct 02 12:48:05 compute-1 ovn_controller[129257]: 2025-10-02T12:48:05Z|00543|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a up in Southbound
Oct 02 12:48:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.957 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:2f:46 10.100.0.7'], port_security=['fa:16:3e:fa:2f:46 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3b348c58-f179-41db-bd79-1fdea0ade389', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a568d61d-6863-474f-83f4-ba38b88de19a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.960 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a568d61d-6863-474f-83f4-ba38b88de19a in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa bound to our chassis
Oct 02 12:48:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.962 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:48:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:05.981 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[65c28e84-4d5a-4d1f-b921-ebfc0dab71a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.027 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cca725d3-6c44-4e22-ba4c-8d76acec1bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.032 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[099b7d1b-f131-43bc-97d6-d7ec89076845]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1159840049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.078 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b89a3c0a-8e1d-4dd2-9f4d-e15b9208afd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.116 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91d0320b-3cf0-4489-935d-cbcd14feecd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 20272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281798, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.142 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8297ca-4cb7-43b3-a351-a19128110113]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714093, 'tstamp': 714093}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281799, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714097, 'tstamp': 714097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281799, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.145 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.149 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.150 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:06.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.156 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409285.7107584, 3b348c58-f179-41db-bd79-1fdea0ade389 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.157 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Paused (Lifecycle Event)
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.238 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.238 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.239 2 INFO nova.compute.manager [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attaching volume 59930c46-79e6-4eb5-b8a0-3382452117c0 to /dev/vdb
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.277 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.283 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.283 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.285 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.290 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.290 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.472 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4260MB free_disk=20.83069610595703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.481 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.662 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.662 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.663 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.825 2 DEBUG os_brick.utils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.826 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.838 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.838 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.839 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4faf7f24-d900-4007-8924-cef2f77c08de]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.873 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.884 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.884 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7a737271-911e-4d74-8825-47876b533e32]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.886 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.900 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.901 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d6433885-85f9-4c43-a902-c3575ef9bd02]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.903 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[86e6de3c-eb3a-41bd-8f66-8dec9855102b]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.903 2 DEBUG oslo_concurrency.processutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.936 2 DEBUG oslo_concurrency.processutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.939 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.940 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.940 2 DEBUG os_brick.initiator.connectors.lightos [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.941 2 DEBUG os_brick.utils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (114ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:48:06 compute-1 nova_compute[230518]: 2025-10-02 12:48:06.941 2 DEBUG nova.virt.block_device [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: 04cb9bff-78f6-41d7-bf08-73f086d3a288 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG nova.compute.manager [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.105 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.106 2 DEBUG oslo_concurrency.lockutils [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.106 2 DEBUG nova.compute.manager [req-b331eeac-df62-4b64-93b3-f696d5e131a8 req-5880c27d-c755-4ad8-b3b0-b0220e8d2235 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Processing event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.107 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.110 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409287.1100595, 3b348c58-f179-41db-bd79-1fdea0ade389 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.110 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Resumed (Lifecycle Event)
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.112 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.114 2 INFO nova.virt.libvirt.driver [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance spawned successfully.
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.115 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:48:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:48:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1625439027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.259 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.265 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.267 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.268 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.269 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.270 2 DEBUG nova.virt.libvirt.driver [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.283 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.289 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.358 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.360 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.408 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.408 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.420 2 INFO nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 12.40 seconds to spawn the instance on the hypervisor.
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.421 2 DEBUG nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:48:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.527 2 INFO nova.compute.manager [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 17.33 seconds to build instance.
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.721 2 DEBUG oslo_concurrency.lockutils [None req-1ae5766a-2adc-464d-b514-410687da49a9 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:07 compute-1 ceph-mon[80926]: pgmap v2213: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 4.5 MiB/s wr, 445 op/s
Oct 02 12:48:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/659319244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1625439027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.963 2 DEBUG nova.objects.instance [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:07 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.997 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to attach volume 59930c46-79e6-4eb5-b8a0-3382452117c0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:07.999 2 DEBUG nova.virt.libvirt.guest [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:48:08 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct 02 12:48:08 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:48:08 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:48:08 compute-1 nova_compute[230518]:   <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct 02 12:48:08 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:08 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:48:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.180 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.181 2 DEBUG nova.virt.libvirt.driver [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.408 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.409 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.409 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.539 2 DEBUG oslo_concurrency.lockutils [None req-1b91034b-d089-4432-af2d-3ebbc27a14f5 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:08 compute-1 nova_compute[230518]: 2025-10-02 12:48:08.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.350 2 DEBUG nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.351 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.351 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.352 2 DEBUG oslo_concurrency.lockutils [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.352 2 DEBUG nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:48:09 compute-1 nova_compute[230518]: 2025-10-02 12:48:09.352 2 WARNING nova.compute.manager [req-bc0b836c-9001-4051-8d49-b4c0571f081c req-e42e5ac9-c4f8-4378-aa16-1106dddeb35b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received unexpected event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with vm_state active and task_state None.
Oct 02 12:48:09 compute-1 ceph-mon[80926]: pgmap v2214: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 2.3 MiB/s wr, 341 op/s
Oct 02 12:48:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2255234982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4154427461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.035 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.035 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.056 2 DEBUG nova.objects.instance [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.113 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:10 compute-1 ovn_controller[129257]: 2025-10-02T12:48:10Z|00544|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.422 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.423 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.424 2 INFO nova.compute.manager [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attaching volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 to /dev/vdc
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.600 2 DEBUG os_brick.utils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.602 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.616 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.617 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[53f778fb-3eb1-4175-afcf-d81afd54276b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.618 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.627 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.627 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[d295a253-5d8e-4b8e-91c8-fc78d4834f63]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.629 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.637 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.638 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1879f6ae-f5ec-4ef6-8801-84464e17c6df]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[fb938594-cdc5-4389-a16e-529296498126]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.640 2 DEBUG oslo_concurrency.processutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.668 2 DEBUG oslo_concurrency.processutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.671 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.671 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG os_brick.initiator.connectors.lightos [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG os_brick.utils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:48:10 compute-1 nova_compute[230518]: 2025-10-02 12:48:10.672 2 DEBUG nova.virt.block_device [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating existing volume attachment record: dd282639-493e-4fcf-8294-2dd5ce05fa0b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:48:11 compute-1 ceph-mon[80926]: pgmap v2215: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.0 MiB/s wr, 314 op/s
Oct 02 12:48:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1974617506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.376 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.377 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.378 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.379 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.760 2 DEBUG nova.objects.instance [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.808 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to attach volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:48:11 compute-1 nova_compute[230518]: 2025-10-02 12:48:11.810 2 DEBUG nova.virt.libvirt.guest [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:48:11 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct 02 12:48:11 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:48:11 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:48:11 compute-1 nova_compute[230518]:   <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct 02 12:48:11 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:11 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:48:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:11.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:12.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/867284691' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.514 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.515 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.515 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.516 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.516 2 DEBUG nova.virt.libvirt.driver [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:41:04:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:48:12 compute-1 nova_compute[230518]: 2025-10-02 12:48:12.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.129 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.186 2 DEBUG oslo_concurrency.lockutils [None req-3ea0b061-38b9-498b-a22b-ac0f60764a7c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:13 compute-1 ceph-mon[80926]: pgmap v2216: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 484 KiB/s wr, 336 op/s
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.283 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.283 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.284 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:13 compute-1 nova_compute[230518]: 2025-10-02 12:48:13.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:15 compute-1 ceph-mon[80926]: pgmap v2217: 305 pgs: 305 active+clean; 489 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 418 KiB/s wr, 200 op/s
Oct 02 12:48:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3151108426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:15 compute-1 ovn_controller[129257]: 2025-10-02T12:48:15Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:04:35 10.100.0.9
Oct 02 12:48:15 compute-1 nova_compute[230518]: 2025-10-02 12:48:15.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:15 compute-1 NetworkManager[44960]: <info>  [1759409295.9318] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Oct 02 12:48:15 compute-1 NetworkManager[44960]: <info>  [1759409295.9328] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Oct 02 12:48:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:16 compute-1 ovn_controller[129257]: 2025-10-02T12:48:16Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:04:35 10.100.0.9
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:16 compute-1 ovn_controller[129257]: 2025-10-02T12:48:16Z|00545|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.299 2 DEBUG nova.compute.manager [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.300 2 DEBUG nova.compute.manager [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.300 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.301 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:16 compute-1 nova_compute[230518]: 2025-10-02 12:48:16.301 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1837585474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:16 compute-1 podman[281879]: 2025-10-02 12:48:16.830572761 +0000 UTC m=+0.052749000 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:48:16 compute-1 podman[281878]: 2025-10-02 12:48:16.86616267 +0000 UTC m=+0.094780781 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:48:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:17 compute-1 ceph-mon[80926]: pgmap v2218: 305 pgs: 305 active+clean; 481 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Oct 02 12:48:17 compute-1 nova_compute[230518]: 2025-10-02 12:48:17.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:18.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:18 compute-1 nova_compute[230518]: 2025-10-02 12:48:18.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.487 2 DEBUG nova.compute.manager [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.488 2 DEBUG nova.compute.manager [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.488 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:19 compute-1 ceph-mon[80926]: pgmap v2219: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 237 op/s
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.805 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.806 2 DEBUG nova.network.neutron [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.865 2 DEBUG oslo_concurrency.lockutils [req-7ca6bcf4-073d-461a-9595-68a0040a2614 req-545197fa-78b2-4702-94cb-b87cde737d64 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.866 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:19 compute-1 nova_compute[230518]: 2025-10-02 12:48:19.866 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:20.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:20 compute-1 ovn_controller[129257]: 2025-10-02T12:48:20Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:2f:46 10.100.0.7
Oct 02 12:48:20 compute-1 ovn_controller[129257]: 2025-10-02T12:48:20Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:2f:46 10.100.0.7
Oct 02 12:48:21 compute-1 nova_compute[230518]: 2025-10-02 12:48:21.617 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:21 compute-1 nova_compute[230518]: 2025-10-02 12:48:21.618 2 DEBUG nova.network.neutron [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:21 compute-1 nova_compute[230518]: 2025-10-02 12:48:21.700 2 DEBUG oslo_concurrency.lockutils [req-cc656376-2ff3-4dc9-8625-12299a0d508a req-7bb9c7c6-4ee7-4cfc-86a8-83fe3f534ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:21 compute-1 ceph-mon[80926]: pgmap v2220: 305 pgs: 305 active+clean; 512 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 242 op/s
Oct 02 12:48:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:21.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:22.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.658 2 DEBUG nova.compute.manager [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG nova.compute.manager [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.659 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:22 compute-1 nova_compute[230518]: 2025-10-02 12:48:22.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:23 compute-1 ceph-mon[80926]: pgmap v2221: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.3 MiB/s wr, 272 op/s
Oct 02 12:48:23 compute-1 nova_compute[230518]: 2025-10-02 12:48:23.279 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:48:23 compute-1 nova_compute[230518]: 2025-10-02 12:48:23.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:24 compute-1 nova_compute[230518]: 2025-10-02 12:48:24.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:25 compute-1 ceph-mon[80926]: pgmap v2222: 305 pgs: 305 active+clean; 480 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 833 KiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct 02 12:48:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4231190045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.944 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:25.945 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:26.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:26 compute-1 nova_compute[230518]: 2025-10-02 12:48:26.711 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:26 compute-1 nova_compute[230518]: 2025-10-02 12:48:26.711 2 DEBUG nova.network.neutron [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:26 compute-1 nova_compute[230518]: 2025-10-02 12:48:26.776 2 DEBUG oslo_concurrency.lockutils [req-6c6328dc-fef5-459e-91f5-9393224b51c1 req-d98c5924-a1fd-4a77-8408-f010c724b569 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 e313: 3 total, 3 up, 3 in
Oct 02 12:48:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:27 compute-1 nova_compute[230518]: 2025-10-02 12:48:27.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:27 compute-1 ceph-mon[80926]: pgmap v2223: 305 pgs: 305 active+clean; 454 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 6.0 MiB/s wr, 227 op/s
Oct 02 12:48:27 compute-1 sudo[281929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:27 compute-1 podman[281924]: 2025-10-02 12:48:27.805052608 +0000 UTC m=+0.058851222 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:27 compute-1 sudo[281929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-1 sudo[281929]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-1 podman[281925]: 2025-10-02 12:48:27.813289677 +0000 UTC m=+0.060190423 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:48:27 compute-1 sudo[281985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:27 compute-1 sudo[281985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-1 sudo[281985]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-1 sudo[282010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:27 compute-1 sudo[282010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:27 compute-1 sudo[282010]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:27 compute-1 sudo[282035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:48:27 compute-1 sudo[282035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:28 compute-1 sudo[282035]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-1 sudo[282080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:28 compute-1 sudo[282080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-1 sudo[282080]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-1 sudo[282105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:48:28 compute-1 sudo[282105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-1 sudo[282105]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-1 sudo[282130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:28 compute-1 sudo[282130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-1 sudo[282130]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:28 compute-1 sudo[282155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:48:28 compute-1 sudo[282155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:28 compute-1 nova_compute[230518]: 2025-10-02 12:48:28.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:28 compute-1 ceph-mon[80926]: osdmap e313: 3 total, 3 up, 3 in
Oct 02 12:48:28 compute-1 ceph-mon[80926]: pgmap v2225: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 577 KiB/s rd, 2.7 MiB/s wr, 134 op/s
Oct 02 12:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:28 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:29 compute-1 sudo[282155]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:30.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2481788006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/589268743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:31 compute-1 ceph-mon[80926]: pgmap v2226: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct 02 12:48:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:32.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:32 compute-1 nova_compute[230518]: 2025-10-02 12:48:32.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:32 compute-1 nova_compute[230518]: 2025-10-02 12:48:32.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:32.683 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:48:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:32.684 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:48:33 compute-1 ceph-mon[80926]: pgmap v2227: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:48:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:48:33 compute-1 nova_compute[230518]: 2025-10-02 12:48:33.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:34.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:34 compute-1 nova_compute[230518]: 2025-10-02 12:48:34.368 2 DEBUG nova.compute.manager [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:34 compute-1 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG nova.compute.manager [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:34 compute-1 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:34 compute-1 nova_compute[230518]: 2025-10-02 12:48:34.369 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:34 compute-1 nova_compute[230518]: 2025-10-02 12:48:34.370 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:36.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:36 compute-1 ceph-mon[80926]: pgmap v2228: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 246 KiB/s rd, 143 KiB/s wr, 59 op/s
Oct 02 12:48:36 compute-1 nova_compute[230518]: 2025-10-02 12:48:36.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:37 compute-1 nova_compute[230518]: 2025-10-02 12:48:37.056 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:37 compute-1 nova_compute[230518]: 2025-10-02 12:48:37.056 2 DEBUG nova.network.neutron [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:37 compute-1 nova_compute[230518]: 2025-10-02 12:48:37.094 2 DEBUG oslo_concurrency.lockutils [req-f24761e6-1bfb-4df9-8fd5-a0710c2cb4e6 req-b39e74a3-87f9-445a-8be4-9ed8aa51dadd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:37 compute-1 ceph-mon[80926]: pgmap v2229: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 81 KiB/s wr, 25 op/s
Oct 02 12:48:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:37 compute-1 nova_compute[230518]: 2025-10-02 12:48:37.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:38.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:38.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:38 compute-1 nova_compute[230518]: 2025-10-02 12:48:38.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:39 compute-1 ceph-mon[80926]: pgmap v2230: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 219 KiB/s wr, 24 op/s
Oct 02 12:48:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:40.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:40.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:40 compute-1 ceph-mon[80926]: pgmap v2231: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 201 KiB/s wr, 24 op/s
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.228 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.229 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.272 2 INFO nova.compute.manager [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Detaching volume 59930c46-79e6-4eb5-b8a0-3382452117c0
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.575 2 INFO nova.virt.block_device [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to driver detach volume 59930c46-79e6-4eb5-b8a0-3382452117c0 from mountpoint /dev/vdb
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.590 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.590 2 DEBUG nova.virt.libvirt.guest [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:41 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:48:41 compute-1 sudo[282212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:48:41 compute-1 sudo[282212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:41 compute-1 sudo[282212]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:41 compute-1 sudo[282237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:48:41 compute-1 sudo[282237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:48:41 compute-1 sudo[282237]: pam_unix(sudo:session): session closed for user root
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.700 2 INFO nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config.
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.700 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:48:41 compute-1 nova_compute[230518]: 2025-10-02 12:48:41.701 2 DEBUG nova.virt.libvirt.guest [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-59930c46-79e6-4eb5-b8a0-3382452117c0">
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <serial>59930c46-79e6-4eb5-b8a0-3382452117c0</serial>
Oct 02 12:48:41 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:48:41 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:41 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:48:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.050 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409322.0497136, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.052 2 DEBUG nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.055 2 INFO nova.virt.libvirt.driver [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config.
Oct 02 12:48:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.349 2 DEBUG nova.objects.instance [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:48:42.686 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:48:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e314 e314: 3 total, 3 up, 3 in
Oct 02 12:48:42 compute-1 nova_compute[230518]: 2025-10-02 12:48:42.804 2 DEBUG oslo_concurrency.lockutils [None req-a0e33e83-e09a-4d22-80a0-0c84659317c8 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:43 compute-1 ceph-mon[80926]: pgmap v2232: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 217 KiB/s wr, 26 op/s
Oct 02 12:48:43 compute-1 ceph-mon[80926]: osdmap e314: 3 total, 3 up, 3 in
Oct 02 12:48:43 compute-1 nova_compute[230518]: 2025-10-02 12:48:43.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4245125267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:46 compute-1 ceph-mon[80926]: pgmap v2234: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 245 KiB/s wr, 15 op/s
Oct 02 12:48:47 compute-1 ceph-mon[80926]: pgmap v2235: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 245 KiB/s wr, 23 op/s
Oct 02 12:48:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3566954741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2530885159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:48:47 compute-1 nova_compute[230518]: 2025-10-02 12:48:47.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:47 compute-1 podman[282265]: 2025-10-02 12:48:47.80911583 +0000 UTC m=+0.055082383 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:48:47 compute-1 podman[282264]: 2025-10-02 12:48:47.836572594 +0000 UTC m=+0.088313508 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:48:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:48.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.195 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.195 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.218 2 INFO nova.compute.manager [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Detaching volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.447 2 INFO nova.virt.block_device [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Attempting to driver detach volume 58e4ef18-7a31-4027-a9e7-0cc5f7920707 from mountpoint /dev/vdc
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.455 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.456 2 DEBUG nova.virt.libvirt.guest [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:48 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.620 2 INFO nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the persistent domain config.
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.621 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.621 2 DEBUG nova.virt.libvirt.guest [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-58e4ef18-7a31-4027-a9e7-0cc5f7920707">
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   </source>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <serial>58e4ef18-7a31-4027-a9e7-0cc5f7920707</serial>
Oct 02 12:48:48 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:48:48 compute-1 nova_compute[230518]: </disk>
Oct 02 12:48:48 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:48:48 compute-1 nova_compute[230518]: 2025-10-02 12:48:48.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:49 compute-1 ceph-mon[80926]: pgmap v2236: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 46 KiB/s wr, 30 op/s
Oct 02 12:48:49 compute-1 nova_compute[230518]: 2025-10-02 12:48:49.506 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409329.5057685, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:48:49 compute-1 nova_compute[230518]: 2025-10-02 12:48:49.507 2 DEBUG nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:48:49 compute-1 nova_compute[230518]: 2025-10-02 12:48:49.510 2 INFO nova.virt.libvirt.driver [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c from the live domain config.
Oct 02 12:48:49 compute-1 nova_compute[230518]: 2025-10-02 12:48:49.751 2 DEBUG nova.objects.instance [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:48:49 compute-1 nova_compute[230518]: 2025-10-02 12:48:49.832 2 DEBUG oslo_concurrency.lockutils [None req-e363509a-5851-44ec-a51b-35bb550e8d8c e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:48:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:50.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:48:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:48:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:51 compute-1 ceph-mon[80926]: pgmap v2237: 305 pgs: 305 active+clean; 506 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.9 MiB/s wr, 74 op/s
Oct 02 12:48:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2031016226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1118640825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2709375719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:51 compute-1 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG nova.compute.manager [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:48:51 compute-1 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG nova.compute.manager [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:48:51 compute-1 nova_compute[230518]: 2025-10-02 12:48:51.627 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:48:51 compute-1 nova_compute[230518]: 2025-10-02 12:48:51.628 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:48:51 compute-1 nova_compute[230518]: 2025-10-02 12:48:51.628 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:48:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:52.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:52 compute-1 nova_compute[230518]: 2025-10-02 12:48:52.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 e315: 3 total, 3 up, 3 in
Oct 02 12:48:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/67991890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1656398606' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3128083608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:53 compute-1 ceph-mon[80926]: pgmap v2238: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:53 compute-1 ceph-mon[80926]: osdmap e315: 3 total, 3 up, 3 in
Oct 02 12:48:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3908904066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:48:53 compute-1 nova_compute[230518]: 2025-10-02 12:48:53.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:54.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:48:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:48:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:54 compute-1 nova_compute[230518]: 2025-10-02 12:48:54.724 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:48:54 compute-1 nova_compute[230518]: 2025-10-02 12:48:54.725 2 DEBUG nova.network.neutron [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:48:54 compute-1 nova_compute[230518]: 2025-10-02 12:48:54.757 2 DEBUG oslo_concurrency.lockutils [req-4388aa3a-5271-45f2-b011-b03ae579e960 req-766126e1-0faa-4bb1-861d-a8b3fdcb89a7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:48:55 compute-1 ceph-mon[80926]: pgmap v2240: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Oct 02 12:48:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:48:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4197403709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:48:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:48:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:48:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:56.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:56 compute-1 ceph-mon[80926]: pgmap v2241: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 4.5 MiB/s wr, 107 op/s
Oct 02 12:48:57 compute-1 nova_compute[230518]: 2025-10-02 12:48:57.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:48:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:48:58.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:48:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:48:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:48:58.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:48:58 compute-1 podman[282310]: 2025-10-02 12:48:58.81384522 +0000 UTC m=+0.070818748 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:48:58 compute-1 podman[282311]: 2025-10-02 12:48:58.826154537 +0000 UTC m=+0.069157696 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 12:48:58 compute-1 nova_compute[230518]: 2025-10-02 12:48:58.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:48:58 compute-1 ceph-mon[80926]: pgmap v2242: 305 pgs: 305 active+clean; 539 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.5 MiB/s wr, 220 op/s
Oct 02 12:49:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:00.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:00.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:01 compute-1 ceph-mon[80926]: pgmap v2243: 305 pgs: 305 active+clean; 498 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 284 op/s
Oct 02 12:49:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:02.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:02.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/832610661' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:02 compute-1 nova_compute[230518]: 2025-10-02 12:49:02.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:03 compute-1 ceph-mon[80926]: pgmap v2244: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 35 KiB/s wr, 317 op/s
Oct 02 12:49:03 compute-1 nova_compute[230518]: 2025-10-02 12:49:03.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:04.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:04.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1560321428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.096 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:05 compute-1 ceph-mon[80926]: pgmap v2245: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 32 KiB/s wr, 286 op/s
Oct 02 12:49:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1003549039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1519361877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1372453133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/722255255' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3544233615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.577 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.667 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.667 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.670 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.830 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4005MB free_disk=20.900550842285156GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.831 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.950 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:49:05 compute-1 nova_compute[230518]: 2025-10-02 12:49:05.951 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.051 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:06.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:06.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:49:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2052356705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.480 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.488 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:49:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3544233615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1743317268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2052356705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.525 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.564 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:49:06 compute-1 nova_compute[230518]: 2025-10-02 12:49:06.564 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:07 compute-1 nova_compute[230518]: 2025-10-02 12:49:07.563 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:07 compute-1 nova_compute[230518]: 2025-10-02 12:49:07.564 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:07 compute-1 nova_compute[230518]: 2025-10-02 12:49:07.565 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:07 compute-1 nova_compute[230518]: 2025-10-02 12:49:07.565 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:49:07 compute-1 ceph-mon[80926]: pgmap v2246: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 31 KiB/s wr, 285 op/s
Oct 02 12:49:07 compute-1 nova_compute[230518]: 2025-10-02 12:49:07.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:08.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:08 compute-1 ceph-mon[80926]: pgmap v2247: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct 02 12:49:08 compute-1 nova_compute[230518]: 2025-10-02 12:49:08.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/194072597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1080643873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:10 compute-1 nova_compute[230518]: 2025-10-02 12:49:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:10.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:10.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:10 compute-1 ceph-mon[80926]: pgmap v2248: 305 pgs: 305 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Oct 02 12:49:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1206636422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:11 compute-1 ovn_controller[129257]: 2025-10-02T12:49:11Z|00546|binding|INFO|Releasing lport fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c from this chassis (sb_readonly=0)
Oct 02 12:49:11 compute-1 nova_compute[230518]: 2025-10-02 12:49:11.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:49:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:12.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:12.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.347 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:49:12 compute-1 nova_compute[230518]: 2025-10-02 12:49:12.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:12 compute-1 ceph-mon[80926]: pgmap v2249: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 173 op/s
Oct 02 12:49:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3483710548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:13 compute-1 nova_compute[230518]: 2025-10-02 12:49:13.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:14.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:14.724 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:14.725 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:14 compute-1 nova_compute[230518]: 2025-10-02 12:49:14.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:14 compute-1 ceph-mon[80926]: pgmap v2250: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Oct 02 12:49:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3924265383' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2830854995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:16 compute-1 ceph-mon[80926]: pgmap v2251: 305 pgs: 305 active+clean; 495 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 509 KiB/s rd, 5.1 MiB/s wr, 150 op/s
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.664 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.685 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.686 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.686 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:49:17 compute-1 nova_compute[230518]: 2025-10-02 12:49:17.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:18.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:18 compute-1 podman[282391]: 2025-10-02 12:49:18.799227734 +0000 UTC m=+0.049213655 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 12:49:18 compute-1 podman[282390]: 2025-10-02 12:49:18.82715196 +0000 UTC m=+0.080710254 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:49:18 compute-1 nova_compute[230518]: 2025-10-02 12:49:18.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:18 compute-1 ceph-mon[80926]: pgmap v2252: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 214 op/s
Oct 02 12:49:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:19.727 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:49:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:20.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:20.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:20 compute-1 ceph-mon[80926]: pgmap v2253: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 236 op/s
Oct 02 12:49:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:22 compute-1 nova_compute[230518]: 2025-10-02 12:49:22.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:22 compute-1 ceph-mon[80926]: pgmap v2254: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Oct 02 12:49:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:23 compute-1 nova_compute[230518]: 2025-10-02 12:49:23.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:25 compute-1 ceph-mon[80926]: pgmap v2255: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct 02 12:49:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/526865586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:26.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:27 compute-1 ceph-mon[80926]: pgmap v2256: 305 pgs: 305 active+clean; 448 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 216 op/s
Oct 02 12:49:27 compute-1 nova_compute[230518]: 2025-10-02 12:49:27.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:28 compute-1 nova_compute[230518]: 2025-10-02 12:49:28.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:29 compute-1 ceph-mon[80926]: pgmap v2257: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:49:29 compute-1 podman[282436]: 2025-10-02 12:49:29.806191334 +0000 UTC m=+0.057027360 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible)
Oct 02 12:49:29 compute-1 podman[282435]: 2025-10-02 12:49:29.826148571 +0000 UTC m=+0.081501399 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 12:49:29 compute-1 nova_compute[230518]: 2025-10-02 12:49:29.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:30.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:31 compute-1 ceph-mon[80926]: pgmap v2258: 305 pgs: 305 active+clean; 455 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Oct 02 12:49:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:32 compute-1 nova_compute[230518]: 2025-10-02 12:49:32.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:33 compute-1 ceph-mon[80926]: pgmap v2259: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1010 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Oct 02 12:49:33 compute-1 nova_compute[230518]: 2025-10-02 12:49:33.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:34.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:34 compute-1 nova_compute[230518]: 2025-10-02 12:49:34.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:35 compute-1 ceph-mon[80926]: pgmap v2260: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 02 12:49:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3589274330' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:37 compute-1 ceph-mon[80926]: pgmap v2261: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 2.3 MiB/s wr, 94 op/s
Oct 02 12:49:37 compute-1 nova_compute[230518]: 2025-10-02 12:49:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:49:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:38.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:49:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:38 compute-1 ceph-mon[80926]: pgmap v2262: 305 pgs: 305 active+clean; 462 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.5 MiB/s wr, 101 op/s
Oct 02 12:49:38 compute-1 nova_compute[230518]: 2025-10-02 12:49:38.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:40.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:49:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:49:41 compute-1 ceph-mon[80926]: pgmap v2263: 305 pgs: 305 active+clean; 460 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct 02 12:49:41 compute-1 sudo[282474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:41 compute-1 sudo[282474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:41 compute-1 sudo[282474]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:41 compute-1 sudo[282499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:49:41 compute-1 sudo[282499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:41 compute-1 sudo[282499]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:42 compute-1 sudo[282524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:42 compute-1 sudo[282524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:42 compute-1 sudo[282524]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:42 compute-1 sudo[282549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:49:42 compute-1 sudo[282549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:42.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:42.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:42 compute-1 sudo[282549]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:42 compute-1 nova_compute[230518]: 2025-10-02 12:49:42.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:49:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:49:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:43 compute-1 ceph-mon[80926]: pgmap v2264: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 401 KiB/s wr, 46 op/s
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2258362784' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/931648065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:49:43 compute-1 nova_compute[230518]: 2025-10-02 12:49:43.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:44.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:44.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:49:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:49:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:49:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:49:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574049216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:45 compute-1 ceph-mon[80926]: pgmap v2265: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 368 KiB/s wr, 30 op/s
Oct 02 12:49:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/574049216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:46.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:49:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:46.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:49:47 compute-1 ceph-mon[80926]: pgmap v2266: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 370 KiB/s wr, 41 op/s
Oct 02 12:49:47 compute-1 nova_compute[230518]: 2025-10-02 12:49:47.264 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:47 compute-1 nova_compute[230518]: 2025-10-02 12:49:47.264 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:47 compute-1 nova_compute[230518]: 2025-10-02 12:49:47.293 2 DEBUG nova.objects.instance [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:47 compute-1 nova_compute[230518]: 2025-10-02 12:49:47.377 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:47 compute-1 nova_compute[230518]: 2025-10-02 12:49:47.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.076 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.076 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.076 2 INFO nova.compute.manager [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attaching volume ada5d5be-9d4c-4653-ac57-931c6322dea6 to /dev/vdb
Oct 02 12:49:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:48.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.597 2 DEBUG os_brick.utils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.598 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.615 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.615 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ec968f45-d8f3-492e-b013-bd45c10a426d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.616 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.625 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.625 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[71e84f13-595d-4e40-bfeb-1d35f66d76c5]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.627 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.637 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.638 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8bcf75-8d91-4b33-8141-549d18f6ec30]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[59be481c-d169-4a5b-8ac1-296b56430c58]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.639 2 DEBUG oslo_concurrency.processutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.671 2 DEBUG oslo_concurrency.processutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.675 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.676 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.676 2 DEBUG os_brick.initiator.connectors.lightos [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.677 2 DEBUG os_brick.utils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.677 2 DEBUG nova.virt.block_device [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: ccf2a014-7375-4c5a-86ef-fd81dce50e93 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:49:48 compute-1 nova_compute[230518]: 2025-10-02 12:49:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:49 compute-1 ceph-mon[80926]: pgmap v2267: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 187 KiB/s wr, 59 op/s
Oct 02 12:49:49 compute-1 podman[282613]: 2025-10-02 12:49:49.806074077 +0000 UTC m=+0.052975233 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:49:49 compute-1 podman[282612]: 2025-10-02 12:49:49.831808365 +0000 UTC m=+0.081703505 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:49:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2086177414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.322 2 DEBUG nova.objects.instance [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.400 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to attach volume ada5d5be-9d4c-4653-ac57-931c6322dea6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.405 2 DEBUG nova.virt.libvirt.guest [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:49:50 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct 02 12:49:50 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   </source>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:49:50 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:49:50 compute-1 nova_compute[230518]:   <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct 02 12:49:50 compute-1 nova_compute[230518]: </disk>
Oct 02 12:49:50 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.899 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:50 compute-1 nova_compute[230518]: 2025-10-02 12:49:50.900 2 DEBUG nova.virt.libvirt.driver [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:51 compute-1 nova_compute[230518]: 2025-10-02 12:49:51.415 2 DEBUG oslo_concurrency.lockutils [None req-13ddf221-724e-4ab0-9163-258f4f18fe2b e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:51 compute-1 ceph-mon[80926]: pgmap v2268: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.4 KiB/s wr, 41 op/s
Oct 02 12:49:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:52.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:52 compute-1 sudo[282677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:49:52 compute-1 sudo[282677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:52 compute-1 sudo[282677]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:52.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:52 compute-1 sudo[282702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:49:52 compute-1 sudo[282702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:49:52 compute-1 sudo[282702]: pam_unix(sudo:session): session closed for user root
Oct 02 12:49:52 compute-1 nova_compute[230518]: 2025-10-02 12:49:52.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:52 compute-1 ceph-mon[80926]: pgmap v2269: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.6 KiB/s wr, 42 op/s
Oct 02 12:49:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:49:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.327 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.327 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.345 2 DEBUG nova.objects.instance [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.394 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.833 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.834 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.834 2 INFO nova.compute.manager [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attaching volume 77e42bdf-1989-460b-aa47-82eb53d89208 to /dev/vdc
Oct 02 12:49:53 compute-1 nova_compute[230518]: 2025-10-02 12:49:53.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3670348211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:49:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.142 2 DEBUG os_brick.utils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.143 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.155 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.155 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[52cde937-70ed-46a6-9360-d3ad2dbc8d4b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.157 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.164 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.164 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[39e039a9-f8c4-41d1-8ccb-da1907114ec8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.166 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.179 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.179 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4484f38d-3e61-4f77-849c-ac2c2c930c8f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:54.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.181 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c21d13-e3ea-4886-a40f-186679155cf7]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.181 2 DEBUG oslo_concurrency.processutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.208 2 DEBUG oslo_concurrency.processutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.211 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.212 2 DEBUG os_brick.initiator.connectors.lightos [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.212 2 DEBUG os_brick.utils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:49:54 compute-1 nova_compute[230518]: 2025-10-02 12:49:54.213 2 DEBUG nova.virt.block_device [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating existing volume attachment record: 5fd89014-a8a9-427e-ba08-f0b8efc311f1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:49:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:49:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1772711169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:55.094 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:49:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:49:55.095 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.139 2 DEBUG nova.objects.instance [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.187 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to attach volume 77e42bdf-1989-460b-aa47-82eb53d89208 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.189 2 DEBUG nova.virt.libvirt.guest [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 12:49:55 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct 02 12:49:55 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   </source>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 12:49:55 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   </auth>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:49:55 compute-1 nova_compute[230518]:   <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct 02 12:49:55 compute-1 nova_compute[230518]: </disk>
Oct 02 12:49:55 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:49:55 compute-1 ceph-mon[80926]: pgmap v2270: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 KiB/s wr, 36 op/s
Oct 02 12:49:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1772711169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.371 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.372 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.372 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.373 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.373 2 DEBUG nova.virt.libvirt.driver [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] No VIF found with MAC fa:16:3e:fa:2f:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:49:55 compute-1 nova_compute[230518]: 2025-10-02 12:49:55.735 2 DEBUG oslo_concurrency.lockutils [None req-f2d52e4a-5a26-45f7-b195-565d531ea145 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:49:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:56.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:56 compute-1 nova_compute[230518]: 2025-10-02 12:49:56.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:57 compute-1 ceph-mon[80926]: pgmap v2271: 305 pgs: 305 active+clean; 458 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 KiB/s wr, 37 op/s
Oct 02 12:49:57 compute-1 nova_compute[230518]: 2025-10-02 12:49:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:49:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:49:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:49:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:49:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:49:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:49:58 compute-1 nova_compute[230518]: 2025-10-02 12:49:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:49:59 compute-1 ceph-mon[80926]: pgmap v2272: 305 pgs: 305 active+clean; 478 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 706 KiB/s wr, 46 op/s
Oct 02 12:50:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:00.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:00 compute-1 nova_compute[230518]: 2025-10-02 12:50:00.177 2 DEBUG nova.compute.manager [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:00 compute-1 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG nova.compute.manager [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:00 compute-1 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:00 compute-1 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:00 compute-1 nova_compute[230518]: 2025-10-02 12:50:00.178 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:00.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 12:50:00 compute-1 podman[282754]: 2025-10-02 12:50:00.83766939 +0000 UTC m=+0.074630522 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:50:00 compute-1 podman[282755]: 2025-10-02 12:50:00.843218355 +0000 UTC m=+0.076368237 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:01.096 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:01 compute-1 ceph-mon[80926]: pgmap v2273: 305 pgs: 305 active+clean; 497 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.5 MiB/s wr, 41 op/s
Oct 02 12:50:01 compute-1 nova_compute[230518]: 2025-10-02 12:50:01.968 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:01 compute-1 nova_compute[230518]: 2025-10-02 12:50:01.969 2 DEBUG nova.network.neutron [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:02.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:02 compute-1 nova_compute[230518]: 2025-10-02 12:50:02.239 2 DEBUG oslo_concurrency.lockutils [req-c0611fc1-4cd0-48f2-aced-1c2426a9e21b req-f73c9ede-f442-46ae-87cc-ad52fdcc1296 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:02 compute-1 nova_compute[230518]: 2025-10-02 12:50:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:03 compute-1 ceph-mon[80926]: pgmap v2274: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.871 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.872 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:03 compute-1 nova_compute[230518]: 2025-10-02 12:50:03.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:04.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:04.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:04 compute-1 ceph-mon[80926]: pgmap v2275: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/836225235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:04 compute-1 nova_compute[230518]: 2025-10-02 12:50:04.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:50:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:50:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.367 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.367 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.573 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.666 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.667 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.752 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.752 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG nova.compute.manager [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.753 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.907 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.908 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.913 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:50:05 compute-1 nova_compute[230518]: 2025-10-02 12:50:05.914 2 INFO nova.compute.claims [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:50:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3004531338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.368 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131426592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.809 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.815 2 DEBUG nova.compute.provider_tree [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.844 2 DEBUG nova.scheduler.client.report [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.878 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.881 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.975 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:50:06 compute-1 nova_compute[230518]: 2025-10-02 12:50:06.976 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.014 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.040 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.115 2 INFO nova.virt.block_device [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Booting with volume a7bdd212-0d34-40b8-9af9-06388d215028 at /dev/vda
Oct 02 12:50:07 compute-1 ceph-mon[80926]: pgmap v2276: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 12:50:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1879367731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4131426592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1040149648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.309 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.365 2 DEBUG nova.policy [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'af2648eefb594bc49309cccf408f7ae1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.405 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.478 2 DEBUG os_brick.utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.479 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.489 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.489 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f8600f8e-62f1-46b6-8578-cff6f6132896]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.490 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.499 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.499 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[da7599f8-b826-45c7-af89-fe1f4844fdb2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.500 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.515 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.515 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf6d3d8-47d0-4b07-8a6b-386afdab48fd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.516 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[4696f10a-e0b9-49cc-8a89-295d857af0e2]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.516 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.542 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.544 2 DEBUG os_brick.initiator.connectors.lightos [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.545 2 DEBUG os_brick.utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.545 2 DEBUG nova.virt.block_device [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating existing volume attachment record: 215e435c-8279-4bbc-bbb1-8c2a849c9afe _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.591 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4011MB free_disk=20.896709442138672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.592 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3b348c58-f179-41db-bd79-1fdea0ade389 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.822863) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407822893, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2214, "num_deletes": 260, "total_data_size": 5186037, "memory_usage": 5262624, "flush_reason": "Manual Compaction"}
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409407964766, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 3348140, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52347, "largest_seqno": 54556, "table_properties": {"data_size": 3339020, "index_size": 5614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19755, "raw_average_key_size": 20, "raw_value_size": 3320426, "raw_average_value_size": 3451, "num_data_blocks": 244, "num_entries": 962, "num_filter_entries": 962, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409237, "oldest_key_time": 1759409237, "file_creation_time": 1759409407, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 141980 microseconds, and 6876 cpu microseconds.
Oct 02 12:50:07 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:07 compute-1 nova_compute[230518]: 2025-10-02 12:50:07.993 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:08.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.964834) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 3348140 bytes OK
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:07.964862) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146848) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146875) EVENT_LOG_v1 {"time_micros": 1759409408146868, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.146895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 5175955, prev total WAL file size 5175955, number of live WAL files 2.
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.148071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303134' seq:0, type:0; will stop at (end)
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(3269KB)], [102(10MB)]
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408148136, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 14507665, "oldest_snapshot_seqno": -1}
Oct 02 12:50:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:08.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 8093 keys, 14348471 bytes, temperature: kUnknown
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408384749, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 14348471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14291160, "index_size": 35951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20293, "raw_key_size": 208420, "raw_average_key_size": 25, "raw_value_size": 14143924, "raw_average_value_size": 1747, "num_data_blocks": 1427, "num_entries": 8093, "num_filter_entries": 8093, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409408, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.384971) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14348471 bytes
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.468915) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.3 rd, 60.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.6 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.6) write-amplify(4.3) OK, records in: 8633, records dropped: 540 output_compression: NoCompression
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.468957) EVENT_LOG_v1 {"time_micros": 1759409408468942, "job": 64, "event": "compaction_finished", "compaction_time_micros": 236667, "compaction_time_cpu_micros": 55431, "output_level": 6, "num_output_files": 1, "total_output_size": 14348471, "num_input_records": 8633, "num_output_records": 8093, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408469656, "job": 64, "event": "table_file_deletion", "file_number": 104}
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409408471263, "job": 64, "event": "table_file_deletion", "file_number": 102}
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.147967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:08.471366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1040149648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/793802527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.500 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.507 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.553 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.619 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.620 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:08 compute-1 nova_compute[230518]: 2025-10-02 12:50:08.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755918285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.349 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.350 2 DEBUG nova.network.neutron [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.380 2 DEBUG oslo_concurrency.lockutils [req-9eae8641-369b-4d23-b4cb-ff37a7f32142 req-9525a026-0406-477a-b4c6-25b0ac5aac27 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.620 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.621 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:50:09 compute-1 ceph-mon[80926]: pgmap v2277: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 02 12:50:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/793802527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2755918285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.873 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.875 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.876 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating image(s)
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.876 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ensure instance console log exists: /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.877 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:09 compute-1 nova_compute[230518]: 2025-10-02 12:50:09.878 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:10.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:10 compute-1 nova_compute[230518]: 2025-10-02 12:50:10.582 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Successfully created port: 6f16f975-1155-4931-9798-72b46e8ca37f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:50:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3290704519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.474 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Successfully updated port: 6f16f975-1155-4931-9798-72b46e8ca37f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.506 2 DEBUG nova.compute.manager [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.507 2 DEBUG nova.compute.manager [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.507 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.508 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.508 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.511 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.512 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.512 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG nova.compute.manager [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG nova.compute.manager [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing instance network info cache due to event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.623 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:11 compute-1 nova_compute[230518]: 2025-10-02 12:50:11.780 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:50:12 compute-1 nova_compute[230518]: 2025-10-02 12:50:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:12 compute-1 nova_compute[230518]: 2025-10-02 12:50:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:12 compute-1 ceph-mon[80926]: pgmap v2278: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.1 MiB/s wr, 19 op/s
Oct 02 12:50:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3597875286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3005696517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:12.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 e316: 3 total, 3 up, 3 in
Oct 02 12:50:12 compute-1 nova_compute[230518]: 2025-10-02 12:50:12.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:13 compute-1 nova_compute[230518]: 2025-10-02 12:50:13.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:14 compute-1 ceph-mon[80926]: pgmap v2279: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 301 KiB/s wr, 8 op/s
Oct 02 12:50:14 compute-1 ceph-mon[80926]: osdmap e316: 3 total, 3 up, 3 in
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:50:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:14.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.326 2 DEBUG nova.network.neutron [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.397 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.514 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.515 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.541 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.542 2 DEBUG nova.network.neutron [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.915 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.915 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance network_info: |[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.916 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.916 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.919 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start _get_guest_xml network_info=[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'attached_at': '', 'detached_at': '', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'serial': 'a7bdd212-0d34-40b8-9af9-06388d215028'}, 'boot_index': 0, 'attachment_id': '215e435c-8279-4bbc-bbb1-8c2a849c9afe', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.922 2 WARNING nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.926 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.927 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.931 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.932 2 DEBUG nova.virt.libvirt.host [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.933 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.933 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.934 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.935 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.936 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.936 2 DEBUG nova.virt.hardware [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.962 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.965 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:14 compute-1 nova_compute[230518]: 2025-10-02 12:50:14.995 2 DEBUG oslo_concurrency.lockutils [req-b98f7e07-5c1f-47d7-b858-0d50bb8dc7fa req-4d6a359e-4ce6-4d3a-ac36-9cade10f97cb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2961129839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:15 compute-1 ceph-mon[80926]: pgmap v2281: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.571 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.633 2 DEBUG nova.virt.libvirt.vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1058292720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.634 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.634 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.635 2 DEBUG nova.objects.instance [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.802 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <uuid>33780b49-b5a1-4f3f-a6c5-a00011d53718</uuid>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <name>instance-00000088</name>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsV293TestJSON-server-1058292720</nova:name>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:50:14</nova:creationTime>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:user uuid="af2648eefb594bc49309cccf408f7ae1">tempest-ServerActionsV293TestJSON-365577023-project-member</nova:user>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:project uuid="1308a7eb298f49baaeaf3dc3a6acf592">tempest-ServerActionsV293TestJSON-365577023</nova:project>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <nova:port uuid="6f16f975-1155-4931-9798-72b46e8ca37f">
Oct 02 12:50:15 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <system>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="serial">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="uuid">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </system>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <os>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </os>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <features>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </features>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config">
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </source>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028">
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </source>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:50:15 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <serial>a7bdd212-0d34-40b8-9af9-06388d215028</serial>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:9e:0c:7f"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <target dev="tap6f16f975-11"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log" append="off"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <video>
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </video>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:50:15 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:50:15 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:50:15 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:50:15 compute-1 nova_compute[230518]: </domain>
Oct 02 12:50:15 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.803 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Preparing to wait for external event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.804 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.804 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.805 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.806 2 DEBUG nova.virt.libvirt.vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1058292720',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.806 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.807 2 DEBUG nova.network.os_vif_util [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.807 2 DEBUG os_vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.813 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f16f975-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.814 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f16f975-11, col_values=(('external_ids', {'iface-id': '6f16f975-1155-4931-9798-72b46e8ca37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0c:7f', 'vm-uuid': '33780b49-b5a1-4f3f-a6c5-a00011d53718'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-1 NetworkManager[44960]: <info>  [1759409415.8162] manager: (tap6f16f975-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.826 2 INFO os_vif [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.944 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No VIF found with MAC fa:16:3e:9e:0c:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.945 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Using config drive
Oct 02 12:50:15 compute-1 nova_compute[230518]: 2025-10-02 12:50:15.971 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:16.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:17 compute-1 nova_compute[230518]: 2025-10-02 12:50:17.136 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating config drive at /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config
Oct 02 12:50:17 compute-1 nova_compute[230518]: 2025-10-02 12:50:17.142 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2tnfey8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2961129839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/550337763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2081917777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:17 compute-1 nova_compute[230518]: 2025-10-02 12:50:17.279 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe2tnfey8" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:17 compute-1 nova_compute[230518]: 2025-10-02 12:50:17.307 2 DEBUG nova.storage.rbd_utils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:50:17 compute-1 nova_compute[230518]: 2025-10-02 12:50:17.310 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.160 2 DEBUG oslo_concurrency.processutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.850s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.162 2 INFO nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting local config drive /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config because it was imported into RBD.
Oct 02 12:50:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:18 compute-1 kernel: tap6f16f975-11: entered promiscuous mode
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.2259] manager: (tap6f16f975-11): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Oct 02 12:50:18 compute-1 ovn_controller[129257]: 2025-10-02T12:50:18Z|00547|binding|INFO|Claiming lport 6f16f975-1155-4931-9798-72b46e8ca37f for this chassis.
Oct 02 12:50:18 compute-1 ovn_controller[129257]: 2025-10-02T12:50:18Z|00548|binding|INFO|6f16f975-1155-4931-9798-72b46e8ca37f: Claiming fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 ovn_controller[129257]: 2025-10-02T12:50:18Z|00549|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f ovn-installed in OVS
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 systemd-machined[188247]: New machine qemu-65-instance-00000088.
Oct 02 12:50:18 compute-1 systemd[1]: Started Virtual Machine qemu-65-instance-00000088.
Oct 02 12:50:18 compute-1 systemd-udevd[282980]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.3069] device (tap6f16f975-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.3080] device (tap6f16f975-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:50:18 compute-1 ovn_controller[129257]: 2025-10-02T12:50:18Z|00550|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f up in Southbound
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.308 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.308 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 bound to our chassis
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.310 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.324 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f8e7c7-193b-4cd3-af3f-722a4fc12903]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.325 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea35968-51 in ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea35968-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7664bc5c-5cdc-42c6-8dae-4f5d434086e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43d440a1-8985-4fae-94df-53864151264a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.342 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[92850906-5c94-4cee-b28f-91627263d12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0bbcca-f28a-473e-a923-71d9e804834a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.400 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0bb5a2-3a5e-4317-b161-e424f77bd78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.405 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[352edc2e-ac62-4e83-b057-734c2f41d389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 systemd-udevd[282982]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.4076] manager: (tap1ea35968-50): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.439 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[082822ec-d233-4d5b-8edf-6dedad58fabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.442 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e758f6d9-d845-4ec6-80af-5645ae208a92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.4667] device (tap1ea35968-50): carrier: link connected
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.473 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[51902685-bb4e-44c0-a91f-158b37888215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.492 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ce06f-eb4c-4ef5-b53c-5af230728cdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728202, 'reachable_time': 29585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283013, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.507 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[20d8bcbf-3efc-419f-b2cc-cc5a41b28e1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:b578'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728202, 'tstamp': 728202}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283014, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.524 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[979e5d35-6d20-4c64-9f28-083781d3b288]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 168], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728202, 'reachable_time': 29585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283015, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.533 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a390b15-df31-49a9-bbda-800139ee4e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ceph-mon[80926]: pgmap v2282: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.615 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bff97ed5-702b-45b0-b6c5-0a84ef6e1121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.617 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea35968-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 kernel: tap1ea35968-50: entered promiscuous mode
Oct 02 12:50:18 compute-1 NetworkManager[44960]: <info>  [1759409418.6205] manager: (tap1ea35968-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct 02 12:50:18 compute-1 ovn_controller[129257]: 2025-10-02T12:50:18Z|00551|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea35968-50, col_values=(('external_ids', {'iface-id': '656124c9-fbda-4e47-b94b-fbe1ed24070e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.628 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.629 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b832282-6a93-462c-9da3-d7d2f17ae193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.630 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:50:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:18.630 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'env', 'PROCESS_TAG=haproxy-1ea35968-5cdb-414e-9226-6ba534628944', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea35968-5cdb-414e-9226-6ba534628944.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.651 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.651 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.652 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updated VIF entry in instance network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.653 2 DEBUG nova.network.neutron [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.768 2 DEBUG oslo_concurrency.lockutils [req-4853c0b7-ace8-4884-8ead-4fac1b496eea req-244c5ab5-140a-4c52-9611-772120b57b49 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.769 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.769 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.884 2 INFO nova.compute.manager [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Detaching volume ada5d5be-9d4c-4653-ac57-931c6322dea6
Oct 02 12:50:18 compute-1 nova_compute[230518]: 2025-10-02 12:50:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:19 compute-1 podman[283063]: 2025-10-02 12:50:18.964557281 +0000 UTC m=+0.028921608 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.114 2 DEBUG nova.compute.manager [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.115 2 DEBUG oslo_concurrency.lockutils [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.116 2 DEBUG nova.compute.manager [req-c925ad30-e4f1-43dc-bd81-dfe85a097744 req-b513d25d-83c8-4dba-9061-81794db4a7eb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Processing event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.151 2 INFO nova.virt.block_device [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to driver detach volume ada5d5be-9d4c-4653-ac57-931c6322dea6 from mountpoint /dev/vdb
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.166 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.168 2 DEBUG nova.virt.libvirt.guest [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   </source>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]: </disk>
Oct 02 12:50:19 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.329 2 INFO nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config.
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.330 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:50:19 compute-1 podman[283063]: 2025-10-02 12:50:19.330491352 +0000 UTC m=+0.394855659 container create 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.330 2 DEBUG nova.virt.libvirt.guest [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-ada5d5be-9d4c-4653-ac57-931c6322dea6">
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   </source>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <serial>ada5d5be-9d4c-4653-ac57-931c6322dea6</serial>
Oct 02 12:50:19 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 12:50:19 compute-1 nova_compute[230518]: </disk>
Oct 02 12:50:19 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:50:19 compute-1 systemd[1]: Started libpod-conmon-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope.
Oct 02 12:50:19 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:50:19 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2275299b59a6f31b3f5c4c1bc1ae63291b4b9fabe72845925a94cee91153691/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.605 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409419.6046026, 3b348c58-f179-41db-bd79-1fdea0ade389 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.607 2 DEBUG nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3b348c58-f179-41db-bd79-1fdea0ade389 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.611 2 INFO nova.virt.libvirt.driver [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdb from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config.
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.689 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.691 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6885846, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.692 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Started (Lifecycle Event)
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.695 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.699 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance spawned successfully.
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.700 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:50:19 compute-1 ceph-mon[80926]: pgmap v2283: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 223 KiB/s wr, 19 op/s
Oct 02 12:50:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1613571210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:19 compute-1 podman[283063]: 2025-10-02 12:50:19.745689808 +0000 UTC m=+0.810054135 container init 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:50:19 compute-1 podman[283063]: 2025-10-02 12:50:19.752475751 +0000 UTC m=+0.816840058 container start 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:50:19 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : New worker (283112) forked
Oct 02 12:50:19 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : Loading success.
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.906 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.907 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.907 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.908 2 DEBUG nova.virt.libvirt.driver [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.914 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.919 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.960 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.960 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6887245, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.961 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Paused (Lifecycle Event)
Oct 02 12:50:19 compute-1 nova_compute[230518]: 2025-10-02 12:50:19.978 2 DEBUG nova.objects.instance [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.035 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.040 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409419.6925225, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.041 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Resumed (Lifecycle Event)
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.072 2 INFO nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 10.20 seconds to spawn the instance on the hypervisor.
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.073 2 DEBUG nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.075 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.078 2 DEBUG oslo_concurrency.lockutils [None req-e329cfac-18ed-4258-9db2-f6b22c8949fa e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.084 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.125 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:50:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.166 2 INFO nova.compute.manager [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 14.31 seconds to build instance.
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.191 2 DEBUG oslo_concurrency.lockutils [None req-01cfeef9-ffab-4327-b33a-f8d66c7a5dce af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:20 compute-1 podman[283122]: 2025-10-02 12:50:20.814367615 +0000 UTC m=+0.066232449 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:50:20 compute-1 nova_compute[230518]: 2025-10-02 12:50:20.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:20 compute-1 podman[283121]: 2025-10-02 12:50:20.841127805 +0000 UTC m=+0.094545268 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 12:50:21 compute-1 ceph-mon[80926]: pgmap v2284: 305 pgs: 305 active+clean; 507 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 505 KiB/s rd, 241 KiB/s wr, 58 op/s
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.274 2 DEBUG nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.274 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG oslo_concurrency.lockutils [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.275 2 DEBUG nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:21 compute-1 nova_compute[230518]: 2025-10-02 12:50:21.276 2 WARNING nova.compute.manager [req-f3bc06da-8339-46d5-b967-669cad2cab14 req-6ea9d7e1-6ac0-4d0e-8c6d-f13f345e7922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state None.
Oct 02 12:50:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:23 compute-1 ceph-mon[80926]: pgmap v2285: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 171 op/s
Oct 02 12:50:23 compute-1 nova_compute[230518]: 2025-10-02 12:50:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:24.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.178 2 DEBUG nova.compute.manager [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG nova.compute.manager [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing instance network info cache due to event network-changed-6f16f975-1155-4931-9798-72b46e8ca37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.179 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Refreshing network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.542 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.543 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.562 2 INFO nova.compute.manager [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Detaching volume 77e42bdf-1989-460b-aa47-82eb53d89208
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.835 2 INFO nova.virt.block_device [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Attempting to driver detach volume 77e42bdf-1989-460b-aa47-82eb53d89208 from mountpoint /dev/vdc
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.849 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Attempting to detach device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.850 2 DEBUG nova.virt.libvirt.guest [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   </source>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]: </disk>
Oct 02 12:50:24 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.863 2 INFO nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the persistent domain config.
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.863 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:50:24 compute-1 nova_compute[230518]: 2025-10-02 12:50:24.864 2 DEBUG nova.virt.libvirt.guest [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-77e42bdf-1989-460b-aa47-82eb53d89208">
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   </source>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <target dev="vdc" bus="virtio"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <serial>77e42bdf-1989-460b-aa47-82eb53d89208</serial>
Oct 02 12:50:24 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Oct 02 12:50:24 compute-1 nova_compute[230518]: </disk>
Oct 02 12:50:24 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:50:25 compute-1 ceph-mon[80926]: pgmap v2286: 305 pgs: 305 active+clean; 533 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 152 op/s
Oct 02 12:50:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/221187614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.369 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409425.368921, 3b348c58-f179-41db-bd79-1fdea0ade389 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.370 2 DEBUG nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 3b348c58-f179-41db-bd79-1fdea0ade389 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.372 2 INFO nova.virt.libvirt.driver [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully detached device vdc from instance 3b348c58-f179-41db-bd79-1fdea0ade389 from the live domain config.
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.640 2 DEBUG nova.objects.instance [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'flavor' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.648 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.698 2 DEBUG oslo_concurrency.lockutils [None req-7431225d-a658-46b2-9bb3-ee2f9c8f2c6e e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:25 compute-1 nova_compute[230518]: 2025-10-02 12:50:25.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.946 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.947 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:26.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1660636347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1189858071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:26 compute-1 nova_compute[230518]: 2025-10-02 12:50:26.712 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updated VIF entry in instance network info cache for port 6f16f975-1155-4931-9798-72b46e8ca37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:26 compute-1 nova_compute[230518]: 2025-10-02 12:50:26.713 2 DEBUG nova.network.neutron [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:26 compute-1 nova_compute[230518]: 2025-10-02 12:50:26.742 2 DEBUG oslo_concurrency.lockutils [req-24e84c36-43da-4e05-acc4-615fc8127bc9 req-2980b960-2341-464b-bb6b-710fae839838 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-33780b49-b5a1-4f3f-a6c5-a00011d53718" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:27 compute-1 ceph-mon[80926]: pgmap v2287: 305 pgs: 305 active+clean; 551 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 12:50:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2239796276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:27 compute-1 nova_compute[230518]: 2025-10-02 12:50:27.744 2 DEBUG nova.compute.manager [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:27 compute-1 nova_compute[230518]: 2025-10-02 12:50:27.744 2 DEBUG nova.compute.manager [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:27 compute-1 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:27 compute-1 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:27 compute-1 nova_compute[230518]: 2025-10-02 12:50:27.745 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.083011) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428083082, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 474, "num_deletes": 251, "total_data_size": 555599, "memory_usage": 565128, "flush_reason": "Manual Compaction"}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Oct 02 12:50:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428253087, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 366281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54561, "largest_seqno": 55030, "table_properties": {"data_size": 363680, "index_size": 637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6515, "raw_average_key_size": 19, "raw_value_size": 358421, "raw_average_value_size": 1051, "num_data_blocks": 28, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409408, "oldest_key_time": 1759409408, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 170135 microseconds, and 1816 cpu microseconds.
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.253153) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 366281 bytes OK
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.253176) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486364) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486408) EVENT_LOG_v1 {"time_micros": 1759409428486398, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 552690, prev total WAL file size 552690, number of live WAL files 2.
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.487035) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(357KB)], [105(13MB)]
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428487085, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 14714752, "oldest_snapshot_seqno": -1}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 7920 keys, 12844925 bytes, temperature: kUnknown
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428691515, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 12844925, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12790121, "index_size": 33892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205573, "raw_average_key_size": 25, "raw_value_size": 12647169, "raw_average_value_size": 1596, "num_data_blocks": 1333, "num_entries": 7920, "num_filter_entries": 7920, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.691746) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12844925 bytes
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.899654) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.0 rd, 62.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.7 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(75.2) write-amplify(35.1) OK, records in: 8434, records dropped: 514 output_compression: NoCompression
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.899697) EVENT_LOG_v1 {"time_micros": 1759409428899681, "job": 66, "event": "compaction_finished", "compaction_time_micros": 204496, "compaction_time_cpu_micros": 28703, "output_level": 6, "num_output_files": 1, "total_output_size": 12844925, "num_input_records": 8434, "num_output_records": 7920, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428899947, "job": 66, "event": "table_file_deletion", "file_number": 107}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409428902614, "job": 66, "event": "table_file_deletion", "file_number": 105}
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.486921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:50:28.902651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:50:28 compute-1 nova_compute[230518]: 2025-10-02 12:50:28.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:29 compute-1 ceph-mon[80926]: pgmap v2288: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Oct 02 12:50:30 compute-1 nova_compute[230518]: 2025-10-02 12:50:30.058 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:30 compute-1 nova_compute[230518]: 2025-10-02 12:50:30.059 2 DEBUG nova.network.neutron [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:30 compute-1 nova_compute[230518]: 2025-10-02 12:50:30.153 2 DEBUG oslo_concurrency.lockutils [req-30480608-5666-45c2-8480-b99e52e0c120 req-04a5b64f-e402-4fde-aeb7-11bfcc617f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:30 compute-1 nova_compute[230518]: 2025-10-02 12:50:30.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:31 compute-1 ceph-mon[80926]: pgmap v2289: 305 pgs: 305 active+clean; 553 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Oct 02 12:50:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:50:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2356732532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:50:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:50:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:31 compute-1 podman[283170]: 2025-10-02 12:50:31.55979181 +0000 UTC m=+0.057611058 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 12:50:31 compute-1 podman[283171]: 2025-10-02 12:50:31.566958825 +0000 UTC m=+0.062140650 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 12:50:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:32.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2356732532' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:50:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/501928493' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:50:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:33 compute-1 ceph-mon[80926]: pgmap v2290: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Oct 02 12:50:33 compute-1 nova_compute[230518]: 2025-10-02 12:50:33.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:35 compute-1 ceph-mon[80926]: pgmap v2291: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1010 KiB/s wr, 105 op/s
Oct 02 12:50:35 compute-1 nova_compute[230518]: 2025-10-02 12:50:35.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:36.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 e317: 3 total, 3 up, 3 in
Oct 02 12:50:36 compute-1 ovn_controller[129257]: 2025-10-02T12:50:36Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:50:36 compute-1 ovn_controller[129257]: 2025-10-02T12:50:36Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:50:36 compute-1 nova_compute[230518]: 2025-10-02 12:50:36.974 2 DEBUG nova.compute.manager [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:36 compute-1 nova_compute[230518]: 2025-10-02 12:50:36.974 2 DEBUG nova.compute.manager [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing instance network info cache due to event network-changed-a568d61d-6863-474f-83f4-ba38b88de19a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:36 compute-1 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:36 compute-1 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:36 compute-1 nova_compute[230518]: 2025-10-02 12:50:36.975 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Refreshing network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:37 compute-1 ceph-mon[80926]: pgmap v2292: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 136 op/s
Oct 02 12:50:37 compute-1 ceph-mon[80926]: osdmap e317: 3 total, 3 up, 3 in
Oct 02 12:50:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:38.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:38 compute-1 nova_compute[230518]: 2025-10-02 12:50:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1115229878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:39 compute-1 ceph-mon[80926]: pgmap v2294: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Oct 02 12:50:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3816795570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:50:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:40 compute-1 nova_compute[230518]: 2025-10-02 12:50:40.406 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updated VIF entry in instance network info cache for port a568d61d-6863-474f-83f4-ba38b88de19a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:40 compute-1 nova_compute[230518]: 2025-10-02 12:50:40.406 2 DEBUG nova.network.neutron [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [{"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:40 compute-1 nova_compute[230518]: 2025-10-02 12:50:40.427 2 DEBUG oslo_concurrency.lockutils [req-30db5eaa-b14d-46f4-a1a5-7626c3596388 req-4784b5f0-b1e8-466d-9ddf-272f29a2f3e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3b348c58-f179-41db-bd79-1fdea0ade389" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:40 compute-1 nova_compute[230518]: 2025-10-02 12:50:40.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:41 compute-1 ceph-mon[80926]: pgmap v2295: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Oct 02 12:50:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:42.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:42.869 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:42.870 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:50:42 compute-1 nova_compute[230518]: 2025-10-02 12:50:42.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:43 compute-1 ceph-mon[80926]: pgmap v2296: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:43 compute-1 nova_compute[230518]: 2025-10-02 12:50:43.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:44.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:44.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:45 compute-1 ceph-mon[80926]: pgmap v2297: 305 pgs: 305 active+clean; 582 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 208 op/s
Oct 02 12:50:45 compute-1 nova_compute[230518]: 2025-10-02 12:50:45.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:45.872 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:46.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:46.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:47 compute-1 nova_compute[230518]: 2025-10-02 12:50:47.192 2 DEBUG nova.compute.manager [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:47 compute-1 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG nova.compute.manager [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing instance network info cache due to event network-changed-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:50:47 compute-1 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:50:47 compute-1 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:50:47 compute-1 nova_compute[230518]: 2025-10-02 12:50:47.193 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Refreshing network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:50:47 compute-1 ceph-mon[80926]: pgmap v2298: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.7 MiB/s wr, 213 op/s
Oct 02 12:50:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 e318: 3 total, 3 up, 3 in
Oct 02 12:50:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:48.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:48.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:48 compute-1 nova_compute[230518]: 2025-10-02 12:50:48.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:49 compute-1 ceph-mon[80926]: pgmap v2299: 305 pgs: 305 active+clean; 593 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 204 op/s
Oct 02 12:50:49 compute-1 ceph-mon[80926]: osdmap e318: 3 total, 3 up, 3 in
Oct 02 12:50:49 compute-1 nova_compute[230518]: 2025-10-02 12:50:49.985 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updated VIF entry in instance network info cache for port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:50:49 compute-1 nova_compute[230518]: 2025-10-02 12:50:49.985 2 DEBUG nova.network.neutron [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [{"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.008 2 DEBUG oslo_concurrency.lockutils [req-1c46be69-fb18-4123-92fe-35d745b072c8 req-d1dc9d84-79ce-44cc-8690-fb3e13f55372 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4b2aefbb-92cb-4a24-9ad2-884a12fa514c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.053 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.053 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.054 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.055 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Terminating instance
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.057 2 DEBUG nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:50:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:50.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:50:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:50.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:50:50 compute-1 kernel: tapa568d61d-68 (unregistering): left promiscuous mode
Oct 02 12:50:50 compute-1 NetworkManager[44960]: <info>  [1759409450.4351] device (tapa568d61d-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:50:50 compute-1 ovn_controller[129257]: 2025-10-02T12:50:50Z|00552|binding|INFO|Releasing lport a568d61d-6863-474f-83f4-ba38b88de19a from this chassis (sb_readonly=0)
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 ovn_controller[129257]: 2025-10-02T12:50:50Z|00553|binding|INFO|Setting lport a568d61d-6863-474f-83f4-ba38b88de19a down in Southbound
Oct 02 12:50:50 compute-1 ovn_controller[129257]: 2025-10-02T12:50:50Z|00554|binding|INFO|Removing iface tapa568d61d-68 ovn-installed in OVS
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.458 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:2f:46 10.100.0.7'], port_security=['fa:16:3e:fa:2f:46 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3b348c58-f179-41db-bd79-1fdea0ade389', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=a568d61d-6863-474f-83f4-ba38b88de19a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.459 138374 INFO neutron.agent.ovn.metadata.agent [-] Port a568d61d-6863-474f-83f4-ba38b88de19a in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa unbound from our chassis
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.461 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aa3b4df3-6044-4a53-8039-c9a5c05725aa
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.485 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f092925-d128-455b-b3b3-0386795e3d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct 02 12:50:50 compute-1 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000084.scope: Consumed 20.861s CPU time.
Oct 02 12:50:50 compute-1 systemd-machined[188247]: Machine qemu-64-instance-00000084 terminated.
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.513 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a9840349-afcf-4d15-bf21-ca31eaaaefb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.516 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c19bee2f-af89-4745-b46d-e8690d0179fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.539 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[48522065-235e-490e-b6f1-db2d6d16931b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12b98cfc-fece-4715-b079-3d5fb65925f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaa3b4df3-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:81:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714082, 'reachable_time': 29723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283221, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.580 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[347aa48c-f0f5-4553-bf77-4bacc089eebd]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714093, 'tstamp': 714093}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283222, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapaa3b4df3-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714097, 'tstamp': 714097}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283222, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.581 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.590 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa3b4df3-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaa3b4df3-60, col_values=(('external_ids', {'iface-id': 'fb7cdb79-68cf-4ad8-80ea-cb25da88eb6c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:50:50.591 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.700 2 INFO nova.virt.libvirt.driver [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Instance destroyed successfully.
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.700 2 DEBUG nova.objects.instance [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'resources' on Instance uuid 3b348c58-f179-41db-bd79-1fdea0ade389 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.713 2 DEBUG nova.virt.libvirt.vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1396980789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1396980789',id=132,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:48:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-g7qut09a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:48:07Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=3b348c58-f179-41db-bd79-1fdea0ade389,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.713 2 DEBUG nova.network.os_vif_util [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "a568d61d-6863-474f-83f4-ba38b88de19a", "address": "fa:16:3e:fa:2f:46", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa568d61d-68", "ovs_interfaceid": "a568d61d-6863-474f-83f4-ba38b88de19a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.714 2 DEBUG nova.network.os_vif_util [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.715 2 DEBUG os_vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.717 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa568d61d-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:50:50 compute-1 nova_compute[230518]: 2025-10-02 12:50:50.725 2 INFO os_vif [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:2f:46,bridge_name='br-int',has_traffic_filtering=True,id=a568d61d-6863-474f-83f4-ba38b88de19a,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa568d61d-68')
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.049 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG oslo_concurrency.lockutils [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.050 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:51 compute-1 nova_compute[230518]: 2025-10-02 12:50:51.051 2 DEBUG nova.compute.manager [req-1edd2d27-b9f6-400b-b3ca-3213025764d5 req-fc474fdd-46b0-40e7-a3ed-d35c9d201793 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-unplugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:50:51 compute-1 ceph-mon[80926]: pgmap v2301: 305 pgs: 305 active+clean; 563 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Oct 02 12:50:51 compute-1 podman[283253]: 2025-10-02 12:50:51.822198949 +0000 UTC m=+0.065672502 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 
9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:51 compute-1 podman[283252]: 2025-10-02 12:50:51.852428957 +0000 UTC m=+0.102159556 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:50:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:52.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:52 compute-1 sudo[283298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:52 compute-1 sudo[283298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-1 sudo[283298]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-1 sudo[283323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:50:52 compute-1 sudo[283323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-1 sudo[283323]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-1 sudo[283348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:50:52 compute-1 sudo[283348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:52 compute-1 sudo[283348]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:52 compute-1 sudo[283373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:50:52 compute-1 sudo[283373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:50:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.160 2 DEBUG nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.161 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.162 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.163 2 DEBUG oslo_concurrency.lockutils [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.163 2 DEBUG nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] No waiting events found dispatching network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.163 2 WARNING nova.compute.manager [req-f54b316a-a433-4ac9-83a1-7a5e2b9045c8 req-d52a0d09-1eda-45c7-8ec9-4de6f0d7a1bb 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received unexpected event network-vif-plugged-a568d61d-6863-474f-83f4-ba38b88de19a for instance with vm_state active and task_state deleting.
Oct 02 12:50:53 compute-1 sudo[283373]: pam_unix(sudo:session): session closed for user root
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.504 2 INFO nova.virt.libvirt.driver [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deleting instance files /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389_del
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.504 2 INFO nova.virt.libvirt.driver [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deletion of /var/lib/nova/instances/3b348c58-f179-41db-bd79-1fdea0ade389_del complete
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.571 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 3.51 seconds to destroy the instance on the hypervisor.
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG oslo.service.loopingcall [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.572 2 DEBUG nova.network.neutron [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:50:53 compute-1 ceph-mon[80926]: pgmap v2302: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:50:53 compute-1 nova_compute[230518]: 2025-10-02 12:50:53.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:54.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:54 compute-1 nova_compute[230518]: 2025-10-02 12:50:54.708 2 DEBUG nova.network.neutron [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:50:54 compute-1 nova_compute[230518]: 2025-10-02 12:50:54.761 2 INFO nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 1.19 seconds to deallocate network for instance.
Oct 02 12:50:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:50:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:50:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:50:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:50:55 compute-1 ceph-mon[80926]: pgmap v2303: 305 pgs: 305 active+clean; 535 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.169 2 INFO nova.compute.manager [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Took 0.41 seconds to detach 1 volumes for instance.
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.264 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.265 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.331 2 DEBUG nova.compute.manager [req-1bcd25cf-2e0d-4347-9ec9-1ef3ea2016b9 req-cdd838c3-8086-4d56-ac8a-b04c142a2482 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Received event network-vif-deleted-a568d61d-6863-474f-83f4-ba38b88de19a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.375 2 DEBUG oslo_concurrency.processutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:50:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2193676697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.845 2 DEBUG oslo_concurrency.processutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.852 2 DEBUG nova.compute.provider_tree [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.869 2 DEBUG nova.scheduler.client.report [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.896 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:55 compute-1 nova_compute[230518]: 2025-10-02 12:50:55.929 2 INFO nova.scheduler.client.report [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Deleted allocations for instance 3b348c58-f179-41db-bd79-1fdea0ade389
Oct 02 12:50:56 compute-1 nova_compute[230518]: 2025-10-02 12:50:56.007 2 DEBUG oslo_concurrency.lockutils [None req-cd0fd3ec-0438-4e5b-8f6d-6702a2374c55 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "3b348c58-f179-41db-bd79-1fdea0ade389" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:50:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:56.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:56.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2193676697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:50:57 compute-1 ceph-mon[80926]: pgmap v2304: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 382 KiB/s rd, 2.6 MiB/s wr, 117 op/s
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.122 2 INFO nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Rebuilding instance
Oct 02 12:50:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:50:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:50:58.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:50:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:50:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:50:58.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.407 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.420 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.468 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_requests' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.493 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.506 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'resources' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.517 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'migration_context' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.530 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.533 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:50:58 compute-1 nova_compute[230518]: 2025-10-02 12:50:58.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:50:59 compute-1 ceph-mon[80926]: pgmap v2305: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 409 KiB/s rd, 1.5 MiB/s wr, 109 op/s
Oct 02 12:51:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:00.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:00.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1132038165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:00 compute-1 nova_compute[230518]: 2025-10-02 12:51:00.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:01 compute-1 nova_compute[230518]: 2025-10-02 12:51:01.550 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance shutdown successfully after 3 seconds.
Oct 02 12:51:01 compute-1 podman[283452]: 2025-10-02 12:51:01.830844576 +0000 UTC m=+0.072741293 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 02 12:51:01 compute-1 podman[283451]: 2025-10-02 12:51:01.840259571 +0000 UTC m=+0.081718574 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct 02 12:51:02 compute-1 ceph-mon[80926]: pgmap v2306: 305 pgs: 305 active+clean; 538 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 1.3 MiB/s wr, 94 op/s
Oct 02 12:51:02 compute-1 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct 02 12:51:02 compute-1 NetworkManager[44960]: <info>  [1759409462.1925] device (tap6f16f975-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:51:02 compute-1 ovn_controller[129257]: 2025-10-02T12:51:02Z|00555|binding|INFO|Releasing lport 6f16f975-1155-4931-9798-72b46e8ca37f from this chassis (sb_readonly=0)
Oct 02 12:51:02 compute-1 ovn_controller[129257]: 2025-10-02T12:51:02Z|00556|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f down in Southbound
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 ovn_controller[129257]: 2025-10-02T12:51:02Z|00557|binding|INFO|Removing iface tap6f16f975-11 ovn-installed in OVS
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.217 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.219 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 unbound from our chassis
Oct 02 12:51:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.221 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea35968-5cdb-414e-9226-6ba534628944, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:51:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b58363d8-c15c-4933-9c0d-7471de2f25c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:02.223 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace which is not needed anymore
Oct 02 12:51:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 02 12:51:02 compute-1 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000088.scope: Consumed 14.900s CPU time.
Oct 02 12:51:02 compute-1 systemd-machined[188247]: Machine qemu-65-instance-00000088 terminated.
Oct 02 12:51:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:02.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:02 compute-1 kernel: tap6f16f975-11: entered promiscuous mode
Oct 02 12:51:02 compute-1 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.386 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.392 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.393 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:50:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.394 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.394 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.395 2 DEBUG os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.397 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f16f975-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:02 compute-1 nova_compute[230518]: 2025-10-02 12:51:02.401 2 INFO os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')
Oct 02 12:51:02 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : haproxy version is 2.8.14-c23fe91
Oct 02 12:51:02 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [NOTICE]   (283110) : path to executable is /usr/sbin/haproxy
Oct 02 12:51:02 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [WARNING]  (283110) : Exiting Master process...
Oct 02 12:51:02 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [ALERT]    (283110) : Current worker (283112) exited with code 143 (Terminated)
Oct 02 12:51:02 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[283104]: [WARNING]  (283110) : All workers exited. Exiting... (0)
Oct 02 12:51:02 compute-1 systemd[1]: libpod-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope: Deactivated successfully.
Oct 02 12:51:02 compute-1 podman[283514]: 2025-10-02 12:51:02.509634471 +0000 UTC m=+0.190506447 container died 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:51:03 compute-1 sudo[283567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:51:03 compute-1 sudo[283567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:03 compute-1 sudo[283567]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:03 compute-1 sudo[283592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:51:03 compute-1 sudo[283592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:51:03 compute-1 sudo[283592]: pam_unix(sudo:session): session closed for user root
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.238 2 DEBUG nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.239 2 DEBUG oslo_concurrency.lockutils [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.240 2 DEBUG nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.240 2 WARNING nova.compute.manager [req-602c9100-37a4-40f9-a780-713e0e2d382e req-25e879e9-1af9-497b-b59a-2634fcc678ae 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuilding.
Oct 02 12:51:03 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc-userdata-shm.mount: Deactivated successfully.
Oct 02 12:51:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-d2275299b59a6f31b3f5c4c1bc1ae63291b4b9fabe72845925a94cee91153691-merged.mount: Deactivated successfully.
Oct 02 12:51:03 compute-1 ceph-mon[80926]: pgmap v2307: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 274 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:51:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:51:03 compute-1 nova_compute[230518]: 2025-10-02 12:51:03.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:03 compute-1 podman[283514]: 2025-10-02 12:51:03.973947231 +0000 UTC m=+1.654819207 container cleanup 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:51:03 compute-1 systemd[1]: libpod-conmon-739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc.scope: Deactivated successfully.
Oct 02 12:51:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:04.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:04.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:04 compute-1 podman[283622]: 2025-10-02 12:51:04.308743264 +0000 UTC m=+0.309169711 container remove 739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[389b942f-71bf-44f7-b3f0-025745638447]: (4, ('Thu Oct  2 12:51:02 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc)\n739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc\nThu Oct  2 12:51:03 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc)\n739de0dd35a17676b93417a95883d3546509af2e4e5095cc7ecdf9ea58be73dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8213be-6c43-4527-a1fd-c133622c0d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.321 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:04 compute-1 nova_compute[230518]: 2025-10-02 12:51:04.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:04 compute-1 kernel: tap1ea35968-50: left promiscuous mode
Oct 02 12:51:04 compute-1 nova_compute[230518]: 2025-10-02 12:51:04.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.346 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a08e1403-677e-4464-8f02-96e1b97f64ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.384 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3fd3ee-ab78-4a35-9421-eb0aca46fbd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48939f12-ee73-4b81-a17e-b86a251bf064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5bd1b0-ed29-4089-8fd4-2e817e269ac6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728195, 'reachable_time': 36493, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283639, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:04 compute-1 systemd[1]: run-netns-ovnmeta\x2d1ea35968\x2d5cdb\x2d414e\x2d9226\x2d6ba534628944.mount: Deactivated successfully.
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.407 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:51:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:04.408 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[430cc590-e4c6-4e9f-b5ee-1c3dc5948e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:05 compute-1 ceph-mon[80926]: pgmap v2308: 305 pgs: 305 active+clean; 574 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 1.5 MiB/s wr, 53 op/s
Oct 02 12:51:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4145012872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1436376951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:51:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:51:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.302 2 DEBUG nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.303 2 DEBUG oslo_concurrency.lockutils [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.304 2 DEBUG nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.304 2 WARNING nova.compute.manager [req-ad15ad54-0ca8-4187-985e-deacdedc5652 req-a2a288ea-4bdb-4d65-b761-fb8b6178313b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuilding.
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409450.6974986, 3b348c58-f179-41db-bd79-1fdea0ade389 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.699 2 INFO nova.compute.manager [-] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] VM Stopped (Lifecycle Event)
Oct 02 12:51:05 compute-1 nova_compute[230518]: 2025-10-02 12:51:05.725 2 DEBUG nova.compute.manager [None req-e8616022-c2b1-4a92-9546-fe4eddde2cfc - - - - - -] [instance: 3b348c58-f179-41db-bd79-1fdea0ade389] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.043 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.044 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.044 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.045 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Terminating instance
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.046 2 DEBUG nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:06.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1287988797' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2214410976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4258114913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:06 compute-1 kernel: tapbf58273a-e5 (unregistering): left promiscuous mode
Oct 02 12:51:06 compute-1 NetworkManager[44960]: <info>  [1759409466.4339] device (tapbf58273a-e5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:51:06 compute-1 ovn_controller[129257]: 2025-10-02T12:51:06Z|00558|binding|INFO|Releasing lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 from this chassis (sb_readonly=0)
Oct 02 12:51:06 compute-1 ovn_controller[129257]: 2025-10-02T12:51:06Z|00559|binding|INFO|Setting lport bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 down in Southbound
Oct 02 12:51:06 compute-1 ovn_controller[129257]: 2025-10-02T12:51:06Z|00560|binding|INFO|Removing iface tapbf58273a-e5 ovn-installed in OVS
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.463 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:04:35 10.100.0.9'], port_security=['fa:16:3e:41:04:35 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4b2aefbb-92cb-4a24-9ad2-884a12fa514c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3e0300f3cf5493d8a9e62e2c4a95767', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b476c8f5-f8e9-416f-ac80-6e4f069ebf34', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19951b97-567d-403e-9a99-3dd9660c4a7b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.464 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 in datapath aa3b4df3-6044-4a53-8039-c9a5c05725aa unbound from our chassis
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.465 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aa3b4df3-6044-4a53-8039-c9a5c05725aa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.467 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[827d1f0e-d1f9-43b6-90b7-cf13055015ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.469 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa namespace which is not needed anymore
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:06 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct 02 12:51:06 compute-1 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000082.scope: Consumed 21.971s CPU time.
Oct 02 12:51:06 compute-1 systemd-machined[188247]: Machine qemu-63-instance-00000082 terminated.
Oct 02 12:51:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4101096271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.559 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:06 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : haproxy version is 2.8.14-c23fe91
Oct 02 12:51:06 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [NOTICE]   (281595) : path to executable is /usr/sbin/haproxy
Oct 02 12:51:06 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [WARNING]  (281595) : Exiting Master process...
Oct 02 12:51:06 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [ALERT]    (281595) : Current worker (281597) exited with code 143 (Terminated)
Oct 02 12:51:06 compute-1 neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa[281591]: [WARNING]  (281595) : All workers exited. Exiting... (0)
Oct 02 12:51:06 compute-1 systemd[1]: libpod-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope: Deactivated successfully.
Oct 02 12:51:06 compute-1 conmon[281591]: conmon b28b990e2fcd86cc0c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope/container/memory.events
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.683 2 INFO nova.virt.libvirt.driver [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Instance destroyed successfully.
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.683 2 DEBUG nova.objects.instance [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lazy-loading 'resources' on Instance uuid 4b2aefbb-92cb-4a24-9ad2-884a12fa514c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:06 compute-1 podman[283683]: 2025-10-02 12:51:06.689697432 +0000 UTC m=+0.116799356 container died b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.701 2 DEBUG nova.virt.libvirt.vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1530906949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1530906949',id=130,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN8AO0n4F9qQHfktAb1KqUpFZGDIBw8Q+DMA6Gtgwbe4fJSHZtT9yxONp57Pu+/JlMfK0hzt7rHvQAXjHsqixRJ8kNgVzAz0UxxllE90LKBM9NxuJLShf+JD7SBBSy6srw==',key_name='tempest-TestInstancesWithCinderVolumes-1494763419',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:47:58Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d3e0300f3cf5493d8a9e62e2c4a95767',ramdisk_id='',reservation_id='r-9rd4q9z5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-621751307',owner_user_name='tempest-TestInstancesWithCinderVolumes-621751307-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:47:58Z,user_data=None,user_id='e3cd62a3208649c183d3fc2edc1c0f18',uuid=4b2aefbb-92cb-4a24-9ad2-884a12fa514c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.702 2 DEBUG nova.network.os_vif_util [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converting VIF {"id": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "address": "fa:16:3e:41:04:35", "network": {"id": "aa3b4df3-6044-4a53-8039-c9a5c05725aa", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-47591645-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d3e0300f3cf5493d8a9e62e2c4a95767", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf58273a-e5", "ovs_interfaceid": "bf58273a-e5f6-4e36-bb1e-7ca0c2462d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.702 2 DEBUG nova.network.os_vif_util [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.703 2 DEBUG os_vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf58273a-e5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.711 2 INFO os_vif [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:04:35,bridge_name='br-int',has_traffic_filtering=True,id=bf58273a-e5f6-4e36-bb1e-7ca0c2462d54,network=Network(aa3b4df3-6044-4a53-8039-c9a5c05725aa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf58273a-e5')
Oct 02 12:51:06 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9-userdata-shm.mount: Deactivated successfully.
Oct 02 12:51:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-3c561c2b937b63684f4fa80d6a77b8b706c94b8f7ce5c4ada629e45417aa2f7f-merged.mount: Deactivated successfully.
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.756 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.756 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.760 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.760 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:51:06 compute-1 podman[283683]: 2025-10-02 12:51:06.764723126 +0000 UTC m=+0.191825040 container cleanup b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:06 compute-1 systemd[1]: libpod-conmon-b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9.scope: Deactivated successfully.
Oct 02 12:51:06 compute-1 podman[283744]: 2025-10-02 12:51:06.87039379 +0000 UTC m=+0.083570183 container remove b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.877 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[794d1be2-2af2-407d-b12d-2eafbc50a6ea]: (4, ('Thu Oct  2 12:51:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9)\nb28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9\nThu Oct  2 12:51:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa (b28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9)\nb28b990e2fcd86cc0c6d1ad70110f5ea58e185effc1cf30910ba97d5ee6fa6d9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7b368b73-bdac-4564-beb0-97a2bfe3da25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.880 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa3b4df3-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:06 compute-1 kernel: tapaa3b4df3-60: left promiscuous mode
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.897 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52420c-9c36-4ed2-a03c-3eb0006905c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a19a2fca-9bb0-409f-98ac-b9756a2a7b6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.930 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.931 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db28439d-7896-4760-832b-181974bd1516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.931 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4375MB free_disk=20.880725860595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.931 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:06 compute-1 nova_compute[230518]: 2025-10-02 12:51:06.932 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.946 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[581c39a6-6c62-460b-98c2-fc6ceda8fe6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714075, 'reachable_time': 25153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283760, 'error': None, 'target': 'ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.948 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aa3b4df3-6044-4a53-8039-c9a5c05725aa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:51:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:06.948 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c38dbc3d-2e3d-4cc5-8be1-0633407ecf14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:06 compute-1 systemd[1]: run-netns-ovnmeta\x2daa3b4df3\x2d6044\x2d4a53\x2d8039\x2dc9a5c05725aa.mount: Deactivated successfully.
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.102 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.160 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.185 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.186 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.206 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.229 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.271 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting instance files /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.271 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deletion of /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del complete
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.288 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:07 compute-1 ceph-mon[80926]: pgmap v2309: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 1.9 MiB/s wr, 63 op/s
Oct 02 12:51:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4101096271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.554 2 WARNING nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During detach_volume, instance disappeared.: nova.exception.InstanceNotFound: Instance 33780b49-b5a1-4f3f-a6c5-a00011d53718 could not be found.
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.571 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.572 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.572 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG oslo_concurrency.lockutils [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.573 2 DEBUG nova.compute.manager [req-f58648ad-4a49-48ec-976f-d58b44fdc70b req-ac65987d-3e95-41d5-adff-634728b6ce45 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-unplugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:51:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3298766319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.700 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.705 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.726 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.769 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:51:07 compute-1 nova_compute[230518]: 2025-10-02 12:51:07.769 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.285 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Preparing to wait for external event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.286 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:08.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3298766319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2420490110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.750 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.750 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.984 2 INFO nova.virt.libvirt.driver [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deleting instance files /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_del
Oct 02 12:51:08 compute-1 nova_compute[230518]: 2025-10-02 12:51:08.984 2 INFO nova.virt.libvirt.driver [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deletion of /var/lib/nova/instances/4b2aefbb-92cb-4a24-9ad2-884a12fa514c_del complete
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.039 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 2.99 seconds to destroy the instance on the hypervisor.
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.039 2 DEBUG oslo.service.loopingcall [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.039 2 DEBUG nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.040 2 DEBUG nova.network.neutron [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:09 compute-1 ceph-mon[80926]: pgmap v2310: 305 pgs: 305 active+clean; 584 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:51:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3069559069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.690 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.691 2 DEBUG oslo_concurrency.lockutils [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.692 2 DEBUG nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] No waiting events found dispatching network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.692 2 WARNING nova.compute.manager [req-239d7aac-ed86-44e6-b9a2-b752d94f9e4f req-eb40fde7-504b-41a8-8f6f-76b61e02b6df 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received unexpected event network-vif-plugged-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 for instance with vm_state active and task_state deleting.
Oct 02 12:51:09 compute-1 nova_compute[230518]: 2025-10-02 12:51:09.991 2 DEBUG nova.network.neutron [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.021 2 INFO nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 0.98 seconds to deallocate network for instance.
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.083 2 DEBUG nova.compute.manager [req-c0f2c6e7-55bf-4856-8cde-5ffa8225d4d8 req-68d8fb20-de85-4a92-ac15-e99bc15eefb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Received event network-vif-deleted-bf58273a-e5f6-4e36-bb1e-7ca0c2462d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.244 2 INFO nova.compute.manager [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Took 0.22 seconds to detach 1 volumes for instance.
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.293 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.293 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:10.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.343 2 DEBUG oslo_concurrency.processutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:10 compute-1 ceph-mon[80926]: pgmap v2311: 305 pgs: 305 active+clean; 583 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct 02 12:51:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:51:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3025908862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.808 2 DEBUG oslo_concurrency.processutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.817 2 DEBUG nova.compute.provider_tree [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.833 2 DEBUG nova.scheduler.client.report [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.856 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.887 2 INFO nova.scheduler.client.report [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Deleted allocations for instance 4b2aefbb-92cb-4a24-9ad2-884a12fa514c
Oct 02 12:51:10 compute-1 nova_compute[230518]: 2025-10-02 12:51:10.952 2 DEBUG oslo_concurrency.lockutils [None req-caa22ef0-f21a-4f83-bce0-5772d5ebeac7 e3cd62a3208649c183d3fc2edc1c0f18 d3e0300f3cf5493d8a9e62e2c4a95767 - - default default] Lock "4b2aefbb-92cb-4a24-9ad2-884a12fa514c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:11 compute-1 nova_compute[230518]: 2025-10-02 12:51:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:11 compute-1 nova_compute[230518]: 2025-10-02 12:51:11.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3025908862' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:12 compute-1 nova_compute[230518]: 2025-10-02 12:51:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:12.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1944792511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:12 compute-1 ceph-mon[80926]: pgmap v2312: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 162 op/s
Oct 02 12:51:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3728589029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:51:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:51:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4031260926' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1485776948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:13 compute-1 nova_compute[230518]: 2025-10-02 12:51:13.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:14 compute-1 nova_compute[230518]: 2025-10-02 12:51:14.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:14.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:51:15 compute-1 ceph-mon[80926]: pgmap v2313: 305 pgs: 305 active+clean; 599 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Oct 02 12:51:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1009412585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:51:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:51:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.698 2 DEBUG nova.compute.manager [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.698 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG oslo_concurrency.lockutils [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG nova.compute.manager [req-f5c461f1-d23a-439b-9d40-bab509e5253d req-7d1d6a1d-c0d5-4c63-a853-389cb9486f13 7720aaf0b6dc41d38f1e03222fa6a93b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Processing event volume-reimaged-a7bdd212-0d34-40b8-9af9-06388d215028 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.699 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 6 seconds for volume-reimaged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.791 2 INFO nova.virt.block_device [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Booting with volume a7bdd212-0d34-40b8-9af9-06388d215028 at /dev/vda
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.956 2 DEBUG os_brick.utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.957 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.968 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.969 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9bae59-a7a7-429b-831c-90bb7137ad82]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.970 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.976 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.976 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f5ddfecc-d99f-46fc-80a1-7d9c16c5d1d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.978 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.985 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.985 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[1da3156b-c4b3-438e-8446-92cd58fc9036]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.986 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[77a078ff-d752-4436-a145-0a93d57bdb57]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:15 compute-1 nova_compute[230518]: 2025-10-02 12:51:15.987 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.016 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.018 2 DEBUG os_brick.initiator.connectors.lightos [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.019 2 DEBUG os_brick.utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.019 2 DEBUG nova.virt.block_device [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating existing volume attachment record: 1860ab7f-c635-4488-b2d3-849571b3b102 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 12:51:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2592219865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:16 compute-1 nova_compute[230518]: 2025-10-02 12:51:16.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.371 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.372 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating image(s)
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.372 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.372 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Ensure instance console log exists: /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.373 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.373 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.374 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.376 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start _get_guest_xml network_info=[{"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'attached_at': '', 'detached_at': '', 'volume_id': 'a7bdd212-0d34-40b8-9af9-06388d215028', 'serial': 'a7bdd212-0d34-40b8-9af9-06388d215028'}, 'boot_index': 0, 'attachment_id': '1860ab7f-c635-4488-b2d3-849571b3b102', 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.381 2 WARNING nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.385 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409462.3843558, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.386 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Stopped (Lifecycle Event)
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.390 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.391 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.393 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.394 2 DEBUG nova.virt.libvirt.host [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.394 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.395 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:54Z,direct_url=<?>,disk_format='qcow2',id=52ef509e-0e22-464e-93c9-3ddcf574cd64,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.395 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.396 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.virt.hardware [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.397 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:17 compute-1 nova_compute[230518]: 2025-10-02 12:51:17.400 2 DEBUG nova.compute.manager [None req-c0d72535-549c-48e6-977f-e156f998b85c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:17 compute-1 ceph-mon[80926]: pgmap v2314: 305 pgs: 305 active+clean; 587 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 185 op/s
Oct 02 12:51:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2108244912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:18 compute-1 nova_compute[230518]: 2025-10-02 12:51:18.256 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:18 compute-1 nova_compute[230518]: 2025-10-02 12:51:18.263 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:18.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:51:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:51:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:51:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/100893316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:18 compute-1 nova_compute[230518]: 2025-10-02 12:51:18.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:51:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4223872734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.244 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.981s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.270 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35'
,image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.271 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.272 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.274 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <uuid>33780b49-b5a1-4f3f-a6c5-a00011d53718</uuid>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <name>instance-00000088</name>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsV293TestJSON-server-1954698310</nova:name>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:51:17</nova:creationTime>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:user uuid="af2648eefb594bc49309cccf408f7ae1">tempest-ServerActionsV293TestJSON-365577023-project-member</nova:user>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:project uuid="1308a7eb298f49baaeaf3dc3a6acf592">tempest-ServerActionsV293TestJSON-365577023</nova:project>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <nova:port uuid="6f16f975-1155-4931-9798-72b46e8ca37f">
Oct 02 12:51:19 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <system>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="serial">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="uuid">33780b49-b5a1-4f3f-a6c5-a00011d53718</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </system>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <os>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </os>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <features>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </features>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config">
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </source>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-a7bdd212-0d34-40b8-9af9-06388d215028">
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </source>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:51:19 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <serial>a7bdd212-0d34-40b8-9af9-06388d215028</serial>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:9e:0c:7f"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <target dev="tap6f16f975-11"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/console.log" append="off"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <video>
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </video>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:51:19 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:51:19 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:51:19 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:51:19 compute-1 nova_compute[230518]: </domain>
Oct 02 12:51:19 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.275 2 DEBUG nova.virt.libvirt.vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:50:20Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35'
,image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:51:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.276 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.276 2 DEBUG nova.network.os_vif_util [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.277 2 DEBUG os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f16f975-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.281 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f16f975-11, col_values=(('external_ids', {'iface-id': '6f16f975-1155-4931-9798-72b46e8ca37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:0c:7f', 'vm-uuid': '33780b49-b5a1-4f3f-a6c5-a00011d53718'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:19 compute-1 NetworkManager[44960]: <info>  [1759409479.2842] manager: (tap6f16f975-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.289 2 INFO os_vif [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.423 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.424 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.424 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] No VIF found with MAC fa:16:3e:9e:0c:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.424 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Using config drive
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.449 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.474 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.503 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'keypairs' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.934 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Creating config drive at /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config
Oct 02 12:51:19 compute-1 nova_compute[230518]: 2025-10-02 12:51:19.939 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacqx0puf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:20 compute-1 nova_compute[230518]: 2025-10-02 12:51:20.076 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacqx0puf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:20 compute-1 ceph-mon[80926]: pgmap v2315: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 226 op/s
Oct 02 12:51:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/100893316' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:20.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:20 compute-1 nova_compute[230518]: 2025-10-02 12:51:20.501 2 DEBUG nova.storage.rbd_utils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] rbd image 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:51:20 compute-1 nova_compute[230518]: 2025-10-02 12:51:20.507 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:51:21 compute-1 ceph-mon[80926]: pgmap v2316: 305 pgs: 305 active+clean; 380 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.6 MiB/s wr, 267 op/s
Oct 02 12:51:21 compute-1 nova_compute[230518]: 2025-10-02 12:51:21.681 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409466.6800296, 4b2aefbb-92cb-4a24-9ad2-884a12fa514c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:21 compute-1 nova_compute[230518]: 2025-10-02 12:51:21.682 2 INFO nova.compute.manager [-] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] VM Stopped (Lifecycle Event)
Oct 02 12:51:21 compute-1 nova_compute[230518]: 2025-10-02 12:51:21.703 2 DEBUG nova.compute.manager [None req-8d7a16f7-6b84-44f1-8c8b-1db0e20de2ed - - - - - -] [instance: 4b2aefbb-92cb-4a24-9ad2-884a12fa514c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.292 2 DEBUG oslo_concurrency.processutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config 33780b49-b5a1-4f3f-a6c5-a00011d53718_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.785s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.292 2 INFO nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting local config drive /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718/disk.config because it was imported into RBD.
Oct 02 12:51:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:22.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:22 compute-1 kernel: tap6f16f975-11: entered promiscuous mode
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.3544] manager: (tap6f16f975-11): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Oct 02 12:51:22 compute-1 ovn_controller[129257]: 2025-10-02T12:51:22Z|00561|binding|INFO|Claiming lport 6f16f975-1155-4931-9798-72b46e8ca37f for this chassis.
Oct 02 12:51:22 compute-1 ovn_controller[129257]: 2025-10-02T12:51:22Z|00562|binding|INFO|6f16f975-1155-4931-9798-72b46e8ca37f: Claiming fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.362 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.363 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 bound to our chassis
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.366 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:51:22 compute-1 systemd-udevd[283951]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.378 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[06f16751-7390-4d20-8b76-3f005e66396b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.379 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ea35968-51 in ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.381 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ea35968-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.381 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc74ece2-e4a3-485d-93ff-b021c8af77fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.382 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9087b46c-15b7-42f5-9212-864548ddabe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.396 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[d53c4566-eb0b-4110-b629-2898466f6beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_controller[129257]: 2025-10-02T12:51:22Z|00563|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f ovn-installed in OVS
Oct 02 12:51:22 compute-1 ovn_controller[129257]: 2025-10-02T12:51:22Z|00564|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f up in Southbound
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.3996] device (tap6f16f975-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.4002] device (tap6f16f975-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 systemd-machined[188247]: New machine qemu-66-instance-00000088.
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 systemd[1]: Started Virtual Machine qemu-66-instance-00000088.
Oct 02 12:51:22 compute-1 podman[283918]: 2025-10-02 12:51:22.423293578 +0000 UTC m=+0.092939557 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[018a8215-20ce-49dc-9486-dbe7a5370553]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.457 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7a00cd-16d1-4467-944a-cfc3605220ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 podman[283914]: 2025-10-02 12:51:22.461043362 +0000 UTC m=+0.130065732 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.463 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b026bf46-90e0-49d0-967a-f72d87b36e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 systemd-udevd[283967]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.4660] manager: (tap1ea35968-50): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.496 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[645194f1-2202-4ec4-957e-32a90e01f966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.500 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[961fb1f4-e036-4b03-a5b1-88e2c00c571a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.5233] device (tap1ea35968-50): carrier: link connected
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.531 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf91713-542c-427e-bf51-f36e4ee7cc57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.553 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7360662-6958-4d1e-8855-89cc14c880cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734608, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284002, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6059587-108c-4049-899f-8eb36a1dd24e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:b578'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 734608, 'tstamp': 734608}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284003, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.591 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f52689d-e62a-4480-bea6-1bffb4affa4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ea35968-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:b5:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 173], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734608, 'reachable_time': 39730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284004, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[837ca5d2-e8ca-4c92-847d-02907cfdfedb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.662 2 DEBUG nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG oslo_concurrency.lockutils [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.663 2 DEBUG nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.664 2 WARNING nova.compute.manager [req-09fab892-9022-4374-9e46-5ff03a81e17d req-7003f313-02de-448a-9b98-53675c02ba95 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.686 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b8dcf584-ddd4-4e2b-a7c4-6de6f2dd713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.687 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.687 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.688 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ea35968-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 NetworkManager[44960]: <info>  [1759409482.6917] manager: (tap1ea35968-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Oct 02 12:51:22 compute-1 kernel: tap1ea35968-50: entered promiscuous mode
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.697 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ea35968-50, col_values=(('external_ids', {'iface-id': '656124c9-fbda-4e47-b94b-fbe1ed24070e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ovn_controller[129257]: 2025-10-02T12:51:22Z|00565|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.703 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[acd0fef8-4f67-4c46-94d5-ed41498ac1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.705 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/1ea35968-5cdb-414e-9226-6ba534628944.pid.haproxy
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 1ea35968-5cdb-414e-9226-6ba534628944
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:51:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:22.706 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'env', 'PROCESS_TAG=haproxy-1ea35968-5cdb-414e-9226-6ba534628944', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ea35968-5cdb-414e-9226-6ba534628944.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:51:22 compute-1 nova_compute[230518]: 2025-10-02 12:51:22.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 e319: 3 total, 3 up, 3 in
Oct 02 12:51:23 compute-1 ceph-mon[80926]: pgmap v2317: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 276 op/s
Oct 02 12:51:23 compute-1 podman[284052]: 2025-10-02 12:51:23.058432414 +0000 UTC m=+0.025501611 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:51:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:23 compute-1 podman[284052]: 2025-10-02 12:51:23.437332301 +0000 UTC m=+0.404401478 container create 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:51:23 compute-1 systemd[1]: Started libpod-conmon-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope.
Oct 02 12:51:23 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:51:23 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c0bf0e8b041f6f676207de558f8da163cf1ee93caa7b027803650b8a4521d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:51:23 compute-1 podman[284052]: 2025-10-02 12:51:23.740032258 +0000 UTC m=+0.707101455 container init 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:23 compute-1 podman[284052]: 2025-10-02 12:51:23.744990123 +0000 UTC m=+0.712059280 container start 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:51:23 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : New worker (284076) forked
Oct 02 12:51:23 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : Loading success.
Oct 02 12:51:23 compute-1 nova_compute[230518]: 2025-10-02 12:51:23.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:24.295 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:51:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:24.297 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:51:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:24 compute-1 ceph-mon[80926]: osdmap e319: 3 total, 3 up, 3 in
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.996 2 DEBUG nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.997 2 DEBUG oslo_concurrency.lockutils [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.998 2 DEBUG nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:51:24 compute-1 nova_compute[230518]: 2025-10-02 12:51:24.998 2 WARNING nova.compute.manager [req-68c0ef84-1aa2-4366-bf66-18b8c252c570 req-548d69c1-c3bf-42cf-b45a-54555b99cd5e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state rebuild_spawning.
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.125 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409485.1250057, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.126 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Resumed (Lifecycle Event)
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.128 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.129 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.132 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance spawned successfully.
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.132 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.148 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.155 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.159 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.159 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.160 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.160 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.161 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.162 2 DEBUG nova.virt.libvirt.driver [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.199 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.199 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409485.125977, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.199 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Started (Lifecycle Event)
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.226 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.231 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.248 2 DEBUG nova.compute.manager [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.261 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.324 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.325 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.325 2 DEBUG nova.objects.instance [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct 02 12:51:25 compute-1 nova_compute[230518]: 2025-10-02 12:51:25.401 2 DEBUG oslo_concurrency.lockutils [None req-f5c461f1-d23a-439b-9d40-bab509e5253d af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:25 compute-1 ceph-mon[80926]: pgmap v2319: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 223 op/s
Oct 02 12:51:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:51:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.948 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:51:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:51:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:26.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:26 compute-1 ceph-mon[80926]: pgmap v2320: 305 pgs: 305 active+clean; 367 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Oct 02 12:51:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:28 compute-1 nova_compute[230518]: 2025-10-02 12:51:28.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:29 compute-1 nova_compute[230518]: 2025-10-02 12:51:29.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:29 compute-1 ceph-mon[80926]: pgmap v2321: 305 pgs: 305 active+clean; 355 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.0 MiB/s wr, 233 op/s
Oct 02 12:51:29 compute-1 ovn_controller[129257]: 2025-10-02T12:51:29Z|00566|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct 02 12:51:29 compute-1 nova_compute[230518]: 2025-10-02 12:51:29.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:30.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:31 compute-1 ceph-mon[80926]: pgmap v2322: 305 pgs: 305 active+clean; 361 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 189 op/s
Oct 02 12:51:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:32.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 e320: 3 total, 3 up, 3 in
Oct 02 12:51:32 compute-1 podman[284110]: 2025-10-02 12:51:32.80331032 +0000 UTC m=+0.050176436 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 02 12:51:32 compute-1 podman[284109]: 2025-10-02 12:51:32.803294779 +0000 UTC m=+0.050856296 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:51:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:33 compute-1 ovn_controller[129257]: 2025-10-02T12:51:33Z|00567|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct 02 12:51:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:51:33.299 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:51:33 compute-1 nova_compute[230518]: 2025-10-02 12:51:33.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:33 compute-1 ceph-mon[80926]: pgmap v2323: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:33 compute-1 ceph-mon[80926]: osdmap e320: 3 total, 3 up, 3 in
Oct 02 12:51:33 compute-1 nova_compute[230518]: 2025-10-02 12:51:33.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:34 compute-1 nova_compute[230518]: 2025-10-02 12:51:34.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:34.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:35 compute-1 ceph-mon[80926]: pgmap v2325: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 198 op/s
Oct 02 12:51:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:36.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:36.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:37 compute-1 ceph-mon[80926]: pgmap v2326: 305 pgs: 305 active+clean; 388 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 213 op/s
Oct 02 12:51:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:38.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:38 compute-1 nova_compute[230518]: 2025-10-02 12:51:38.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:39 compute-1 nova_compute[230518]: 2025-10-02 12:51:39.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:39 compute-1 ceph-mon[80926]: pgmap v2327: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 767 KiB/s rd, 4.6 MiB/s wr, 135 op/s
Oct 02 12:51:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:40.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:40 compute-1 ovn_controller[129257]: 2025-10-02T12:51:40Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:51:40 compute-1 ovn_controller[129257]: 2025-10-02T12:51:40Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:0c:7f 10.100.0.13
Oct 02 12:51:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:40.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:41 compute-1 nova_compute[230518]: 2025-10-02 12:51:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:51:41 compute-1 nova_compute[230518]: 2025-10-02 12:51:41.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:51:41 compute-1 nova_compute[230518]: 2025-10-02 12:51:41.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:51:41 compute-1 ceph-mon[80926]: pgmap v2328: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 660 KiB/s rd, 4.2 MiB/s wr, 131 op/s
Oct 02 12:51:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:42.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:43 compute-1 ceph-mon[80926]: pgmap v2329: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 730 KiB/s rd, 3.8 MiB/s wr, 119 op/s
Oct 02 12:51:43 compute-1 nova_compute[230518]: 2025-10-02 12:51:43.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:44.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:44 compute-1 nova_compute[230518]: 2025-10-02 12:51:44.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:44.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:44 compute-1 ceph-mon[80926]: pgmap v2330: 305 pgs: 305 active+clean; 430 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 3.4 MiB/s wr, 105 op/s
Oct 02 12:51:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:46.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:47 compute-1 ceph-mon[80926]: pgmap v2331: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.3 MiB/s wr, 109 op/s
Oct 02 12:51:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:48 compute-1 nova_compute[230518]: 2025-10-02 12:51:48.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:49 compute-1 ceph-mon[80926]: pgmap v2332: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 536 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Oct 02 12:51:49 compute-1 nova_compute[230518]: 2025-10-02 12:51:49.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2327210934' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:50 compute-1 nova_compute[230518]: 2025-10-02 12:51:50.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:51:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:51:51 compute-1 ceph-mon[80926]: pgmap v2333: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 985 KiB/s wr, 84 op/s
Oct 02 12:51:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2790795647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:52.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:52 compute-1 ovn_controller[129257]: 2025-10-02T12:51:52Z|00568|binding|INFO|Releasing lport 656124c9-fbda-4e47-b94b-fbe1ed24070e from this chassis (sb_readonly=0)
Oct 02 12:51:52 compute-1 nova_compute[230518]: 2025-10-02 12:51:52.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:52 compute-1 podman[284149]: 2025-10-02 12:51:52.802185579 +0000 UTC m=+0.054321155 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:51:52 compute-1 podman[284148]: 2025-10-02 12:51:52.857205915 +0000 UTC m=+0.112140198 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:51:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:53 compute-1 ceph-mon[80926]: pgmap v2334: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 659 KiB/s wr, 73 op/s
Oct 02 12:51:53 compute-1 nova_compute[230518]: 2025-10-02 12:51:53.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-1 nova_compute[230518]: 2025-10-02 12:51:54.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:54.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:54 compute-1 nova_compute[230518]: 2025-10-02 12:51:54.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:54 compute-1 ceph-mon[80926]: pgmap v2335: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 121 KiB/s wr, 45 op/s
Oct 02 12:51:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:56.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:57 compute-1 ceph-mon[80926]: pgmap v2336: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.5 MiB/s wr, 69 op/s
Oct 02 12:51:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4079673580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:51:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:51:58.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3737708049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3812003800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:51:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:51:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:51:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:51:58.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:51:58 compute-1 nova_compute[230518]: 2025-10-02 12:51:58.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-1 nova_compute[230518]: 2025-10-02 12:51:59.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:51:59 compute-1 ceph-mon[80926]: pgmap v2337: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 12:52:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:00.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.623 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.624 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.626 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Terminating instance
Oct 02 12:52:00 compute-1 nova_compute[230518]: 2025-10-02 12:52:00.627 2 DEBUG nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:52:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1620616246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:01 compute-1 kernel: tap6f16f975-11 (unregistering): left promiscuous mode
Oct 02 12:52:01 compute-1 NetworkManager[44960]: <info>  [1759409521.2204] device (tap6f16f975-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 ovn_controller[129257]: 2025-10-02T12:52:01Z|00569|binding|INFO|Releasing lport 6f16f975-1155-4931-9798-72b46e8ca37f from this chassis (sb_readonly=0)
Oct 02 12:52:01 compute-1 ovn_controller[129257]: 2025-10-02T12:52:01Z|00570|binding|INFO|Setting lport 6f16f975-1155-4931-9798-72b46e8ca37f down in Southbound
Oct 02 12:52:01 compute-1 ovn_controller[129257]: 2025-10-02T12:52:01Z|00571|binding|INFO|Removing iface tap6f16f975-11 ovn-installed in OVS
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.237 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:0c:7f 10.100.0.13'], port_security=['fa:16:3e:9e:0c:7f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '33780b49-b5a1-4f3f-a6c5-a00011d53718', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ea35968-5cdb-414e-9226-6ba534628944', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1308a7eb298f49baaeaf3dc3a6acf592', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fda5d6a4-b319-45fc-a863-02113f198b5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=526db40a-6e83-473d-bcf2-8fd6e9668069, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6f16f975-1155-4931-9798-72b46e8ca37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.238 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6f16f975-1155-4931-9798-72b46e8ca37f in datapath 1ea35968-5cdb-414e-9226-6ba534628944 unbound from our chassis
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.239 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ea35968-5cdb-414e-9226-6ba534628944, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.240 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec9170c-4cca-4f9f-95c7-39d4673a8c14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.241 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 namespace which is not needed anymore
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct 02 12:52:01 compute-1 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000088.scope: Consumed 14.392s CPU time.
Oct 02 12:52:01 compute-1 systemd-machined[188247]: Machine qemu-66-instance-00000088 terminated.
Oct 02 12:52:01 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : haproxy version is 2.8.14-c23fe91
Oct 02 12:52:01 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [NOTICE]   (284072) : path to executable is /usr/sbin/haproxy
Oct 02 12:52:01 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [WARNING]  (284072) : Exiting Master process...
Oct 02 12:52:01 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [ALERT]    (284072) : Current worker (284076) exited with code 143 (Terminated)
Oct 02 12:52:01 compute-1 neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944[284068]: [WARNING]  (284072) : All workers exited. Exiting... (0)
Oct 02 12:52:01 compute-1 systemd[1]: libpod-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope: Deactivated successfully.
Oct 02 12:52:01 compute-1 podman[284217]: 2025-10-02 12:52:01.374126385 +0000 UTC m=+0.044241619 container died 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:52:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-78c0bf0e8b041f6f676207de558f8da163cf1ee93caa7b027803650b8a4521d5-merged.mount: Deactivated successfully.
Oct 02 12:52:01 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6-userdata-shm.mount: Deactivated successfully.
Oct 02 12:52:01 compute-1 podman[284217]: 2025-10-02 12:52:01.410859367 +0000 UTC m=+0.080974571 container cleanup 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:01 compute-1 systemd[1]: libpod-conmon-5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6.scope: Deactivated successfully.
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.461 2 INFO nova.virt.libvirt.driver [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Instance destroyed successfully.
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.462 2 DEBUG nova.objects.instance [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lazy-loading 'resources' on Instance uuid 33780b49-b5a1-4f3f-a6c5-a00011d53718 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:01 compute-1 podman[284250]: 2025-10-02 12:52:01.471438118 +0000 UTC m=+0.041787963 container remove 5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.477 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[17c1d41b-0fdb-49ae-81fd-ed19c291f1dd]: (4, ('Thu Oct  2 12:52:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6)\n5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6\nThu Oct  2 12:52:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 (5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6)\n5cdac57e9c846536541c74c186e601bab2012bcbf2d38a5d28243e1803853ba6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.479 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43cec4f4-2a82-41b3-b1b0-7567ad1e6a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.480 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ea35968-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.482 2 DEBUG nova.virt.libvirt.vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerActionsV293TestJSON-server-1954698310',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionsv293testjson-server-1058292720',id=136,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEq6SX0G1P4eXV7KyXSZ/35WTMB3jbVB1SupKDvjpwDO6estYqWrZvLKSnDQx+vS99wAQs9lzaTS/c5UGzVwCp+w6SXjcPi0171w0SmxtpKZLlMM30YnMg14Y62hnRsW1w==',key_name='tempest-keypair-290079418',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1308a7eb298f49baaeaf3dc3a6acf592',ramdisk_id='',reservation_id='r-vezvikos',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='52ef509e-0e22-464e-93c9-3ddcf574cd64',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsV293TestJSON-365577023',owner_user_name='tempest-ServerActionsV293TestJSON-365577023-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:51:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='af2648eefb594bc49309cccf408f7ae1',uuid=33780b49-b5a1-4f3f-a6c5-a00011d53718,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.482 2 DEBUG nova.network.os_vif_util [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converting VIF {"id": "6f16f975-1155-4931-9798-72b46e8ca37f", "address": "fa:16:3e:9e:0c:7f", "network": {"id": "1ea35968-5cdb-414e-9226-6ba534628944", "bridge": "br-int", "label": "tempest-ServerActionsV293TestJSON-359916218-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1308a7eb298f49baaeaf3dc3a6acf592", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f16f975-11", "ovs_interfaceid": "6f16f975-1155-4931-9798-72b46e8ca37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:01 compute-1 kernel: tap1ea35968-50: left promiscuous mode
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.483 2 DEBUG nova.network.os_vif_util [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.484 2 DEBUG os_vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f16f975-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.498 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7ba157a-2180-4eae-abe4-09679607a9dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.500 2 INFO os_vif [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:0c:7f,bridge_name='br-int',has_traffic_filtering=True,id=6f16f975-1155-4931-9798-72b46e8ca37f,network=Network(1ea35968-5cdb-414e-9226-6ba534628944),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f16f975-11')
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.520 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7ede0d-1272-45a1-a1a8-45c202616315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.521 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af766b83-be47-4a23-9d51-b9b4681ff6e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.535 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04b2f0bb-102c-4f3f-be03-9c532c3a2b0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 734600, 'reachable_time': 29634, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284290, 'error': None, 'target': 'ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 systemd[1]: run-netns-ovnmeta\x2d1ea35968\x2d5cdb\x2d414e\x2d9226\x2d6ba534628944.mount: Deactivated successfully.
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.540 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ea35968-5cdb-414e-9226-6ba534628944 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:52:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:01.540 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[50d2ed17-4f86-4bba-8176-06f84aebf64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.988 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.988 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG oslo_concurrency.lockutils [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:01 compute-1 nova_compute[230518]: 2025-10-02 12:52:01.989 2 DEBUG nova.compute.manager [req-e997c493-2267-4533-88f3-401222fbc2e4 req-263be49b-fd52-4452-b7a1-ed7cd907d962 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-unplugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:52:02 compute-1 ceph-mon[80926]: pgmap v2338: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:52:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:02.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:02.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.402 2 INFO nova.virt.libvirt.driver [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting instance files /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.403 2 INFO nova.virt.libvirt.driver [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deletion of /var/lib/nova/instances/33780b49-b5a1-4f3f-a6c5-a00011d53718_del complete
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.477 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 1.85 seconds to destroy the instance on the hypervisor.
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.477 2 DEBUG oslo.service.loopingcall [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.477 2 DEBUG nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:52:02 compute-1 nova_compute[230518]: 2025-10-02 12:52:02.478 2 DEBUG nova.network.neutron [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:52:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.409 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.410 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:52:03 compute-1 nova_compute[230518]: 2025-10-02 12:52:03.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:03.412 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:03 compute-1 sudo[284295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:03 compute-1 sudo[284295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-1 sudo[284295]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-1 sudo[284332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:03 compute-1 sudo[284332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-1 podman[284319]: 2025-10-02 12:52:03.509122075 +0000 UTC m=+0.062128840 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:03 compute-1 podman[284320]: 2025-10-02 12:52:03.511068166 +0000 UTC m=+0.064419492 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:52:03 compute-1 sudo[284332]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-1 sudo[284384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:03 compute-1 sudo[284384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-1 sudo[284384]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:03 compute-1 ceph-mon[80926]: pgmap v2339: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct 02 12:52:03 compute-1 sudo[284409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 12:52:03 compute-1 sudo[284409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:03 compute-1 nova_compute[230518]: 2025-10-02 12:52:03.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:03 compute-1 nova_compute[230518]: 2025-10-02 12:52:03.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:03 compute-1 nova_compute[230518]: 2025-10-02 12:52:03.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.204 2 DEBUG nova.network.neutron [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.209 2 DEBUG nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.209 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG oslo_concurrency.lockutils [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.210 2 DEBUG nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] No waiting events found dispatching network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.211 2 WARNING nova.compute.manager [req-c6b130ae-637c-4401-8dfb-1d4c7d7c3734 req-7d7ac1f9-3a04-4425-b9ff-b473f643261f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received unexpected event network-vif-plugged-6f16f975-1155-4931-9798-72b46e8ca37f for instance with vm_state active and task_state deleting.
Oct 02 12:52:04 compute-1 podman[284509]: 2025-10-02 12:52:04.225320094 +0000 UTC m=+0.148169049 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.239 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 1.76 seconds to deallocate network for instance.
Oct 02 12:52:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.335 2 DEBUG nova.compute.manager [req-a4510d72-a7be-416e-b22a-eada7ded4f16 req-aa60cb57-ac4b-4482-ac04-338327e6ad1c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Received event network-vif-deleted-6f16f975-1155-4931-9798-72b46e8ca37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:04 compute-1 podman[284509]: 2025-10-02 12:52:04.370448838 +0000 UTC m=+0.293297813 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 02 12:52:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:04.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.626 2 INFO nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Took 0.39 seconds to detach 1 volumes for instance.
Oct 02 12:52:04 compute-1 nova_compute[230518]: 2025-10-02 12:52:04.627 2 DEBUG nova.compute.manager [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Deleting volume: a7bdd212-0d34-40b8-9af9-06388d215028 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.097 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.098 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.169 2 DEBUG oslo_concurrency.processutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:05 compute-1 sudo[284409]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4094236586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.587 2 DEBUG oslo_concurrency.processutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.593 2 DEBUG nova.compute.provider_tree [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.629 2 DEBUG nova.scheduler.client.report [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.652 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.676 2 INFO nova.scheduler.client.report [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Deleted allocations for instance 33780b49-b5a1-4f3f-a6c5-a00011d53718
Oct 02 12:52:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:52:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:52:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:05 compute-1 sudo[284656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-1 sudo[284656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-1 sudo[284656]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-1 nova_compute[230518]: 2025-10-02 12:52:05.785 2 DEBUG oslo_concurrency.lockutils [None req-d916b01a-67e9-4bbf-ae4b-336fd27de1b7 af2648eefb594bc49309cccf408f7ae1 1308a7eb298f49baaeaf3dc3a6acf592 - - default default] Lock "33780b49-b5a1-4f3f-a6c5-a00011d53718" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:05 compute-1 sudo[284681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:52:05 compute-1 sudo[284681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-1 sudo[284681]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-1 sudo[284706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:05 compute-1 sudo[284706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:05 compute-1 sudo[284706]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:05 compute-1 ceph-mon[80926]: pgmap v2340: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Oct 02 12:52:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:05 compute-1 sudo[284731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:52:05 compute-1 sudo[284731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:06 compute-1 sudo[284731]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:06.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3880822609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.550 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.700 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.701 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4377MB free_disk=20.876129150390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.702 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.759 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.760 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:52:06 compute-1 nova_compute[230518]: 2025-10-02 12:52:06.869 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4094236586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3678137017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: pgmap v2341: 305 pgs: 305 active+clean; 344 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/723638720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3880822609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:52:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1114343258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:07 compute-1 nova_compute[230518]: 2025-10-02 12:52:07.346 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:07 compute-1 nova_compute[230518]: 2025-10-02 12:52:07.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:07 compute-1 nova_compute[230518]: 2025-10-02 12:52:07.371 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:07 compute-1 nova_compute[230518]: 2025-10-02 12:52:07.395 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:52:07 compute-1 nova_compute[230518]: 2025-10-02 12:52:07.396 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e321 e321: 3 total, 3 up, 3 in
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.097661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528097743, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1348, "num_deletes": 256, "total_data_size": 2864792, "memory_usage": 2911704, "flush_reason": "Manual Compaction"}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528190921, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 1204616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55035, "largest_seqno": 56378, "table_properties": {"data_size": 1199873, "index_size": 2139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12972, "raw_average_key_size": 21, "raw_value_size": 1189398, "raw_average_value_size": 1962, "num_data_blocks": 94, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409429, "oldest_key_time": 1759409429, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 93303 microseconds, and 4143 cpu microseconds.
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.190974) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 1204616 bytes OK
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.191034) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197527) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197557) EVENT_LOG_v1 {"time_micros": 1759409528197551, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.197577) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 2858304, prev total WAL file size 2858304, number of live WAL files 2.
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.198568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(1176KB)], [108(12MB)]
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528198640, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 14049541, "oldest_snapshot_seqno": -1}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:08 compute-1 nova_compute[230518]: 2025-10-02 12:52:08.370 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:08 compute-1 nova_compute[230518]: 2025-10-02 12:52:08.371 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 8039 keys, 10810823 bytes, temperature: kUnknown
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528420064, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 10810823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10758566, "index_size": 31025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20165, "raw_key_size": 208379, "raw_average_key_size": 25, "raw_value_size": 10616992, "raw_average_value_size": 1320, "num_data_blocks": 1215, "num_entries": 8039, "num_filter_entries": 8039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409528, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.420518) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10810823 bytes
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.446391) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.4 rd, 48.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.2 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(20.6) write-amplify(9.0) OK, records in: 8526, records dropped: 487 output_compression: NoCompression
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.446432) EVENT_LOG_v1 {"time_micros": 1759409528446417, "job": 68, "event": "compaction_finished", "compaction_time_micros": 221514, "compaction_time_cpu_micros": 30965, "output_level": 6, "num_output_files": 1, "total_output_size": 10810823, "num_input_records": 8526, "num_output_records": 8039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528446793, "job": 68, "event": "table_file_deletion", "file_number": 110}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409528448947, "job": 68, "event": "table_file_deletion", "file_number": 108}
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.198460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:52:08.449071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:52:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1114343258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-1 ceph-mon[80926]: osdmap e321: 3 total, 3 up, 3 in
Oct 02 12:52:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2918952010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1669602666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/392679263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:08 compute-1 nova_compute[230518]: 2025-10-02 12:52:08.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:09 compute-1 ceph-mon[80926]: pgmap v2343: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 02 12:52:10 compute-1 nova_compute[230518]: 2025-10-02 12:52:10.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:10 compute-1 nova_compute[230518]: 2025-10-02 12:52:10.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:52:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:10.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:52:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:52:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:11 compute-1 nova_compute[230518]: 2025-10-02 12:52:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:11 compute-1 ceph-mon[80926]: pgmap v2344: 305 pgs: 305 active+clean; 365 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:52:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:52:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3513686125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:52:11 compute-1 nova_compute[230518]: 2025-10-02 12:52:11.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2444992257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e322 e322: 3 total, 3 up, 3 in
Oct 02 12:52:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:12.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:13 compute-1 nova_compute[230518]: 2025-10-02 12:52:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:13 compute-1 nova_compute[230518]: 2025-10-02 12:52:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:13 compute-1 ceph-mon[80926]: pgmap v2345: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 02 12:52:13 compute-1 ceph-mon[80926]: osdmap e322: 3 total, 3 up, 3 in
Oct 02 12:52:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3378941821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e323 e323: 3 total, 3 up, 3 in
Oct 02 12:52:13 compute-1 nova_compute[230518]: 2025-10-02 12:52:13.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:14 compute-1 sudo[284833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:52:14 compute-1 sudo[284833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:14 compute-1 sudo[284833]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:14.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:14 compute-1 sudo[284858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:52:14 compute-1 sudo[284858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:52:14 compute-1 sudo[284858]: pam_unix(sudo:session): session closed for user root
Oct 02 12:52:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:14 compute-1 ceph-mon[80926]: pgmap v2347: 305 pgs: 305 active+clean; 335 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.5 MiB/s wr, 128 op/s
Oct 02 12:52:14 compute-1 ceph-mon[80926]: osdmap e323: 3 total, 3 up, 3 in
Oct 02 12:52:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:52:16 compute-1 nova_compute[230518]: 2025-10-02 12:52:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:16.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:16 compute-1 nova_compute[230518]: 2025-10-02 12:52:16.460 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409521.4588168, 33780b49-b5a1-4f3f-a6c5-a00011d53718 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:16 compute-1 nova_compute[230518]: 2025-10-02 12:52:16.460 2 INFO nova.compute.manager [-] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] VM Stopped (Lifecycle Event)
Oct 02 12:52:16 compute-1 nova_compute[230518]: 2025-10-02 12:52:16.480 2 DEBUG nova.compute.manager [None req-f6f12623-d4c3-4e48-aa39-40ff5b01ef90 - - - - - -] [instance: 33780b49-b5a1-4f3f-a6c5-a00011d53718] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:16 compute-1 nova_compute[230518]: 2025-10-02 12:52:16.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:17 compute-1 ceph-mon[80926]: pgmap v2349: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Oct 02 12:52:17 compute-1 nova_compute[230518]: 2025-10-02 12:52:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:17 compute-1 nova_compute[230518]: 2025-10-02 12:52:17.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:52:17 compute-1 nova_compute[230518]: 2025-10-02 12:52:17.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:52:17 compute-1 nova_compute[230518]: 2025-10-02 12:52:17.081 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:52:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:18.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:18 compute-1 nova_compute[230518]: 2025-10-02 12:52:18.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:19 compute-1 ceph-mon[80926]: pgmap v2350: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 164 op/s
Oct 02 12:52:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:20.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:20.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:21 compute-1 nova_compute[230518]: 2025-10-02 12:52:21.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:52:21 compute-1 ceph-mon[80926]: pgmap v2351: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 544 KiB/s wr, 139 op/s
Oct 02 12:52:21 compute-1 nova_compute[230518]: 2025-10-02 12:52:21.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:22.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 e324: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:23 compute-1 ceph-mon[80926]: pgmap v2352: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 458 KiB/s wr, 117 op/s
Oct 02 12:52:23 compute-1 ceph-mon[80926]: osdmap e324: 3 total, 3 up, 3 in
Oct 02 12:52:23 compute-1 podman[284884]: 2025-10-02 12:52:23.808551816 +0000 UTC m=+0.055289066 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:52:23 compute-1 podman[284883]: 2025-10-02 12:52:23.865144191 +0000 UTC m=+0.106059369 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 12:52:23 compute-1 nova_compute[230518]: 2025-10-02 12:52:23.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:24.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.466 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.467 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.504 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.602 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.603 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.613 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.613 2 INFO nova.compute.claims [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:52:24 compute-1 nova_compute[230518]: 2025-10-02 12:52:24.722 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:25 compute-1 ceph-mon[80926]: pgmap v2354: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 436 KiB/s wr, 111 op/s
Oct 02 12:52:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1864420010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.181 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.187 2 DEBUG nova.compute.provider_tree [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.213 2 DEBUG nova.scheduler.client.report [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.243 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.244 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.293 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.319 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.348 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.378 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.379 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.411 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.485 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.486 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.487 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating image(s)
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.516 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.541 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.563 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.567 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.622 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.623 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.631 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.631 2 INFO nova.compute.claims [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.634 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.635 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.635 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.636 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.659 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.662 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:25 compute-1 nova_compute[230518]: 2025-10-02 12:52:25.916 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:25.949 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:26.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3065473880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.760 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.766 2 DEBUG nova.compute.provider_tree [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1864420010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.813 2 DEBUG nova.scheduler.client.report [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.853 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.854 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.931 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.931 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.951 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:52:26 compute-1 nova_compute[230518]: 2025-10-02 12:52:26.979 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.102 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.103 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.103 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating image(s)
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.134 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.163 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.191 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.195 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.258 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.260 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.260 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:27 compute-1 nova_compute[230518]: 2025-10-02 12:52:27.261 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:28 compute-1 nova_compute[230518]: 2025-10-02 12:52:28.015 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:28 compute-1 nova_compute[230518]: 2025-10-02 12:52:28.019 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d70a747f-a75e-4341-89db-5953efdbbbd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:28 compute-1 ceph-mon[80926]: pgmap v2355: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct 02 12:52:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3065473880' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:28 compute-1 nova_compute[230518]: 2025-10-02 12:52:28.050 2 DEBUG nova.policy [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:52:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:28 compute-1 nova_compute[230518]: 2025-10-02 12:52:28.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.166 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.226 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] resizing rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:52:29 compute-1 ceph-mon[80926]: pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 296 KiB/s wr, 62 op/s
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.596 2 DEBUG nova.objects.instance [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'migration_context' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.611 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.611 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Ensure instance console log exists: /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.612 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.613 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.618 2 WARNING nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.629 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.630 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.632 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.633 2 DEBUG nova.virt.libvirt.host [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.634 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.634 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.635 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.636 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.637 2 DEBUG nova.virt.hardware [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:52:29 compute-1 nova_compute[230518]: 2025-10-02 12:52:29.639 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4066300551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.069 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.099 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.103 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.352 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully created port: 9e761925-3065-4b15-ab37-4ce18061fcf6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:52:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3447191113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.531 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.533 2 DEBUG nova.objects.instance [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.554 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <uuid>9668bd28-30e7-4dd0-87a8-6577135cc19b</uuid>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <name>instance-0000008e</name>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersAaction247Test-server-392071192</nova:name>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:52:29</nova:creationTime>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:user uuid="c6b3ffaf413a4cc592b58bfbf3b40c2b">tempest-ServersAaction247Test-547354818-project-member</nova:user>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <nova:project uuid="324f52a964c64bfc9c214faab2ddda5b">tempest-ServersAaction247Test-547354818</nova:project>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <nova:ports/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <system>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="serial">9668bd28-30e7-4dd0-87a8-6577135cc19b</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="uuid">9668bd28-30e7-4dd0-87a8-6577135cc19b</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </system>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <os>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </os>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <features>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </features>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/9668bd28-30e7-4dd0-87a8-6577135cc19b_disk">
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config">
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:52:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/console.log" append="off"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <video>
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </video>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:52:30 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:52:30 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:52:30 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:52:30 compute-1 nova_compute[230518]: </domain>
Oct 02 12:52:30 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:52:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4066300551' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3447191113' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.648 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.649 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.649 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Using config drive
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.676 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.936 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Creating config drive at /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config
Oct 02 12:52:30 compute-1 nova_compute[230518]: 2025-10-02 12:52:30.947 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmpixkpr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.123 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcmpixkpr" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.170 2 DEBUG nova.storage.rbd_utils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] rbd image 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.177 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:31 compute-1 ceph-mon[80926]: pgmap v2357: 305 pgs: 305 active+clean; 329 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 2.3 MiB/s wr, 48 op/s
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.856 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully updated port: 9e761925-3065-4b15-ab37-4ce18061fcf6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:31 compute-1 nova_compute[230518]: 2025-10-02 12:52:31.875 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:52:32 compute-1 nova_compute[230518]: 2025-10-02 12:52:32.032 2 DEBUG nova.compute.manager [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:32 compute-1 nova_compute[230518]: 2025-10-02 12:52:32.033 2 DEBUG nova.compute.manager [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:32 compute-1 nova_compute[230518]: 2025-10-02 12:52:32.034 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:32 compute-1 nova_compute[230518]: 2025-10-02 12:52:32.078 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:52:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:32.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:32.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:32 compute-1 ceph-mon[80926]: pgmap v2358: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 6.2 MiB/s wr, 107 op/s
Oct 02 12:52:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.392 2 DEBUG nova.network.neutron [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.418 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.419 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance network_info: |[{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.419 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.420 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.557 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 d70a747f-a75e-4341-89db-5953efdbbbd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.666 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:52:33 compute-1 podman[285403]: 2025-10-02 12:52:33.811842928 +0000 UTC m=+0.055802912 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:52:33 compute-1 podman[285404]: 2025-10-02 12:52:33.828595243 +0000 UTC m=+0.056527104 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 02 12:52:33 compute-1 nova_compute[230518]: 2025-10-02 12:52:33.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:34.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:34.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:35 compute-1 nova_compute[230518]: 2025-10-02 12:52:35.233 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:35 compute-1 nova_compute[230518]: 2025-10-02 12:52:35.234 2 DEBUG nova.network.neutron [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:35 compute-1 nova_compute[230518]: 2025-10-02 12:52:35.255 2 DEBUG oslo_concurrency.lockutils [req-cd3e4a0a-f7b2-42b9-980e-94e50d4b3394 req-73294807-c211-4594-b7a9-2a53c39ef3e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:35 compute-1 nova_compute[230518]: 2025-10-02 12:52:35.509 2 DEBUG oslo_concurrency.processutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config 9668bd28-30e7-4dd0-87a8-6577135cc19b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:35 compute-1 nova_compute[230518]: 2025-10-02 12:52:35.509 2 INFO nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deleting local config drive /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b/disk.config because it was imported into RBD.
Oct 02 12:52:35 compute-1 systemd-machined[188247]: New machine qemu-67-instance-0000008e.
Oct 02 12:52:35 compute-1 systemd[1]: Started Virtual Machine qemu-67-instance-0000008e.
Oct 02 12:52:36 compute-1 ceph-mon[80926]: pgmap v2359: 305 pgs: 305 active+clean; 386 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 263 KiB/s rd, 5.8 MiB/s wr, 100 op/s
Oct 02 12:52:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:36.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.445 2 DEBUG nova.objects.instance [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.461 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.461 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Ensure instance console log exists: /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.462 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.465 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start _get_guest_xml network_info=[{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.469 2 WARNING nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.478 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.480 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.484 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.485 2 DEBUG nova.virt.libvirt.host [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.487 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.487 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.488 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.489 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.490 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.491 2 DEBUG nova.virt.hardware [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.494 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1082652408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.922 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.947 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:36 compute-1 nova_compute[230518]: 2025-10-02 12:52:36.951 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.231 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409557.2308712, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.232 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Resumed (Lifecycle Event)
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.239 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.239 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.243 2 INFO nova.virt.libvirt.driver [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance spawned successfully.
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.244 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.257 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.264 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.266 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.267 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.268 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.268 2 DEBUG nova.virt.libvirt.driver [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.294 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.295 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409557.2311225, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.295 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Started (Lifecycle Event)
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.325 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.327 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.339 2 INFO nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 11.85 seconds to spawn the instance on the hypervisor.
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.340 2 DEBUG nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.353 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:52:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1889640611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.399 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.401 2 DEBUG nova.virt.libvirt.vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:27Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.402 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.403 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.405 2 DEBUG nova.objects.instance [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.409 2 INFO nova.compute.manager [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 12.84 seconds to build instance.
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.430 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <name>instance-0000008f</name>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:52:36</nova:creationTime>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:52:37 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <system>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="serial">d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="uuid">d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </system>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <os>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </os>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <features>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </features>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk">
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </source>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config">
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </source>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:52:37 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:61:28:1d"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <target dev="tap9e761925-30"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log" append="off"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <video>
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </video>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:52:37 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:52:37 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:52:37 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:52:37 compute-1 nova_compute[230518]: </domain>
Oct 02 12:52:37 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.433 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Preparing to wait for external event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.433 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.434 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.435 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.437 2 DEBUG nova.virt.libvirt.vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:52:27Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.437 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.439 2 DEBUG nova.network.os_vif_util [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.440 2 DEBUG os_vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.445 2 DEBUG oslo_concurrency.lockutils [None req-67d50489-0cb4-4542-b7cf-a1a33024c806 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e761925-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.451 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e761925-30, col_values=(('external_ids', {'iface-id': '9e761925-3065-4b15-ab37-4ce18061fcf6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:28:1d', 'vm-uuid': 'd70a747f-a75e-4341-89db-5953efdbbbd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-1 NetworkManager[44960]: <info>  [1759409557.4541] manager: (tap9e761925-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.461 2 INFO os_vif [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30')
Oct 02 12:52:37 compute-1 ceph-mon[80926]: pgmap v2360: 305 pgs: 305 active+clean; 411 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 264 KiB/s rd, 5.6 MiB/s wr, 122 op/s
Oct 02 12:52:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1082652408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1889640611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.610 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.610 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.611 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:61:28:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.611 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Using config drive
Oct 02 12:52:37 compute-1 nova_compute[230518]: 2025-10-02 12:52:37.808 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:38.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/297911692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:38 compute-1 nova_compute[230518]: 2025-10-02 12:52:38.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.394 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Creating config drive at /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.398 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv88v742 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.542 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv88v742" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.565 2 DEBUG nova.storage.rbd_utils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.569 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:39 compute-1 ceph-mon[80926]: pgmap v2361: 305 pgs: 305 active+clean; 415 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 5.7 MiB/s wr, 142 op/s
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.832 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.871 2 INFO nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] instance snapshotting
Oct 02 12:52:39 compute-1 nova_compute[230518]: 2025-10-02 12:52:39.872 2 DEBUG nova.objects.instance [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'flavor' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.013 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.014 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.015 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.015 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.016 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.018 2 INFO nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Terminating instance
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.019 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.020 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquired lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.021 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.366 2 INFO nova.virt.libvirt.driver [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Beginning live snapshot process
Oct 02 12:52:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.423 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Oct 02 12:52:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.641 2 DEBUG oslo_concurrency.processutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.642 2 INFO nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deleting local config drive /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/disk.config because it was imported into RBD.
Oct 02 12:52:40 compute-1 kernel: tap9e761925-30: entered promiscuous mode
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:40 compute-1 ovn_controller[129257]: 2025-10-02T12:52:40Z|00572|binding|INFO|Claiming lport 9e761925-3065-4b15-ab37-4ce18061fcf6 for this chassis.
Oct 02 12:52:40 compute-1 ovn_controller[129257]: 2025-10-02T12:52:40Z|00573|binding|INFO|9e761925-3065-4b15-ab37-4ce18061fcf6: Claiming fa:16:3e:61:28:1d 10.100.0.4
Oct 02 12:52:40 compute-1 NetworkManager[44960]: <info>  [1759409560.7412] manager: (tap9e761925-30): new Tun device (/org/freedesktop/NetworkManager/Devices/265)
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.762 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:28:1d 10.100.0.4'], port_security=['fa:16:3e:61:28:1d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '50493e8d-b9e4-415b-bc68-4eb501d460cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9eaefc8-91b5-45ac-8f60-f49bcfa08eb3, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9e761925-3065-4b15-ab37-4ce18061fcf6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.763 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9e761925-3065-4b15-ab37-4ce18061fcf6 in datapath a8923666-d594-4b3c-acca-d8d2652ab2bc bound to our chassis
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.764 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8923666-d594-4b3c-acca-d8d2652ab2bc
Oct 02 12:52:40 compute-1 systemd-machined[188247]: New machine qemu-68-instance-0000008f.
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.785 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8cf521-3d1d-432f-86c9-1c2e969b01ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.788 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8923666-d1 in ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.789 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8923666-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.789 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[96acc015-c65a-484c-a877-c90bec368c3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 systemd[1]: Started Virtual Machine qemu-68-instance-0000008f.
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.790 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b866949c-c68d-40c4-a1b0-a7f85f5c1d19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 systemd-udevd[285659]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.809 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[df755030-2b4d-4c80-9701-5aea864e4820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 NetworkManager[44960]: <info>  [1759409560.8190] device (tap9e761925-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:52:40 compute-1 NetworkManager[44960]: <info>  [1759409560.8201] device (tap9e761925-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:52:40 compute-1 ovn_controller[129257]: 2025-10-02T12:52:40Z|00574|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 ovn-installed in OVS
Oct 02 12:52:40 compute-1 ovn_controller[129257]: 2025-10-02T12:52:40Z|00575|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 up in Southbound
Oct 02 12:52:40 compute-1 nova_compute[230518]: 2025-10-02 12:52:40.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a6587a8c-26e2-4f37-bb36-c22624e50987]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.869 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa4288c-c384-4ccf-81dd-243614edada3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 systemd-udevd[285662]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.879 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4346e85d-dd4c-443d-ae46-3479271da1ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 NetworkManager[44960]: <info>  [1759409560.8809] manager: (tapa8923666-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/266)
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.915 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce420a5e-1e0e-4286-99c9-be30faf47203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.918 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54beb9-f096-41af-b2a0-04336b167cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 ceph-mon[80926]: pgmap v2362: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 166 op/s
Oct 02 12:52:40 compute-1 NetworkManager[44960]: <info>  [1759409560.9454] device (tapa8923666-d0): carrier: link connected
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.955 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f8b04b-d203-4ed1-8987-59a25c2aba3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:40.978 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7d828a-52bd-42ca-b6e2-705fdd360610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8923666-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742450, 'reachable_time': 19139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285690, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.001 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93ed1b26-0ddc-4750-9bbe-a092e83d5c6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:ee99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742450, 'tstamp': 742450}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285691, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.026 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd189c1-aff3-4d31-9c26-8d99111294cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8923666-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:ee:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742450, 'reachable_time': 19139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285692, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.055 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.073 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[09b52348-75d1-436a-9c8b-5f05f2c51dbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.160 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e25b1ec6-ce66-4124-8115-7c5e188ef43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.161 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8923666-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.162 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.163 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8923666-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:41 compute-1 NetworkManager[44960]: <info>  [1759409561.1657] manager: (tapa8923666-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/267)
Oct 02 12:52:41 compute-1 kernel: tapa8923666-d0: entered promiscuous mode
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.168 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8923666-d0, col_values=(('external_ids', {'iface-id': '4b02cca2-258b-4a05-9628-3add3aef7360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:41 compute-1 ovn_controller[129257]: 2025-10-02T12:52:41Z|00576|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.172 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.180 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2814f436-4b0c-401b-b763-610d9ba06e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.181 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-a8923666-d594-4b3c-acca-d8d2652ab2bc
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/a8923666-d594-4b3c-acca-d8d2652ab2bc.pid.haproxy
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID a8923666-d594-4b3c-acca-d8d2652ab2bc
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:52:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:41.182 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'env', 'PROCESS_TAG=haproxy-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8923666-d594-4b3c-acca-d8d2652ab2bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.499 2 DEBUG nova.compute.manager [None req-e2f4b6cf-6fca-483f-bbf1-5158f76b80f4 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.607 2 DEBUG nova.network.neutron [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.630 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Releasing lock "refresh_cache-9668bd28-30e7-4dd0-87a8-6577135cc19b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.631 2 DEBUG nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:52:41 compute-1 podman[285764]: 2025-10-02 12:52:41.589491105 +0000 UTC m=+0.039529541 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:52:41 compute-1 podman[285764]: 2025-10-02 12:52:41.887942408 +0000 UTC m=+0.337980874 container create 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.924 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409561.923868, d70a747f-a75e-4341-89db-5953efdbbbd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.925 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Started (Lifecycle Event)
Oct 02 12:52:41 compute-1 systemd[1]: Started libpod-conmon-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope.
Oct 02 12:52:41 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:52:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a37905e41d2d492f51b4caba938f776c5363e981c3b279006e2868ae414335e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.990 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.994 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409561.9239721, d70a747f-a75e-4341-89db-5953efdbbbd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:41 compute-1 nova_compute[230518]: 2025-10-02 12:52:41.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Paused (Lifecycle Event)
Oct 02 12:52:41 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct 02 12:52:41 compute-1 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000008e.scope: Consumed 5.496s CPU time.
Oct 02 12:52:42 compute-1 systemd-machined[188247]: Machine qemu-67-instance-0000008e terminated.
Oct 02 12:52:42 compute-1 podman[285764]: 2025-10-02 12:52:42.012183696 +0000 UTC m=+0.462222152 container init 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:52:42 compute-1 podman[285764]: 2025-10-02 12:52:42.01807056 +0000 UTC m=+0.468108996 container start 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.026 2 DEBUG nova.compute.manager [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.026 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.027 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.027 2 DEBUG oslo_concurrency.lockutils [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.028 2 DEBUG nova.compute.manager [req-014941a5-1035-45fd-bd4d-f5114d34b80a req-d27eb29f-80e6-449e-866e-9bd24aa4f37b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Processing event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.029 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.033 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.037 2 INFO nova.virt.libvirt.driver [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance spawned successfully.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.038 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:52:42 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : New worker (285785) forked
Oct 02 12:52:42 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : Loading success.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.056 2 INFO nova.virt.libvirt.driver [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance destroyed successfully.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.057 2 DEBUG nova.objects.instance [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lazy-loading 'resources' on Instance uuid 9668bd28-30e7-4dd0-87a8-6577135cc19b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.093 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.135 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409562.032501, d70a747f-a75e-4341-89db-5953efdbbbd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.137 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Resumed (Lifecycle Event)
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.142 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.143 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.144 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.145 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.146 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.147 2 DEBUG nova.virt.libvirt.driver [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.198 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.203 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.245 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.287 2 INFO nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 15.19 seconds to spawn the instance on the hypervisor.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.289 2 DEBUG nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.450 2 INFO nova.compute.manager [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 16.86 seconds to build instance.
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:42 compute-1 nova_compute[230518]: 2025-10-02 12:52:42.479 2 DEBUG oslo_concurrency.lockutils [None req-9da36cbe-02f9-40bb-873c-5b2f866083be 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:43 compute-1 ceph-mon[80926]: pgmap v2363: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 187 op/s
Oct 02 12:52:43 compute-1 nova_compute[230518]: 2025-10-02 12:52:43.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:44.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:44.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.092 2 DEBUG nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.093 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.094 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.094 2 DEBUG oslo_concurrency.lockutils [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.095 2 DEBUG nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:52:45 compute-1 nova_compute[230518]: 2025-10-02 12:52:45.095 2 WARNING nova.compute.manager [req-90917e1a-a3e6-4228-9c77-776150c9c166 req-597022bd-b102-49c9-86ad-2a09d3d36cde 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-9e761925-3065-4b15-ab37-4ce18061fcf6 for instance with vm_state active and task_state None.
Oct 02 12:52:45 compute-1 ceph-mon[80926]: pgmap v2364: 305 pgs: 305 active+clean; 418 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 555 KiB/s wr, 138 op/s
Oct 02 12:52:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:46.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.677 2 INFO nova.virt.libvirt.driver [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deleting instance files /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b_del
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.678 2 INFO nova.virt.libvirt.driver [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deletion of /var/lib/nova/instances/9668bd28-30e7-4dd0-87a8-6577135cc19b_del complete
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.903 2 INFO nova.compute.manager [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 5.27 seconds to destroy the instance on the hypervisor.
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.904 2 DEBUG oslo.service.loopingcall [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.904 2 DEBUG nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:52:46 compute-1 nova_compute[230518]: 2025-10-02 12:52:46.905 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:52:47 compute-1 ceph-mon[80926]: pgmap v2365: 305 pgs: 305 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 567 KiB/s wr, 159 op/s
Oct 02 12:52:47 compute-1 nova_compute[230518]: 2025-10-02 12:52:47.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.122 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.140 2 DEBUG nova.network.neutron [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.200 2 INFO nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Took 1.30 seconds to deallocate network for instance.
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.326 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.327 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:52:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.506 2 DEBUG oslo_concurrency.processutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:52:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:52:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2136771465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.951 2 DEBUG oslo_concurrency.processutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.959 2 DEBUG nova.compute.provider_tree [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:52:48 compute-1 nova_compute[230518]: 2025-10-02 12:52:48.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:49 compute-1 nova_compute[230518]: 2025-10-02 12:52:49.020 2 DEBUG nova.scheduler.client.report [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:52:49 compute-1 nova_compute[230518]: 2025-10-02 12:52:49.040 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:49 compute-1 nova_compute[230518]: 2025-10-02 12:52:49.115 2 INFO nova.scheduler.client.report [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Deleted allocations for instance 9668bd28-30e7-4dd0-87a8-6577135cc19b
Oct 02 12:52:49 compute-1 nova_compute[230518]: 2025-10-02 12:52:49.239 2 DEBUG oslo_concurrency.lockutils [None req-d4d72f14-817a-48b5-a124-b474747e6574 c6b3ffaf413a4cc592b58bfbf3b40c2b 324f52a964c64bfc9c214faab2ddda5b - - default default] Lock "9668bd28-30e7-4dd0-87a8-6577135cc19b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:52:49 compute-1 ceph-mon[80926]: pgmap v2366: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 140 KiB/s wr, 194 op/s
Oct 02 12:52:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2136771465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3424068853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:52:51 compute-1 NetworkManager[44960]: <info>  [1759409571.7075] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Oct 02 12:52:51 compute-1 NetworkManager[44960]: <info>  [1759409571.7084] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Oct 02 12:52:51 compute-1 nova_compute[230518]: 2025-10-02 12:52:51.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:51 compute-1 nova_compute[230518]: 2025-10-02 12:52:51.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:51 compute-1 ovn_controller[129257]: 2025-10-02T12:52:51Z|00577|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct 02 12:52:51 compute-1 nova_compute[230518]: 2025-10-02 12:52:51.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:51 compute-1 ceph-mon[80926]: pgmap v2367: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 178 op/s
Oct 02 12:52:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:52.334 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:52:52 compute-1 nova_compute[230518]: 2025-10-02 12:52:52.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:52.335 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:52:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:52 compute-1 nova_compute[230518]: 2025-10-02 12:52:52.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 e325: 3 total, 3 up, 3 in
Oct 02 12:52:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:53 compute-1 ceph-mon[80926]: pgmap v2368: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct 02 12:52:53 compute-1 ceph-mon[80926]: osdmap e325: 3 total, 3 up, 3 in
Oct 02 12:52:53 compute-1 nova_compute[230518]: 2025-10-02 12:52:53.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:54.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:54 compute-1 podman[285839]: 2025-10-02 12:52:54.80935281 +0000 UTC m=+0.057176015 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:52:54 compute-1 podman[285838]: 2025-10-02 12:52:54.839076083 +0000 UTC m=+0.087501057 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:52:55 compute-1 ceph-mon[80926]: pgmap v2370: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 40 KiB/s wr, 120 op/s
Oct 02 12:52:55 compute-1 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG nova.compute.manager [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:52:55 compute-1 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG nova.compute.manager [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:52:55 compute-1 nova_compute[230518]: 2025-10-02 12:52:55.723 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:52:55 compute-1 nova_compute[230518]: 2025-10-02 12:52:55.724 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:52:55 compute-1 nova_compute[230518]: 2025-10-02 12:52:55.724 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:52:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:57 compute-1 ceph-mon[80926]: pgmap v2371: 305 pgs: 305 active+clean; 378 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 725 KiB/s wr, 112 op/s
Oct 02 12:52:57 compute-1 nova_compute[230518]: 2025-10-02 12:52:57.247 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409562.0536482, 9668bd28-30e7-4dd0-87a8-6577135cc19b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:52:57 compute-1 nova_compute[230518]: 2025-10-02 12:52:57.247 2 INFO nova.compute.manager [-] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] VM Stopped (Lifecycle Event)
Oct 02 12:52:57 compute-1 nova_compute[230518]: 2025-10-02 12:52:57.287 2 DEBUG nova.compute.manager [None req-7d4a9db2-11fe-42b0-b7ef-dc5a09b20e0d - - - - - -] [instance: 9668bd28-30e7-4dd0-87a8-6577135cc19b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:52:57 compute-1 nova_compute[230518]: 2025-10-02 12:52:57.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:52:58.339 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:52:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:52:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:52:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:52:58.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:52:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:52:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:52:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:52:58.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:52:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e326 e326: 3 total, 3 up, 3 in
Oct 02 12:52:58 compute-1 nova_compute[230518]: 2025-10-02 12:52:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:52:59 compute-1 ceph-mon[80926]: pgmap v2372: 305 pgs: 305 active+clean; 432 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 91 op/s
Oct 02 12:52:59 compute-1 ceph-mon[80926]: osdmap e326: 3 total, 3 up, 3 in
Oct 02 12:52:59 compute-1 nova_compute[230518]: 2025-10-02 12:52:59.988 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:52:59 compute-1 nova_compute[230518]: 2025-10-02 12:52:59.988 2 DEBUG nova.network.neutron [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:00 compute-1 nova_compute[230518]: 2025-10-02 12:53:00.041 2 DEBUG oslo_concurrency.lockutils [req-d965c02a-eb2e-4927-9803-03135e65cf8d req-0d28004b-6218-4792-9762-36b27b57e33b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e327 e327: 3 total, 3 up, 3 in
Oct 02 12:53:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:00.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:00.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:00 compute-1 ovn_controller[129257]: 2025-10-02T12:53:00Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:28:1d 10.100.0.4
Oct 02 12:53:00 compute-1 ovn_controller[129257]: 2025-10-02T12:53:00Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:28:1d 10.100.0.4
Oct 02 12:53:01 compute-1 ceph-mon[80926]: pgmap v2374: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.9 MiB/s wr, 138 op/s
Oct 02 12:53:01 compute-1 ceph-mon[80926]: osdmap e327: 3 total, 3 up, 3 in
Oct 02 12:53:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:02.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:02 compute-1 nova_compute[230518]: 2025-10-02 12:53:02.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:03 compute-1 nova_compute[230518]: 2025-10-02 12:53:03.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:03 compute-1 ceph-mon[80926]: pgmap v2376: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 8.9 MiB/s wr, 179 op/s
Oct 02 12:53:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:03 compute-1 nova_compute[230518]: 2025-10-02 12:53:03.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:04.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:04 compute-1 podman[285883]: 2025-10-02 12:53:04.831124229 +0000 UTC m=+0.070934707 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:53:04 compute-1 podman[285882]: 2025-10-02 12:53:04.831136059 +0000 UTC m=+0.067334963 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid)
Oct 02 12:53:05 compute-1 ceph-mon[80926]: pgmap v2377: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.1 MiB/s wr, 157 op/s
Oct 02 12:53:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2154277201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:06.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:06 compute-1 nova_compute[230518]: 2025-10-02 12:53:06.512 2 INFO nova.compute.manager [None req-c05c6fdd-e355-49f8-9aff-c03ffcc51a7e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Get console output
Oct 02 12:53:06 compute-1 nova_compute[230518]: 2025-10-02 12:53:06.517 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:53:06 compute-1 ceph-mon[80926]: pgmap v2378: 305 pgs: 305 active+clean; 478 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 102 op/s
Oct 02 12:53:07 compute-1 nova_compute[230518]: 2025-10-02 12:53:07.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e328 e328: 3 total, 3 up, 3 in
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2500340169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.506 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.602 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.603 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.766 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.767 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.80624771118164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.768 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:08 compute-1 ceph-mon[80926]: osdmap e328: 3 total, 3 up, 3 in
Oct 02 12:53:08 compute-1 ceph-mon[80926]: pgmap v2380: 305 pgs: 305 active+clean; 484 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 152 KiB/s rd, 1.2 MiB/s wr, 77 op/s
Oct 02 12:53:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1871141040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2500340169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance d70a747f-a75e-4341-89db-5953efdbbbd9 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.939 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:53:08 compute-1 nova_compute[230518]: 2025-10-02 12:53:08.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.504 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 e329: 3 total, 3 up, 3 in
Oct 02 12:53:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/67641422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.940 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.945 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.965 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.993 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:53:09 compute-1 nova_compute[230518]: 2025-10-02 12:53:09.994 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2854629276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:10.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:10.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:10 compute-1 nova_compute[230518]: 2025-10-02 12:53:10.995 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:10 compute-1 nova_compute[230518]: 2025-10-02 12:53:10.995 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:10 compute-1 nova_compute[230518]: 2025-10-02 12:53:10.996 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:53:11 compute-1 ceph-mon[80926]: pgmap v2381: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 1014 KiB/s wr, 82 op/s
Oct 02 12:53:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/67641422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-1 ceph-mon[80926]: osdmap e329: 3 total, 3 up, 3 in
Oct 02 12:53:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3794160655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2288584448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-1 nova_compute[230518]: 2025-10-02 12:53:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/301379997' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2526815352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:12.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:12 compute-1 nova_compute[230518]: 2025-10-02 12:53:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:13 compute-1 nova_compute[230518]: 2025-10-02 12:53:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:13 compute-1 ceph-mon[80926]: pgmap v2383: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 145 KiB/s wr, 96 op/s
Oct 02 12:53:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/446103540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:13 compute-1 nova_compute[230518]: 2025-10-02 12:53:13.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:14.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:14 compute-1 sudo[285967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:14 compute-1 sudo[285967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-1 sudo[285967]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-1 sudo[285992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:53:14 compute-1 sudo[285992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-1 sudo[285992]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-1 sudo[286017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:14 compute-1 sudo[286017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:14 compute-1 sudo[286017]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:14 compute-1 sudo[286042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:53:14 compute-1 sudo[286042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:15 compute-1 nova_compute[230518]: 2025-10-02 12:53:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:15 compute-1 sudo[286042]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:15 compute-1 ceph-mon[80926]: pgmap v2384: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 33 KiB/s wr, 75 op/s
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:53:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:53:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:16.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:16 compute-1 nova_compute[230518]: 2025-10-02 12:53:16.433 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:16 compute-1 nova_compute[230518]: 2025-10-02 12:53:16.433 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:16 compute-1 nova_compute[230518]: 2025-10-02 12:53:16.434 2 DEBUG nova.objects.instance [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:16.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:17 compute-1 nova_compute[230518]: 2025-10-02 12:53:17.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:17 compute-1 ceph-mon[80926]: pgmap v2385: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 21 KiB/s wr, 113 op/s
Oct 02 12:53:18 compute-1 nova_compute[230518]: 2025-10-02 12:53:18.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:18 compute-1 nova_compute[230518]: 2025-10-02 12:53:18.219 2 DEBUG nova.objects.instance [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_requests' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:18 compute-1 nova_compute[230518]: 2025-10-02 12:53:18.245 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:53:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.307 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.307 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.330 2 DEBUG nova.policy [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:53:19 compute-1 ceph-mon[80926]: pgmap v2386: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 137 op/s
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.675 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.676 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.676 2 INFO nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Unshelving
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.903 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_requests' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:19 compute-1 nova_compute[230518]: 2025-10-02 12:53:19.924 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'numa_topology' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.030 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.030 2 INFO nova.compute.claims [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:53:20 compute-1 ovn_controller[129257]: 2025-10-02T12:53:20Z|00578|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.246 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.411 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully created port: f47045d5-0c1f-4e24-be1d-a8f054763926 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:53:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1354063422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.684 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.690 2 DEBUG nova.compute.provider_tree [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.716 2 DEBUG nova.scheduler.client.report [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:20 compute-1 nova_compute[230518]: 2025-10-02 12:53:20.747 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.148 2 INFO nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating port d3265627-45dd-403c-990b-451562559afe with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:53:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 e330: 3 total, 3 up, 3 in
Oct 02 12:53:21 compute-1 ceph-mon[80926]: pgmap v2387: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 121 op/s
Oct 02 12:53:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1354063422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.689 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Successfully updated port: f47045d5-0c1f-4e24-be1d-a8f054763926 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.705 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:21 compute-1 sudo[286120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:53:21 compute-1 sudo[286120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:21 compute-1 sudo[286120]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:21 compute-1 sudo[286145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:53:21 compute-1 sudo[286145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:53:21 compute-1 sudo[286145]: pam_unix(sudo:session): session closed for user root
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.940 2 DEBUG nova.compute.manager [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.941 2 DEBUG nova.compute.manager [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-f47045d5-0c1f-4e24-be1d-a8f054763926. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:21 compute-1 nova_compute[230518]: 2025-10-02 12:53:21.941 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.245 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.257 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.258 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:22.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.576 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.576 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:22.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.577 2 DEBUG nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:22 compute-1 ceph-mon[80926]: osdmap e330: 3 total, 3 up, 3 in
Oct 02 12:53:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:53:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1898851491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.721 2 DEBUG nova.compute.manager [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-changed-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.722 2 DEBUG nova.compute.manager [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing instance network info cache due to event network-changed-d3265627-45dd-403c-990b-451562559afe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:22 compute-1 nova_compute[230518]: 2025-10-02 12:53:22.723 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:23 compute-1 ceph-mon[80926]: pgmap v2389: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:24 compute-1 nova_compute[230518]: 2025-10-02 12:53:24.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:24.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:24.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.262 2 DEBUG nova.network.neutron [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.284 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.285 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.286 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating image(s)
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.308 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.311 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.312 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.312 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Refreshing network info cache for port d3265627-45dd-403c-990b-451562559afe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.352 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.378 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.381 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.382 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:25 compute-1 ceph-mon[80926]: pgmap v2390: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 98 op/s
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.789 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 12:53:25 compute-1 podman[286225]: 2025-10-02 12:53:25.877228055 +0000 UTC m=+0.118669333 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 12:53:25 compute-1 podman[286224]: 2025-10-02 12:53:25.890810591 +0000 UTC m=+0.139034963 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.891 2 DEBUG nova.virt.libvirt.imagebackend [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/47596e8e-a667-4ff8-bd1f-3f35c36243ae/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 12:53:25 compute-1 nova_compute[230518]: 2025-10-02 12:53:25.891 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] cloning images/47596e8e-a667-4ff8-bd1f-3f35c36243ae@snap to None/a1440a2f-0663-451f-bef5-bbece30acc40_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 12:53:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.950 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.950 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:25.951 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.243 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "ddf24850ac6bde3d49ac6b6be0dd633aca2f36fd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.333 2 DEBUG nova.network.neutron [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.376 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.377 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.377 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port f47045d5-0c1f-4e24-be1d-a8f054763926 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.380 2 DEBUG nova.virt.libvirt.vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.381 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.381 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG os_vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.383 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.389 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'migration_context' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf47045d5-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf47045d5-0c, col_values=(('external_ids', {'iface-id': 'f47045d5-0c1f-4e24-be1d-a8f054763926', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ea:d4', 'vm-uuid': 'd70a747f-a75e-4341-89db-5953efdbbbd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.3981] manager: (tapf47045d5-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:53:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:26.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.451 2 INFO os_vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.453 2 DEBUG nova.virt.libvirt.vif [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.453 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.454 2 DEBUG nova.network.os_vif_util [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.462 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] flattening vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.525 2 DEBUG nova.virt.libvirt.guest [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] attach device xml: <interface type="ethernet">
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:13:ea:d4"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <target dev="tapf47045d5-0c"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]: </interface>
Oct 02 12:53:26 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 12:53:26 compute-1 kernel: tapf47045d5-0c: entered promiscuous mode
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.5424] manager: (tapf47045d5-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Oct 02 12:53:26 compute-1 ovn_controller[129257]: 2025-10-02T12:53:26Z|00579|binding|INFO|Claiming lport f47045d5-0c1f-4e24-be1d-a8f054763926 for this chassis.
Oct 02 12:53:26 compute-1 ovn_controller[129257]: 2025-10-02T12:53:26Z|00580|binding|INFO|f47045d5-0c1f-4e24-be1d-a8f054763926: Claiming fa:16:3e:13:ea:d4 10.100.0.25
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.557 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ea:d4 10.100.0.25'], port_security=['fa:16:3e:13:ea:d4 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2fdae84-979e-43d0-b38a-d190b914304f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f47045d5-0c1f-4e24-be1d-a8f054763926) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.558 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f47045d5-0c1f-4e24-be1d-a8f054763926 in datapath 8efeaa72-f872-4ae7-abf0-187d9b448a81 bound to our chassis
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.559 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8efeaa72-f872-4ae7-abf0-187d9b448a81
Oct 02 12:53:26 compute-1 systemd-udevd[286433]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.577 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[350dd922-df28-4e5d-a172-877806fff22e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.578 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8efeaa72-f1 in ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:53:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.581 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8efeaa72-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.581 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57b1d45c-d614-49af-901d-85a1c0acde28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c275d54b-064b-4915-86cc-af73a34abe7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.5908] device (tapf47045d5-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.5920] device (tapf47045d5-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.607 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9abef24a-c7b0-4c96-b005-214f4e395ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_controller[129257]: 2025-10-02T12:53:26Z|00581|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 ovn-installed in OVS
Oct 02 12:53:26 compute-1 ovn_controller[129257]: 2025-10-02T12:53:26Z|00582|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 up in Southbound
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.628 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a4f43d-bd4e-4ddb-a4a3-f54bfbe3784e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.663 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[63162d6c-7789-4571-8288-fe96beccacd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc56a1d1-18f0-4a9c-a433-1068ab653443]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.6701] manager: (tap8efeaa72-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.694 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:61:28:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.696 2 DEBUG nova.virt.libvirt.driver [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:13:ea:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.714 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe73e83-5878-46cb-b4c5-9b523b95a928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.718 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34b47925-efce-44e8-86d2-88bc5c901ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.7515] device (tap8efeaa72-f0): carrier: link connected
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.754 2 DEBUG nova.virt.libvirt.guest [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:26 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct 02 12:53:26 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct 02 12:53:26 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:26 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:26 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:26 compute-1 nova_compute[230518]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.761 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c11bf2c-949a-43fe-947e-3f22a39b29c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.788 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1172fd9f-aa32-4530-9bbd-4ae5fa3433a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8efeaa72-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:bb:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747030, 'reachable_time': 43432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286460, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.807 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2642041f-7748-4bf4-911b-3cf96249ec3c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe88:bb1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747030, 'tstamp': 747030}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286461, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.812 2 DEBUG oslo_concurrency.lockutils [None req-127ccc27-8f9e-421c-b15f-da19053e2ad7 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40f6615a-8728-4fb0-bee4-3f20b4925a82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8efeaa72-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:88:bb:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747030, 'reachable_time': 43432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286462, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.864 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[703449ce-3fa6-4de6-b32f-a04a44cc7d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.927 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e76522f-2407-4aac-91d2-aa54da98dfd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8efeaa72-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.929 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8efeaa72-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.957 2 DEBUG nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.958 2 DEBUG oslo_concurrency.lockutils [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.959 2 DEBUG nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.959 2 WARNING nova.compute.manager [req-3fa1f495-04a7-4b68-9aae-cf0dc79fb9b7 req-a6c9c404-d00d-49c3-905f-6e6ab53e5e31 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 NetworkManager[44960]: <info>  [1759409606.9663] manager: (tap8efeaa72-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Oct 02 12:53:26 compute-1 kernel: tap8efeaa72-f0: entered promiscuous mode
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.970 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8efeaa72-f0, col_values=(('external_ids', {'iface-id': '3e18aee7-a834-4412-85b0-a759d74fd965'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_controller[129257]: 2025-10-02T12:53:26Z|00583|binding|INFO|Releasing lport 3e18aee7-a834-4412-85b0-a759d74fd965 from this chassis (sb_readonly=0)
Oct 02 12:53:26 compute-1 nova_compute[230518]: 2025-10-02 12:53:26.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.996 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.997 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6a78a-d1b4-4078-bd57-2736d5018ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.998 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-8efeaa72-f872-4ae7-abf0-187d9b448a81
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/8efeaa72-f872-4ae7-abf0-187d9b448a81.pid.haproxy
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 8efeaa72-f872-4ae7-abf0-187d9b448a81
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:53:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:26.999 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'env', 'PROCESS_TAG=haproxy-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8efeaa72-f872-4ae7-abf0-187d9b448a81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:53:27 compute-1 ceph-mon[80926]: pgmap v2391: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 KiB/s wr, 65 op/s
Oct 02 12:53:27 compute-1 podman[286494]: 2025-10-02 12:53:27.346984365 +0000 UTC m=+0.025093147 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:53:27 compute-1 nova_compute[230518]: 2025-10-02 12:53:27.636 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updated VIF entry in instance network info cache for port d3265627-45dd-403c-990b-451562559afe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:53:27 compute-1 nova_compute[230518]: 2025-10-02 12:53:27.637 2 DEBUG nova.network.neutron [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:27 compute-1 nova_compute[230518]: 2025-10-02 12:53:27.658 2 DEBUG oslo_concurrency.lockutils [req-c1f4b934-c379-45e8-91ec-8e4a8c9368a2 req-5df1b7bf-d15b-49f7-843b-ea15a35bced5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1440a2f-0663-451f-bef5-bbece30acc40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:28 compute-1 podman[286494]: 2025-10-02 12:53:28.115215017 +0000 UTC m=+0.793323719 container create fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.162 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port f47045d5-0c1f-4e24-be1d-a8f054763926. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.162 2 DEBUG nova.network.neutron [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.192 2 DEBUG oslo_concurrency.lockutils [req-84a78d88-963c-4144-b617-7aee7169a2d0 req-29aa4e3b-a5b8-4665-86e7-f769aba9e179 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:28 compute-1 ovn_controller[129257]: 2025-10-02T12:53:28Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ea:d4 10.100.0.25
Oct 02 12:53:28 compute-1 ovn_controller[129257]: 2025-10-02T12:53:28Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ea:d4 10.100.0.25
Oct 02 12:53:28 compute-1 systemd[1]: Started libpod-conmon-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope.
Oct 02 12:53:28 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:53:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ba3fd6480f261d514725bedcd9e73ed5ac3464d3e9ec307380fb269a44db53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:28.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:28 compute-1 podman[286494]: 2025-10-02 12:53:28.463929117 +0000 UTC m=+1.142037849 container init fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.466 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.468 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:28 compute-1 podman[286494]: 2025-10-02 12:53:28.470139042 +0000 UTC m=+1.148247764 container start fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.489 2 DEBUG nova.objects.instance [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:28 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : New worker (286516) forked
Oct 02 12:53:28 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : Loading success.
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.509 2 DEBUG nova.virt.libvirt.vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.510 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.511 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.517 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.520 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.522 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Attempting to detach device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.522 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:13:ea:d4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <target dev="tapf47045d5-0c"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </interface>
Oct 02 12:53:28 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:53:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.711 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.714 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <name>instance-0000008f</name>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <memory unit='KiB'>131072</memory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <resource>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <partition>/machine</partition>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </resource>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <sysinfo type='smbios'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <system>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </system>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <os>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <boot dev='hd'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <smbios mode='sysinfo'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </os>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <features>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <vmcoreinfo state='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </features>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='x2apic'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='vme'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <clock offset='utc'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='hpet' present='no'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_reboot>restart</on_reboot>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_crash>destroy</on_crash>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <disk type='network' device='disk'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='vda' bus='virtio'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='virtio-disk0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <disk type='network' device='cdrom'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='sda' bus='sata'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <readonly/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='sata0-0-0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pcie.0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='1' port='0x10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='2' port='0x11'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='3' port='0x12'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='4' port='0x13'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='5' port='0x14'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='6' port='0x15'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='7' port='0x16'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='8' port='0x17'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.8'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='9' port='0x18'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.9'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='10' port='0x19'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='11' port='0x1a'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.11'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='12' port='0x1b'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.12'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='13' port='0x1c'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.13'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='14' port='0x1d'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.14'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='15' port='0x1e'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.15'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='16' port='0x1f'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.16'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='17' port='0x20'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.17'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='18' port='0x21'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.18'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='19' port='0x22'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.19'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='20' port='0x23'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.20'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='21' port='0x24'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.21'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='22' port='0x25'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.22'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='23' port='0x26'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.23'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='24' port='0x27'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.24'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='25' port='0x28'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.25'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-pci-bridge'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.26'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='usb'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='sata' index='0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='ide'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <interface type='ethernet'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mac address='fa:16:3e:61:28:1d'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='tap9e761925-30'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model type='virtio'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mtu size='1442'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='net0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <interface type='ethernet'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mac address='fa:16:3e:13:ea:d4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='tapf47045d5-0c'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model type='virtio'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mtu size='1442'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='net1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <serial type='pty'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target type='isa-serial' port='0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <model name='isa-serial'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </target>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target type='serial' port='0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </console>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='tablet' bus='usb'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='mouse' bus='ps2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='keyboard' bus='ps2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <listen type='address' address='::0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <audio id='1' type='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <video>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='video0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </video>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <watchdog model='itco' action='reset'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='watchdog0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </watchdog>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <memballoon model='virtio'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <stats period='10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='balloon0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <rng model='virtio'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='rng0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <label>+107:+107</label>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </domain>
Oct 02 12:53:28 compute-1 nova_compute[230518]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.715 2 INFO nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the persistent domain config.
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.715 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] (1/8): Attempting to detach device tapf47045d5-0c with device alias net1 from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.715 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] detach device xml: <interface type="ethernet">
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:13:ea:d4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <target dev="tapf47045d5-0c"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </interface>
Oct 02 12:53:28 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 12:53:28 compute-1 kernel: tapf47045d5-0c (unregistering): left promiscuous mode
Oct 02 12:53:28 compute-1 NetworkManager[44960]: <info>  [1759409608.8196] device (tapf47045d5-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:53:28 compute-1 ovn_controller[129257]: 2025-10-02T12:53:28Z|00584|binding|INFO|Releasing lport f47045d5-0c1f-4e24-be1d-a8f054763926 from this chassis (sb_readonly=0)
Oct 02 12:53:28 compute-1 ovn_controller[129257]: 2025-10-02T12:53:28Z|00585|binding|INFO|Setting lport f47045d5-0c1f-4e24-be1d-a8f054763926 down in Southbound
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:28 compute-1 ovn_controller[129257]: 2025-10-02T12:53:28Z|00586|binding|INFO|Removing iface tapf47045d5-0c ovn-installed in OVS
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.837 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759409608.8365045, d70a747f-a75e-4341-89db-5953efdbbbd9 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.839 2 DEBUG nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Start waiting for the detach event from libvirt for device tapf47045d5-0c with device alias net1 for instance d70a747f-a75e-4341-89db-5953efdbbbd9 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.840 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.842 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ea:d4 10.100.0.25'], port_security=['fa:16:3e:13:ea:d4 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2fdae84-979e-43d0-b38a-d190b914304f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f47045d5-0c1f-4e24-be1d-a8f054763926) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.844 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <name>instance-0000008f</name>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:26</nova:creationTime>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:port uuid="f47045d5-0c1f-4e24-be1d-a8f054763926">
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <memory unit='KiB'>131072</memory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <resource>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <partition>/machine</partition>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </resource>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <sysinfo type='smbios'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <system>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </system>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <os>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <boot dev='hd'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <smbios mode='sysinfo'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </os>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <features>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <vmcoreinfo state='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </features>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='x2apic'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <feature policy='require' name='vme'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <clock offset='utc'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <timer name='hpet' present='no'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_reboot>restart</on_reboot>
Oct 02 12:53:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.845 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f47045d5-0c1f-4e24-be1d-a8f054763926 in datapath 8efeaa72-f872-4ae7-abf0-187d9b448a81 unbound from our chassis
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <on_crash>destroy</on_crash>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <disk type='network' device='disk'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='vda' bus='virtio'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='virtio-disk0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <disk type='network' device='cdrom'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='sda' bus='sata'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <readonly/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='sata0-0-0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pcie.0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='1' port='0x10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='2' port='0x11'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='3' port='0x12'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='4' port='0x13'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='5' port='0x14'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='6' port='0x15'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='7' port='0x16'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='8' port='0x17'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.8'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='9' port='0x18'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.9'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='10' port='0x19'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='11' port='0x1a'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.11'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='12' port='0x1b'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.12'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='13' port='0x1c'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.13'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='14' port='0x1d'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.14'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='15' port='0x1e'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.15'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='16' port='0x1f'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.16'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='17' port='0x20'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.17'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='18' port='0x21'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.18'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='19' port='0x22'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.19'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='20' port='0x23'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.20'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='21' port='0x24'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.21'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='22' port='0x25'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.22'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='23' port='0x26'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.23'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='24' port='0x27'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.24'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target chassis='25' port='0x28'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.25'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model name='pcie-pci-bridge'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='pci.26'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='usb'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <controller type='sata' index='0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='ide'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <interface type='ethernet'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mac address='fa:16:3e:61:28:1d'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target dev='tap9e761925-30'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model type='virtio'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <mtu size='1442'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='net0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <serial type='pty'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target type='isa-serial' port='0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:         <model name='isa-serial'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       </target>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <target type='serial' port='0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </console>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='tablet' bus='usb'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='mouse' bus='ps2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input1'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <input type='keyboard' bus='ps2'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='input2'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <listen type='address' address='::0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <audio id='1' type='none'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <video>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='video0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </video>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <watchdog model='itco' action='reset'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='watchdog0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </watchdog>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <memballoon model='virtio'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <stats period='10'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='balloon0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <rng model='virtio'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <alias name='rng0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <label>+107:+107</label>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </domain>
Oct 02 12:53:28 compute-1 nova_compute[230518]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.844 2 INFO nova.virt.libvirt.driver [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully detached device tapf47045d5-0c from instance d70a747f-a75e-4341-89db-5953efdbbbd9 from the live domain config.
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.845 2 DEBUG nova.virt.libvirt.vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.846 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.846 2 DEBUG nova.network.os_vif_util [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.847 2 DEBUG os_vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.848 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8efeaa72-f872-4ae7-abf0-187d9b448a81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4541d1df-1d89-4da8-881c-5fb3cd50aadd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.850 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47045d5-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:28.850 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 namespace which is not needed anymore
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.857 2 INFO os_vif [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.858 2 DEBUG nova.virt.libvirt.guest [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:28 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:28 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:28 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:28 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:28 compute-1 nova_compute[230518]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:53:28 compute-1 nova_compute[230518]: 2025-10-02 12:53:28.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.101 2 DEBUG nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG oslo_concurrency.lockutils [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.102 2 DEBUG nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.102 2 WARNING nova.compute.manager [req-bd1fcc4f-fcf7-482e-a1e6-ce6c789c2e7d req-2f9317c2-f305-4a64-bfef-070e411844e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.
Oct 02 12:53:29 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : haproxy version is 2.8.14-c23fe91
Oct 02 12:53:29 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [NOTICE]   (286514) : path to executable is /usr/sbin/haproxy
Oct 02 12:53:29 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [WARNING]  (286514) : Exiting Master process...
Oct 02 12:53:29 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [ALERT]    (286514) : Current worker (286516) exited with code 143 (Terminated)
Oct 02 12:53:29 compute-1 neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81[286510]: [WARNING]  (286514) : All workers exited. Exiting... (0)
Oct 02 12:53:29 compute-1 systemd[1]: libpod-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope: Deactivated successfully.
Oct 02 12:53:29 compute-1 podman[286546]: 2025-10-02 12:53:29.282029724 +0000 UTC m=+0.330420358 container died fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:53:29 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280-userdata-shm.mount: Deactivated successfully.
Oct 02 12:53:29 compute-1 systemd[1]: var-lib-containers-storage-overlay-04ba3fd6480f261d514725bedcd9e73ed5ac3464d3e9ec307380fb269a44db53-merged.mount: Deactivated successfully.
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.721 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Image rbd:vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.723 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.723 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Ensure instance console log exists: /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.724 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.724 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.725 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.729 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start _get_guest_xml network_info=[{"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:52:45Z,direct_url=<?>,disk_format='raw',id=47596e8e-a667-4ff8-bd1f-3f35c36243ae,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1789493944-shelved',owner='dbd0afdfb05849f9abfe4cd4454f6a13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:53:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.734 2 WARNING nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.740 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:53:29 compute-1 ceph-mon[80926]: pgmap v2392: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 102 op/s
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.741 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.745 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.746 2 DEBUG nova.virt.libvirt.host [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.748 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.748 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T12:52:45Z,direct_url=<?>,disk_format='raw',id=47596e8e-a667-4ff8-bd1f-3f35c36243ae,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1789493944-shelved',owner='dbd0afdfb05849f9abfe4cd4454f6a13',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T12:53:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.749 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.749 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.750 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.750 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.751 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.751 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.752 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.753 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.754 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.755 2 DEBUG nova.virt.hardware [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.755 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:29 compute-1 nova_compute[230518]: 2025-10-02 12:53:29.774 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:29 compute-1 podman[286546]: 2025-10-02 12:53:29.884065042 +0000 UTC m=+0.932455666 container cleanup fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:29 compute-1 systemd[1]: libpod-conmon-fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280.scope: Deactivated successfully.
Oct 02 12:53:30 compute-1 podman[286580]: 2025-10-02 12:53:30.041736288 +0000 UTC m=+0.123891138 container remove fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.047 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f154149c-f16e-4ab9-b610-bd870c81fe0d]: (4, ('Thu Oct  2 12:53:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 (fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280)\nfcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280\nThu Oct  2 12:53:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 (fcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280)\nfcd860c0f0c38fb90414d240135c8e9430b672ab361c08eacc41f5cc538f3280\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.050 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a6ba88-1ac9-4fbb-aa01-8922a3a16bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.052 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8efeaa72-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 kernel: tap8efeaa72-f0: left promiscuous mode
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.095 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c34f6c5-2f53-42cf-a48d-8938f8fc8c08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.120 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a125d6f4-d6b9-46d0-bec2-26e26794a81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[73dfad6c-84c5-4fbd-85e2-3108c92c2fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.144 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6e31f84d-d615-4b3f-a77e-8672b9a8f36a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747021, 'reachable_time': 41194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286611, 'error': None, 'target': 'ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 systemd[1]: run-netns-ovnmeta\x2d8efeaa72\x2df872\x2d4ae7\x2dabf0\x2d187d9b448a81.mount: Deactivated successfully.
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.148 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8efeaa72-f872-4ae7-abf0-187d9b448a81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:53:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:30.148 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc3fe26-0a36-4653-819d-8d70f3236f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.153 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.154 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.155 2 DEBUG nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.200 2 DEBUG nova.compute.manager [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-deleted-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.200 2 INFO nova.compute.manager [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Neutron deleted interface f47045d5-0c1f-4e24-be1d-a8f054763926; detaching it from the instance and deleting it from the info cache
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.201 2 DEBUG nova.network.neutron [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.225 2 DEBUG nova.objects.instance [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'system_metadata' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:53:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1327837543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.254 2 DEBUG nova.objects.instance [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lazy-loading 'flavor' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.256 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.291 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.298 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.350 2 DEBUG nova.virt.libvirt.vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.351 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.352 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.357 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.362 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <name>instance-0000008f</name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <memory unit='KiB'>131072</memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <resource>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <partition>/machine</partition>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </resource>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <sysinfo type='smbios'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <boot dev='hd'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <smbios mode='sysinfo'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <vmcoreinfo state='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='x2apic'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='vme'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <clock offset='utc'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='hpet' present='no'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_reboot>restart</on_reboot>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_crash>destroy</on_crash>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type='network' device='disk'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='vda' bus='virtio'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='virtio-disk0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type='network' device='cdrom'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='sda' bus='sata'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <readonly/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='sata0-0-0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pcie.0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='1' port='0x10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='2' port='0x11'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='3' port='0x12'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='4' port='0x13'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='5' port='0x14'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='6' port='0x15'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='7' port='0x16'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='8' port='0x17'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.8'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='9' port='0x18'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.9'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='10' port='0x19'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='11' port='0x1a'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.11'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='12' port='0x1b'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.12'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='13' port='0x1c'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.13'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='14' port='0x1d'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.14'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='15' port='0x1e'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.15'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='16' port='0x1f'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.16'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='17' port='0x20'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.17'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='18' port='0x21'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.18'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='19' port='0x22'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.19'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='20' port='0x23'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.20'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='21' port='0x24'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.21'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='22' port='0x25'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.22'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='23' port='0x26'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.23'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='24' port='0x27'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.24'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='25' port='0x28'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.25'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-pci-bridge'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.26'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='usb'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='sata' index='0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='ide'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <interface type='ethernet'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mac address='fa:16:3e:61:28:1d'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='tap9e761925-30'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type='virtio'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mtu size='1442'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='net0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <serial type='pty'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target type='isa-serial' port='0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <model name='isa-serial'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </target>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target type='serial' port='0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </console>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='tablet' bus='usb'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='mouse' bus='ps2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='keyboard' bus='ps2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <listen type='address' address='::0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <audio id='1' type='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='video0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <watchdog model='itco' action='reset'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='watchdog0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </watchdog>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <memballoon model='virtio'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <stats period='10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='balloon0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <rng model='virtio'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='rng0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <label>+107:+107</label>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </domain>
Oct 02 12:53:30 compute-1 nova_compute[230518]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.363 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.371 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:13:ea:d4"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf47045d5-0c"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <name>instance-0000008f</name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <uuid>d70a747f-a75e-4341-89db-5953efdbbbd9</uuid>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:28</nova:creationTime>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <memory unit='KiB'>131072</memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <vcpu placement='static'>1</vcpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <resource>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <partition>/machine</partition>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </resource>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <sysinfo type='smbios'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='manufacturer'>RDO</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='product'>OpenStack Compute</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='serial'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='uuid'>d70a747f-a75e-4341-89db-5953efdbbbd9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name='family'>Virtual Machine</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <boot dev='hd'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <smbios mode='sysinfo'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <vmcoreinfo state='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <cpu mode='custom' match='exact' check='full'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <model fallback='forbid'>Nehalem</model>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='x2apic'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='hypervisor'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <feature policy='require' name='vme'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <clock offset='utc'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='pit' tickpolicy='delay'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name='hpet' present='no'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_poweroff>destroy</on_poweroff>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_reboot>restart</on_reboot>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <on_crash>destroy</on_crash>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type='network' device='disk'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk' index='2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='vda' bus='virtio'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='virtio-disk0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type='network' device='cdrom'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='qemu' type='raw' cache='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username='openstack'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type='ceph' uuid='20fdc58c-b037-5094-a8ef-d490aa7c36f3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol='rbd' name='vms/d70a747f-a75e-4341-89db-5953efdbbbd9_disk.config' index='1'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.100' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.102' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name='192.168.122.101' port='6789'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='sda' bus='sata'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <readonly/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='sata0-0-0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='0' model='pcie-root'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pcie.0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='1' port='0x10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='2' port='0x11'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='3' port='0x12'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='4' port='0x13'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='5' port='0x14'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='6' port='0x15'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='7' port='0x16'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='8' port='0x17'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.8'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='9' port='0x18'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.9'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='10' port='0x19'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='11' port='0x1a'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.11'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='12' port='0x1b'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.12'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='13' port='0x1c'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.13'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='14' port='0x1d'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.14'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='15' port='0x1e'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.15'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='16' port='0x1f'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.16'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='17' port='0x20'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.17'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='18' port='0x21'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.18'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='19' port='0x22'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.19'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='20' port='0x23'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.20'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='21' port='0x24'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.21'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='22' port='0x25'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.22'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='23' port='0x26'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.23'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='24' port='0x27'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.24'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-root-port'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target chassis='25' port='0x28'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.25'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model name='pcie-pci-bridge'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='pci.26'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='usb'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type='sata' index='0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='ide'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </controller>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <interface type='ethernet'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mac address='fa:16:3e:61:28:1d'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev='tap9e761925-30'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type='virtio'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name='vhost' rx_queue_size='512'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mtu size='1442'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='net0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <serial type='pty'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target type='isa-serial' port='0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <model name='isa-serial'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </target>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <console type='pty' tty='/dev/pts/1'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source path='/dev/pts/1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <log file='/var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9/console.log' append='off'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target type='serial' port='0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='serial0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </console>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='tablet' bus='usb'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='usb' bus='0' port='1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='mouse' bus='ps2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input1'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type='keyboard' bus='ps2'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='input2'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </input>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <listen type='address' address='::0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </graphics>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <audio id='1' type='none'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type='virtio' heads='1' primary='yes'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='video0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <watchdog model='itco' action='reset'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='watchdog0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </watchdog>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <memballoon model='virtio'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <stats period='10'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='balloon0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <rng model='virtio'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <backend model='random'>/dev/urandom</backend>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <alias name='rng0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <label>system_u:system_r:svirt_t:s0:c65,c245</label>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c65,c245</imagelabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <label>+107:+107</label>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <imagelabel>+107:+107</imagelabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </seclabel>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </domain>
Oct 02 12:53:30 compute-1 nova_compute[230518]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.372 2 WARNING nova.virt.libvirt.driver [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Detaching interface fa:16:3e:13:ea:d4 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapf47045d5-0c' not found.
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.372 2 DEBUG nova.virt.libvirt.vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.373 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converting VIF {"id": "f47045d5-0c1f-4e24-be1d-a8f054763926", "address": "fa:16:3e:13:ea:d4", "network": {"id": "8efeaa72-f872-4ae7-abf0-187d9b448a81", "bridge": "br-int", "label": "tempest-network-smoke--1227118711", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf47045d5-0c", "ovs_interfaceid": "f47045d5-0c1f-4e24-be1d-a8f054763926", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.373 2 DEBUG nova.network.os_vif_util [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.374 2 DEBUG os_vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf47045d5-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.378 2 INFO os_vif [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=f47045d5-0c1f-4e24-be1d-a8f054763926,network=Network(8efeaa72-f872-4ae7-abf0-187d9b448a81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf47045d5-0c')
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.379 2 DEBUG nova.virt.libvirt.guest [req-11f44a8b-94b8-4b4e-a1e3-61c066743ede req-162a3393-b96a-4971-a972-98c9557f969c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:name>tempest-TestNetworkBasicOps-server-1834334084</nova:name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:creationTime>2025-10-02 12:53:30</nova:creationTime>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:flavor name="m1.nano">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:memory>128</nova:memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:disk>1</nova:disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:swap>0</nova:swap>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:flavor>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:port uuid="9e761925-3065-4b15-ab37-4ce18061fcf6">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </nova:port>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </nova:instance>
Oct 02 12:53:30 compute-1 nova_compute[230518]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 02 12:53:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:30.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:53:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/297649764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1327837543' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.873 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.877 2 DEBUG nova.virt.libvirt.vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='47596e8e-a667-4ff8-bd1f-3f35c36243ae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:53:03.363111',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='47596e8e-a667-4ff8-bd1f-3f35c36243ae'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.877 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.878 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.880 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.896 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <uuid>a1440a2f-0663-451f-bef5-bbece30acc40</uuid>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <name>instance-0000008a</name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerActionsTestOtherB-server-1789493944</nova:name>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:53:29</nova:creationTime>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:user uuid="b5104e5372994cd19b720862cf1ca2ce">tempest-ServerActionsTestOtherB-858400398-project-member</nova:user>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:project uuid="dbd0afdfb05849f9abfe4cd4454f6a13">tempest-ServerActionsTestOtherB-858400398</nova:project>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="47596e8e-a667-4ff8-bd1f-3f35c36243ae"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <nova:port uuid="d3265627-45dd-403c-990b-451562559afe">
Oct 02 12:53:30 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="serial">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="uuid">a1440a2f-0663-451f-bef5-bbece30acc40</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </system>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </os>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </features>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk">
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1440a2f-0663-451f-bef5-bbece30acc40_disk.config">
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </source>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:53:30 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:a5:ff:5d"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <target dev="tapd3265627-45"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/console.log" append="off"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </video>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:53:30 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:53:30 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:53:30 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:53:30 compute-1 nova_compute[230518]: </domain>
Oct 02 12:53:30 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Preparing to wait for external event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.898 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.899 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.899 2 DEBUG nova.virt.libvirt.vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='47596e8e-a667-4ff8-bd1f-3f35c36243ae',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:51:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member',shelved_at='2025-10-02T12:53:03.363111',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='47596e8e-a667-4ff8-bd1f-3f35c36243ae'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:53:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.900 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.900 2 DEBUG nova.network.os_vif_util [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.901 2 DEBUG os_vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3265627-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.905 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3265627-45, col_values=(('external_ids', {'iface-id': 'd3265627-45dd-403c-990b-451562559afe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ff:5d', 'vm-uuid': 'a1440a2f-0663-451f-bef5-bbece30acc40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:30 compute-1 NetworkManager[44960]: <info>  [1759409610.9081] manager: (tapd3265627-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.915 2 INFO os_vif [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.976 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] No VIF found with MAC fa:16:3e:a5:ff:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:53:30 compute-1 nova_compute[230518]: 2025-10-02 12:53:30.977 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Using config drive
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.007 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.028 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.081 2 DEBUG nova.objects.instance [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'keypairs' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.255 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 WARNING nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-unplugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.256 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.257 2 DEBUG oslo_concurrency.lockutils [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.257 2 DEBUG nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] No waiting events found dispatching network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.257 2 WARNING nova.compute.manager [req-e060bec6-7adc-4de5-af20-1cc01e66f4d8 req-0e19292c-2c3a-434a-8bac-8ead9fe0c0c9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received unexpected event network-vif-plugged-f47045d5-0c1f-4e24-be1d-a8f054763926 for instance with vm_state active and task_state None.
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.616 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Creating config drive at /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.625 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5l7aqu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.669 2 INFO nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Port f47045d5-0c1f-4e24-be1d-a8f054763926 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.670 2 DEBUG nova.network.neutron [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.698 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.732 2 DEBUG oslo_concurrency.lockutils [None req-5ce74367-4be0-464d-b193-3d3344f6fb04 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "interface-d70a747f-a75e-4341-89db-5953efdbbbd9-f47045d5-0c1f-4e24-be1d-a8f054763926" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.779 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6c5l7aqu" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.815 2 DEBUG nova.storage.rbd_utils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] rbd image a1440a2f-0663-451f-bef5-bbece30acc40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:53:31 compute-1 nova_compute[230518]: 2025-10-02 12:53:31.819 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:31 compute-1 ceph-mon[80926]: pgmap v2393: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 147 op/s
Oct 02 12:53:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/297649764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:31Z|00587|binding|INFO|Releasing lport 4b02cca2-258b-4a05-9628-3add3aef7360 from this chassis (sb_readonly=0)
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:32.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.594 2 DEBUG oslo_concurrency.processutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config a1440a2f-0663-451f-bef5-bbece30acc40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.594 2 INFO nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting local config drive /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40/disk.config because it was imported into RBD.
Oct 02 12:53:32 compute-1 kernel: tapd3265627-45: entered promiscuous mode
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.6570] manager: (tapd3265627-45): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:32Z|00588|binding|INFO|Claiming lport d3265627-45dd-403c-990b-451562559afe for this chassis.
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:32Z|00589|binding|INFO|d3265627-45dd-403c-990b-451562559afe: Claiming fa:16:3e:a5:ff:5d 10.100.0.6
Oct 02 12:53:32 compute-1 systemd-udevd[286612]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.667 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '7', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.668 138374 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 bound to our chassis
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.671 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:32Z|00590|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe ovn-installed in OVS
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:32Z|00591|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe up in Southbound
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.6775] device (tapd3265627-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.6803] device (tapd3265627-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.684 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[946cd7fb-8e0d-4cb6-b405-284533bf3414]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.685 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9266ebd7-31 in ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:53:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 e331: 3 total, 3 up, 3 in
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.688 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9266ebd7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.689 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[12d7946d-ab9d-4979-b95f-cb69185b1eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.690 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a5c5b6-c0ce-4ce0-91cc-81ae887704bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 systemd-machined[188247]: New machine qemu-69-instance-0000008a.
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.707 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fbcc2d-a56a-4dc7-b1f4-b504b96615e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 systemd[1]: Started Virtual Machine qemu-69-instance-0000008a.
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG nova.compute.manager [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG nova.compute.manager [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing instance network info cache due to event network-changed-9e761925-3065-4b15-ab37-4ce18061fcf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.739 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.740 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.739 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02b93739-b260-4ed0-93bd-97696a8865c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.740 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Refreshing network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.772 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[74420d0c-9bf2-41b7-98a0-3614e0550cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.780 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9400e1d-28c6-4693-9614-d4dc7761f376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.7813] manager: (tap9266ebd7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.788 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.789 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.790 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.791 2 INFO nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Terminating instance
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.793 2 DEBUG nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.817 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c8985f3c-00c1-46be-af61-62501ac47c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.820 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1757430c-03d2-4585-94b3-7850031ade56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.8425] device (tap9266ebd7-30): carrier: link connected
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.847 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[583f6bf8-a4b7-4f9b-954e-55d2cb85442f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.865 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6da26a3e-62fd-473b-b301-f2a39e952c12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747640, 'reachable_time': 43723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286761, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a1e4355-8ee5-4988-80af-cbcd0b9e2183]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:6593'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747640, 'tstamp': 747640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286762, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.896 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fed12466-b789-4833-8b98-15c845376dce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9266ebd7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:65:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747640, 'reachable_time': 43723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286763, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.926 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4b585b-b38f-4652-b4ad-aa25e3554289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.985 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf7ffbf-e5d5-41c7-84bd-43d26de59053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.987 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9266ebd7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:32 compute-1 kernel: tap9266ebd7-30: entered promiscuous mode
Oct 02 12:53:32 compute-1 NetworkManager[44960]: <info>  [1759409612.9898] manager: (tap9266ebd7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:32.992 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9266ebd7-30, col_values=(('external_ids', {'iface-id': '9fee59c9-e25a-4600-b33b-de655b7e8c27'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:32 compute-1 nova_compute[230518]: 2025-10-02 12:53:32.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:32 compute-1 ovn_controller[129257]: 2025-10-02T12:53:32Z|00592|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.009 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.010 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[67c1e37c-6b7b-4765-8a4d-3a54586af2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.010 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.pid.haproxy
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 9266ebd7-321c-4fc7-a6c8-c1c304634bb4
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.011 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'env', 'PROCESS_TAG=haproxy-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9266ebd7-321c-4fc7-a6c8-c1c304634bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:53:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:33 compute-1 kernel: tap9e761925-30 (unregistering): left promiscuous mode
Oct 02 12:53:33 compute-1 NetworkManager[44960]: <info>  [1759409613.4076] device (tap9e761925-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:53:33 compute-1 podman[286838]: 2025-10-02 12:53:33.40966485 +0000 UTC m=+0.052430125 container create 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:33 compute-1 ovn_controller[129257]: 2025-10-02T12:53:33Z|00593|binding|INFO|Releasing lport 9e761925-3065-4b15-ab37-4ce18061fcf6 from this chassis (sb_readonly=0)
Oct 02 12:53:33 compute-1 ovn_controller[129257]: 2025-10-02T12:53:33Z|00594|binding|INFO|Setting lport 9e761925-3065-4b15-ab37-4ce18061fcf6 down in Southbound
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 ovn_controller[129257]: 2025-10-02T12:53:33Z|00595|binding|INFO|Removing iface tap9e761925-30 ovn-installed in OVS
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.446 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:28:1d 10.100.0.4'], port_security=['fa:16:3e:61:28:1d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd70a747f-a75e-4341-89db-5953efdbbbd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '50493e8d-b9e4-415b-bc68-4eb501d460cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9eaefc8-91b5-45ac-8f60-f49bcfa08eb3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9e761925-3065-4b15-ab37-4ce18061fcf6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 systemd[1]: Started libpod-conmon-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope.
Oct 02 12:53:33 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct 02 12:53:33 compute-1 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000008f.scope: Consumed 15.352s CPU time.
Oct 02 12:53:33 compute-1 podman[286838]: 2025-10-02 12:53:33.381809257 +0000 UTC m=+0.024574562 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:53:33 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:53:33 compute-1 systemd-machined[188247]: Machine qemu-68-instance-0000008f terminated.
Oct 02 12:53:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6218758efbae1e2167da1e023b3af2d79632b28cc528df2bdcb085fa84c90af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:53:33 compute-1 podman[286838]: 2025-10-02 12:53:33.499237621 +0000 UTC m=+0.142002916 container init 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:53:33 compute-1 podman[286838]: 2025-10-02 12:53:33.506672144 +0000 UTC m=+0.149437419 container start 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : New worker (286863) forked
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : Loading success.
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.565 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9e761925-3065-4b15-ab37-4ce18061fcf6 in datapath a8923666-d594-4b3c-acca-d8d2652ab2bc unbound from our chassis
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.566 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8923666-d594-4b3c-acca-d8d2652ab2bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e804a156-fc26-4da0-b17a-539586d7c8a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.568 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc namespace which is not needed anymore
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.638 2 INFO nova.virt.libvirt.driver [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Instance destroyed successfully.
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.639 2 DEBUG nova.objects.instance [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid d70a747f-a75e-4341-89db-5953efdbbbd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.654 2 DEBUG nova.virt.libvirt.vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:52:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1834334084',display_name='tempest-TestNetworkBasicOps-server-1834334084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1834334084',id=143,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBWzyEyTLwn/OnMkL7XEVYTgkdobvFvcDEiRuV0NOSu0/Vc6+w7CYW/OJQq8xHJ7yByGK0zJNMNYx8BDnEAMNmh8dLyyLFr5uFvHFoK31s13NXGnnrP3EXSfoIgrfk2ieg==',key_name='tempest-TestNetworkBasicOps-1099517484',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:52:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-oim7lxfm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:52:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=d70a747f-a75e-4341-89db-5953efdbbbd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.655 2 DEBUG nova.network.os_vif_util [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.656 2 DEBUG nova.network.os_vif_util [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.657 2 DEBUG os_vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.661 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e761925-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.667 2 INFO os_vif [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:28:1d,bridge_name='br-int',has_traffic_filtering=True,id=9e761925-3065-4b15-ab37-4ce18061fcf6,network=Network(a8923666-d594-4b3c-acca-d8d2652ab2bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e761925-30')
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : haproxy version is 2.8.14-c23fe91
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [NOTICE]   (285783) : path to executable is /usr/sbin/haproxy
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [WARNING]  (285783) : Exiting Master process...
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [ALERT]    (285783) : Current worker (285785) exited with code 143 (Terminated)
Oct 02 12:53:33 compute-1 neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc[285779]: [WARNING]  (285783) : All workers exited. Exiting... (0)
Oct 02 12:53:33 compute-1 systemd[1]: libpod-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope: Deactivated successfully.
Oct 02 12:53:33 compute-1 podman[286910]: 2025-10-02 12:53:33.746651763 +0000 UTC m=+0.051105364 container died 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:53:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:53:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-1a37905e41d2d492f51b4caba938f776c5363e981c3b279006e2868ae414335e-merged.mount: Deactivated successfully.
Oct 02 12:53:33 compute-1 podman[286910]: 2025-10-02 12:53:33.78736599 +0000 UTC m=+0.091819631 container cleanup 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:53:33 compute-1 ceph-mon[80926]: pgmap v2394: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.5 MiB/s wr, 155 op/s
Oct 02 12:53:33 compute-1 ceph-mon[80926]: osdmap e331: 3 total, 3 up, 3 in
Oct 02 12:53:33 compute-1 systemd[1]: libpod-conmon-13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b.scope: Deactivated successfully.
Oct 02 12:53:33 compute-1 podman[286949]: 2025-10-02 12:53:33.853385742 +0000 UTC m=+0.043279299 container remove 13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.861 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b147ff7c-08f8-4c58-a862-d6d8181a773b]: (4, ('Thu Oct  2 12:53:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc (13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b)\n13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b\nThu Oct  2 12:53:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc (13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b)\n13aa4b98cba66276f75559ae5745a870b17395095597f17d3677d5b57d514e7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[428ad20b-33f3-4dc6-a0c8-e96feb65c84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.863 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8923666-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 kernel: tapa8923666-d0: left promiscuous mode
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.879 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409613.879577, a1440a2f-0663-451f-bef5-bbece30acc40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.880 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Started (Lifecycle Event)
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.884 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[954ac901-4dd4-4b28-b263-ac62d3f7b119]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.900 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.905 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409613.8796678, a1440a2f-0663-451f-bef5-bbece30acc40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.906 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Paused (Lifecycle Event)
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.917 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2b1624-0c89-47be-8220-71b3901720ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.919 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8f3ded-d754-4601-95e1-531451f91646]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.923 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3be85442-5522-4bd0-a7f7-808cc73e29c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742442, 'reachable_time': 31332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286964, 'error': None, 'target': 'ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 systemd[1]: run-netns-ovnmeta\x2da8923666\x2dd594\x2d4b3c\x2dacca\x2dd8d2652ab2bc.mount: Deactivated successfully.
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.939 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8923666-d594-4b3c-acca-d8d2652ab2bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:53:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:33.939 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c2335e-401b-4cd6-8750-86075acc8d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:33 compute-1 nova_compute[230518]: 2025-10-02 12:53:33.948 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:53:34 compute-1 nova_compute[230518]: 2025-10-02 12:53:34.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:34.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:34 compute-1 nova_compute[230518]: 2025-10-02 12:53:34.622 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updated VIF entry in instance network info cache for port 9e761925-3065-4b15-ab37-4ce18061fcf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:53:34 compute-1 nova_compute[230518]: 2025-10-02 12:53:34.623 2 DEBUG nova.network.neutron [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [{"id": "9e761925-3065-4b15-ab37-4ce18061fcf6", "address": "fa:16:3e:61:28:1d", "network": {"id": "a8923666-d594-4b3c-acca-d8d2652ab2bc", "bridge": "br-int", "label": "tempest-network-smoke--1377768226", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e761925-30", "ovs_interfaceid": "9e761925-3065-4b15-ab37-4ce18061fcf6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:34 compute-1 nova_compute[230518]: 2025-10-02 12:53:34.647 2 DEBUG oslo_concurrency.lockutils [req-c4ee23f7-1082-40f9-9ea2-d2129315a127 req-87915950-63ce-4562-87dc-b3dc30fe2e12 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-d70a747f-a75e-4341-89db-5953efdbbbd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:53:34 compute-1 ceph-mon[80926]: pgmap v2396: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.7 MiB/s wr, 150 op/s
Oct 02 12:53:35 compute-1 podman[286966]: 2025-10-02 12:53:35.805979281 +0000 UTC m=+0.060783978 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:53:35 compute-1 podman[286967]: 2025-10-02 12:53:35.8310978 +0000 UTC m=+0.086094333 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.225 2 INFO nova.virt.libvirt.driver [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deleting instance files /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9_del
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.227 2 INFO nova.virt.libvirt.driver [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deletion of /var/lib/nova/instances/d70a747f-a75e-4341-89db-5953efdbbbd9_del complete
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.329 2 INFO nova.compute.manager [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 3.54 seconds to destroy the instance on the hypervisor.
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.329 2 DEBUG oslo.service.loopingcall [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.330 2 DEBUG nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:53:36 compute-1 nova_compute[230518]: 2025-10-02 12:53:36.330 2 DEBUG nova.network.neutron [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:53:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:36.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:37 compute-1 ceph-mon[80926]: pgmap v2397: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.7 MiB/s wr, 151 op/s
Oct 02 12:53:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2781070299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:38.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:38.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.859 2 DEBUG nova.compute.manager [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.860 2 DEBUG oslo_concurrency.lockutils [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.861 2 DEBUG nova.compute.manager [req-ddbfa8a5-96f1-47af-8255-767d157c5b9e req-2b240056-0807-4e04-8f1c-59c11638561a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Processing event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.861 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.865 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409618.8654199, a1440a2f-0663-451f-bef5-bbece30acc40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Resumed (Lifecycle Event)
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.868 2 DEBUG nova.virt.libvirt.driver [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.871 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance spawned successfully.
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.889 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.892 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:53:38 compute-1 nova_compute[230518]: 2025-10-02 12:53:38.943 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.063 2 DEBUG nova.network.neutron [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.081 2 INFO nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Took 2.75 seconds to deallocate network for instance.
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.124 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.125 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:39 compute-1 ceph-mon[80926]: pgmap v2398: 305 pgs: 305 active+clean; 437 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 105 op/s
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.219 2 DEBUG oslo_concurrency.processutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2661338451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.697 2 DEBUG oslo_concurrency.processutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.706 2 DEBUG nova.compute.provider_tree [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.726 2 DEBUG nova.scheduler.client.report [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.766 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.837 2 INFO nova.scheduler.client.report [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance d70a747f-a75e-4341-89db-5953efdbbbd9
Oct 02 12:53:39 compute-1 nova_compute[230518]: 2025-10-02 12:53:39.965 2 DEBUG oslo_concurrency.lockutils [None req-874b0376-3e6d-4de9-8735-34e999066443 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "d70a747f-a75e-4341-89db-5953efdbbbd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e332 e332: 3 total, 3 up, 3 in
Oct 02 12:53:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2661338451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:40.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:40.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:40.640 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:40.641 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:53:40 compute-1 nova_compute[230518]: 2025-10-02 12:53:40.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:40 compute-1 nova_compute[230518]: 2025-10-02 12:53:40.813 2 DEBUG nova.compute.manager [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:40 compute-1 nova_compute[230518]: 2025-10-02 12:53:40.976 2 DEBUG oslo_concurrency.lockutils [None req-2ef3eee9-a252-4ece-bc2e-42750f6e21a9 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 21.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.017 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Received event network-vif-deleted-9e761925-3065-4b15-ab37-4ce18061fcf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.017 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.018 2 DEBUG oslo_concurrency.lockutils [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.019 2 DEBUG nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:41 compute-1 nova_compute[230518]: 2025-10-02 12:53:41.019 2 WARNING nova.compute.manager [req-38078e86-0646-41fb-8555-e9bb45242ad9 req-be4965e2-66b6-4c2a-a6ab-989423b63723 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state None.
Oct 02 12:53:41 compute-1 ceph-mon[80926]: pgmap v2399: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 113 op/s
Oct 02 12:53:41 compute-1 ceph-mon[80926]: osdmap e332: 3 total, 3 up, 3 in
Oct 02 12:53:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1513231900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:42.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:43 compute-1 ceph-mon[80926]: pgmap v2401: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 196 op/s
Oct 02 12:53:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4245949945' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/220270278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:53:43 compute-1 nova_compute[230518]: 2025-10-02 12:53:43.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:44 compute-1 nova_compute[230518]: 2025-10-02 12:53:44.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e333 e333: 3 total, 3 up, 3 in
Oct 02 12:53:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:44.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:44 compute-1 ovn_controller[129257]: 2025-10-02T12:53:44Z|00596|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:53:44 compute-1 nova_compute[230518]: 2025-10-02 12:53:44.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-1 ovn_controller[129257]: 2025-10-02T12:53:45Z|00597|binding|INFO|Releasing lport 9fee59c9-e25a-4600-b33b-de655b7e8c27 from this chassis (sb_readonly=0)
Oct 02 12:53:45 compute-1 nova_compute[230518]: 2025-10-02 12:53:45.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:45 compute-1 ceph-mon[80926]: pgmap v2402: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Oct 02 12:53:45 compute-1 ceph-mon[80926]: osdmap e333: 3 total, 3 up, 3 in
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.573593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625573656, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1369, "num_deletes": 255, "total_data_size": 2909700, "memory_usage": 2962624, "flush_reason": "Manual Compaction"}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625590516, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 1908022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56383, "largest_seqno": 57747, "table_properties": {"data_size": 1901941, "index_size": 3348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13366, "raw_average_key_size": 20, "raw_value_size": 1889629, "raw_average_value_size": 2902, "num_data_blocks": 147, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409528, "oldest_key_time": 1759409528, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 16969 microseconds, and 4848 cpu microseconds.
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590569) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 1908022 bytes OK
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.590593) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593421) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593434) EVENT_LOG_v1 {"time_micros": 1759409625593430, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.593451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2903126, prev total WAL file size 2903126, number of live WAL files 2.
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.594215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(1863KB)], [111(10MB)]
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625594291, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12718845, "oldest_snapshot_seqno": -1}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 8166 keys, 10743450 bytes, temperature: kUnknown
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625686154, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 10743450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10690476, "index_size": 31486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 211940, "raw_average_key_size": 25, "raw_value_size": 10546677, "raw_average_value_size": 1291, "num_data_blocks": 1228, "num_entries": 8166, "num_filter_entries": 8166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409625, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.686408) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10743450 bytes
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.688209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.4 rd, 116.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 8690, records dropped: 524 output_compression: NoCompression
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.688225) EVENT_LOG_v1 {"time_micros": 1759409625688217, "job": 70, "event": "compaction_finished", "compaction_time_micros": 91887, "compaction_time_cpu_micros": 24652, "output_level": 6, "num_output_files": 1, "total_output_size": 10743450, "num_input_records": 8690, "num_output_records": 8166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625688619, "job": 70, "event": "table_file_deletion", "file_number": 113}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409625690176, "job": 70, "event": "table_file_deletion", "file_number": 111}
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.594104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:45 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:53:45.690261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:53:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:46.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.820 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.821 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.821 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.822 2 INFO nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Terminating instance
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.823 2 DEBUG nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:53:46 compute-1 kernel: tapd3265627-45 (unregistering): left promiscuous mode
Oct 02 12:53:46 compute-1 NetworkManager[44960]: <info>  [1759409626.9027] device (tapd3265627-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:53:46 compute-1 ovn_controller[129257]: 2025-10-02T12:53:46Z|00598|binding|INFO|Releasing lport d3265627-45dd-403c-990b-451562559afe from this chassis (sb_readonly=0)
Oct 02 12:53:46 compute-1 ovn_controller[129257]: 2025-10-02T12:53:46Z|00599|binding|INFO|Setting lport d3265627-45dd-403c-990b-451562559afe down in Southbound
Oct 02 12:53:46 compute-1 ovn_controller[129257]: 2025-10-02T12:53:46Z|00600|binding|INFO|Removing iface tapd3265627-45 ovn-installed in OVS
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:46 compute-1 nova_compute[230518]: 2025-10-02 12:53:46.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.954 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ff:5d 10.100.0.6'], port_security=['fa:16:3e:a5:ff:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'a1440a2f-0663-451f-bef5-bbece30acc40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbd0afdfb05849f9abfe4cd4454f6a13', 'neutron:revision_number': '9', 'neutron:security_group_ids': '78172745-da53-4827-9b36-8764c18b9057', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.200', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cd6088-09cb-4f1a-b5f9-48a0ee1d072a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=d3265627-45dd-403c-990b-451562559afe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:53:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.955 138374 INFO neutron.agent.ovn.metadata.agent [-] Port d3265627-45dd-403c-990b-451562559afe in datapath 9266ebd7-321c-4fc7-a6c8-c1c304634bb4 unbound from our chassis
Oct 02 12:53:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.956 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:53:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.957 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f398423d-f9af-43b7-9fe3-cf78f2321395]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:46.958 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 namespace which is not needed anymore
Oct 02 12:53:46 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct 02 12:53:46 compute-1 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000008a.scope: Consumed 9.395s CPU time.
Oct 02 12:53:46 compute-1 systemd-machined[188247]: Machine qemu-69-instance-0000008a terminated.
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.067 2 INFO nova.virt.libvirt.driver [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Instance destroyed successfully.
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.068 2 DEBUG nova.objects.instance [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lazy-loading 'resources' on Instance uuid a1440a2f-0663-451f-bef5-bbece30acc40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.087 2 DEBUG nova.virt.libvirt.vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T12:50:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1789493944',display_name='tempest-ServerActionsTestOtherB-server-1789493944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1789493944',id=138,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOVeGF1+29dCCSGngLFqUI5U8IKnL3UcgGS4WClpsJyDpduj/85QjDW8aY882CsqWWPRk76dFurArmt1NXQYOhmozPVf9s/UvGFBD7n4WLFBfPQzMC9sFsLbMC2wM2/UyQ==',key_name='tempest-keypair-808136615',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:53:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbd0afdfb05849f9abfe4cd4454f6a13',ramdisk_id='',reservation_id='r-0tg60q9k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-858400398',owner_user_name='tempest-ServerActionsTestOtherB-858400398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:53:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b5104e5372994cd19b720862cf1ca2ce',uuid=a1440a2f-0663-451f-bef5-bbece30acc40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.089 2 DEBUG nova.network.os_vif_util [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converting VIF {"id": "d3265627-45dd-403c-990b-451562559afe", "address": "fa:16:3e:a5:ff:5d", "network": {"id": "9266ebd7-321c-4fc7-a6c8-c1c304634bb4", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1350645832-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbd0afdfb05849f9abfe4cd4454f6a13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3265627-45", "ovs_interfaceid": "d3265627-45dd-403c-990b-451562559afe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.091 2 DEBUG nova.network.os_vif_util [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.091 2 DEBUG os_vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3265627-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.101 2 INFO os_vif [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ff:5d,bridge_name='br-int',has_traffic_filtering=True,id=d3265627-45dd-403c-990b-451562559afe,network=Network(9266ebd7-321c-4fc7-a6c8-c1c304634bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3265627-45')
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : haproxy version is 2.8.14-c23fe91
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [NOTICE]   (286861) : path to executable is /usr/sbin/haproxy
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : Exiting Master process...
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : Exiting Master process...
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [ALERT]    (286861) : Current worker (286863) exited with code 143 (Terminated)
Oct 02 12:53:47 compute-1 neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4[286857]: [WARNING]  (286861) : All workers exited. Exiting... (0)
Oct 02 12:53:47 compute-1 systemd[1]: libpod-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope: Deactivated successfully.
Oct 02 12:53:47 compute-1 podman[287054]: 2025-10-02 12:53:47.128175291 +0000 UTC m=+0.056002488 container died 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:53:47 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371-userdata-shm.mount: Deactivated successfully.
Oct 02 12:53:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-f6218758efbae1e2167da1e023b3af2d79632b28cc528df2bdcb085fa84c90af-merged.mount: Deactivated successfully.
Oct 02 12:53:47 compute-1 podman[287054]: 2025-10-02 12:53:47.168616311 +0000 UTC m=+0.096443508 container cleanup 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:53:47 compute-1 systemd[1]: libpod-conmon-167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371.scope: Deactivated successfully.
Oct 02 12:53:47 compute-1 podman[287105]: 2025-10-02 12:53:47.249145046 +0000 UTC m=+0.054991346 container remove 167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.256 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c80150-01a2-48b3-84a4-244eb6cc04e3]: (4, ('Thu Oct  2 12:53:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371)\n167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371\nThu Oct  2 12:53:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 (167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371)\n167d4d6a2c44afdb612e624522a4912cdf400cd0154e6d642fd6608c91286371\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.258 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90f61d5b-cd28-4136-ab45-6414cc4137ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.259 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9266ebd7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 kernel: tap9266ebd7-30: left promiscuous mode
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.281 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f7de92-7e82-463a-bdae-941081f718da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d597723c-3787-4271-ab46-05d1c6a001fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.309 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7348140-c846-4fcd-89e0-4f05256d9011]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.328 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81dd85c8-32a0-4911-a8d5-11cca9fbc7a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747632, 'reachable_time': 16840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287120, 'error': None, 'target': 'ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 systemd[1]: run-netns-ovnmeta\x2d9266ebd7\x2d321c\x2d4fc7\x2da6c8\x2dc1c304634bb4.mount: Deactivated successfully.
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.332 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9266ebd7-321c-4fc7-a6c8-c1c304634bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:53:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:47.332 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbed848-0cef-40e5-9bd5-324767a04bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:53:47 compute-1 ceph-mon[80926]: pgmap v2404: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Oct 02 12:53:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e334 e334: 3 total, 3 up, 3 in
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.973 2 INFO nova.virt.libvirt.driver [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deleting instance files /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del
Oct 02 12:53:47 compute-1 nova_compute[230518]: 2025-10-02 12:53:47.973 2 INFO nova.virt.libvirt.driver [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deletion of /var/lib/nova/instances/a1440a2f-0663-451f-bef5-bbece30acc40_del complete
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.044 2 INFO nova.compute.manager [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 1.22 seconds to destroy the instance on the hypervisor.
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.045 2 DEBUG oslo.service.loopingcall [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.045 2 DEBUG nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.046 2 DEBUG nova.network.neutron [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:53:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:48.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:48.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.637 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409613.6351857, d70a747f-a75e-4341-89db-5953efdbbbd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.638 2 INFO nova.compute.manager [-] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] VM Stopped (Lifecycle Event)
Oct 02 12:53:48 compute-1 nova_compute[230518]: 2025-10-02 12:53:48.671 2 DEBUG nova.compute.manager [None req-ced838e6-4100-43e1-ae40-eb1afc1c7053 - - - - - -] [instance: d70a747f-a75e-4341-89db-5953efdbbbd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:53:48 compute-1 ceph-mon[80926]: osdmap e334: 3 total, 3 up, 3 in
Oct 02 12:53:48 compute-1 ceph-mon[80926]: pgmap v2406: 305 pgs: 305 active+clean; 254 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.257 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.258 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.258 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-unplugged-d3265627-45dd-403c-990b-451562559afe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.259 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG oslo_concurrency.lockutils [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.260 2 DEBUG nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] No waiting events found dispatching network-vif-plugged-d3265627-45dd-403c-990b-451562559afe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.260 2 WARNING nova.compute.manager [req-7dd9625a-4be6-4631-983e-91f386d480d8 req-5f825b7c-f2a5-4ddd-8078-66ff92be4540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received unexpected event network-vif-plugged-d3265627-45dd-403c-990b-451562559afe for instance with vm_state active and task_state deleting.
Oct 02 12:53:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:53:49.644 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.760 2 DEBUG nova.network.neutron [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.787 2 INFO nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Took 1.74 seconds to deallocate network for instance.
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.859 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.860 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:53:49 compute-1 nova_compute[230518]: 2025-10-02 12:53:49.914 2 DEBUG oslo_concurrency.processutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:53:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:53:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3360298542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.398 2 DEBUG oslo_concurrency.processutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.405 2 DEBUG nova.compute.provider_tree [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.423 2 DEBUG nova.scheduler.client.report [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.458 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:50.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.486 2 INFO nova.scheduler.client.report [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Deleted allocations for instance a1440a2f-0663-451f-bef5-bbece30acc40
Oct 02 12:53:50 compute-1 nova_compute[230518]: 2025-10-02 12:53:50.574 2 DEBUG oslo_concurrency.lockutils [None req-1dae5b3e-a416-484f-9926-ea786ed09937 b5104e5372994cd19b720862cf1ca2ce dbd0afdfb05849f9abfe4cd4454f6a13 - - default default] Lock "a1440a2f-0663-451f-bef5-bbece30acc40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:53:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:50.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e335 e335: 3 total, 3 up, 3 in
Oct 02 12:53:51 compute-1 nova_compute[230518]: 2025-10-02 12:53:51.376 2 DEBUG nova.compute.manager [req-85086bab-d4a7-4a7f-8674-32e09caf72e7 req-a0a3db08-5db2-498c-816d-273882f2cf9d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Received event network-vif-deleted-d3265627-45dd-403c-990b-451562559afe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:53:51 compute-1 ceph-mon[80926]: pgmap v2407: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 21 KiB/s wr, 166 op/s
Oct 02 12:53:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3360298542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:53:52 compute-1 nova_compute[230518]: 2025-10-02 12:53:52.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:52.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:52 compute-1 ceph-mon[80926]: osdmap e335: 3 total, 3 up, 3 in
Oct 02 12:53:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:52.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:52 compute-1 nova_compute[230518]: 2025-10-02 12:53:52.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e336 e336: 3 total, 3 up, 3 in
Oct 02 12:53:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:53 compute-1 ceph-mon[80926]: pgmap v2409: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 24 KiB/s wr, 250 op/s
Oct 02 12:53:53 compute-1 ceph-mon[80926]: osdmap e336: 3 total, 3 up, 3 in
Oct 02 12:53:54 compute-1 nova_compute[230518]: 2025-10-02 12:53:54.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:54 compute-1 nova_compute[230518]: 2025-10-02 12:53:54.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:53:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:53:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:54.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:54.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1761461335' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:53:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e337 e337: 3 total, 3 up, 3 in
Oct 02 12:53:55 compute-1 ceph-mon[80926]: pgmap v2411: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.9 KiB/s wr, 181 op/s
Oct 02 12:53:55 compute-1 ceph-mon[80926]: osdmap e337: 3 total, 3 up, 3 in
Oct 02 12:53:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e338 e338: 3 total, 3 up, 3 in
Oct 02 12:53:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:53:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:56.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:53:56 compute-1 podman[287145]: 2025-10-02 12:53:56.80343687 +0000 UTC m=+0.049311738 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 02 12:53:56 compute-1 podman[287144]: 2025-10-02 12:53:56.834518925 +0000 UTC m=+0.083187981 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 12:53:57 compute-1 ceph-mon[80926]: pgmap v2413: 305 pgs: 305 active+clean; 186 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.7 MiB/s wr, 172 op/s
Oct 02 12:53:57 compute-1 ceph-mon[80926]: osdmap e338: 3 total, 3 up, 3 in
Oct 02 12:53:57 compute-1 nova_compute[230518]: 2025-10-02 12:53:57.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:57 compute-1 nova_compute[230518]: 2025-10-02 12:53:57.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:53:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:53:58.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:53:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:53:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:53:58.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:53:59 compute-1 nova_compute[230518]: 2025-10-02 12:53:59.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:53:59 compute-1 ceph-mon[80926]: pgmap v2415: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 132 op/s
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.490 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.490 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:00.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.517 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.620 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.621 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.629 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.629 2 INFO nova.compute.claims [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:54:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:00.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.702 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.702 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.726 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.785 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:00 compute-1 nova_compute[230518]: 2025-10-02 12:54:00.940 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4268433024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.242 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.247 2 DEBUG nova.compute.provider_tree [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.274 2 DEBUG nova.scheduler.client.report [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.313 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.314 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.316 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.322 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.322 2 INFO nova.compute.claims [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.418 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.419 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.464 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.493 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.505 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.590 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.591 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.591 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating image(s)
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.615 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:01 compute-1 ceph-mon[80926]: pgmap v2416: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Oct 02 12:54:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2879670665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4268433024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.674 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.698 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.702 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.763 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.764 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.764 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.765 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.788 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.792 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1477771253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.922 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.928 2 DEBUG nova.policy [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.934 2 DEBUG nova.compute.provider_tree [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.949 2 DEBUG nova.scheduler.client.report [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.987 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:01 compute-1 nova_compute[230518]: 2025-10-02 12:54:01.988 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.064 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409627.063448, a1440a2f-0663-451f-bef5-bbece30acc40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.065 2 INFO nova.compute.manager [-] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] VM Stopped (Lifecycle Event)
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.078 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.078 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.086 2 DEBUG nova.compute.manager [None req-13cc619e-8d96-4ed4-91fb-1d85f1981ff2 - - - - - -] [instance: a1440a2f-0663-451f-bef5-bbece30acc40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.096 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.120 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.215 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.216 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.216 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating image(s)
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.244 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.269 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.294 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.298 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.361 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.362 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.363 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.363 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.391 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.394 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1e0932b-16b6-46b9-8192-b89b91e91802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:02.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:02 compute-1 nova_compute[230518]: 2025-10-02 12:54:02.528 2 DEBUG nova.policy [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:54:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:02.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1477771253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.090 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Successfully created port: f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:54:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 e339: 3 total, 3 up, 3 in
Oct 02 12:54:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.532 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.606 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.736 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Successfully created port: 20204810-ff47-450e-80e5-23d03b435455 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:54:03 compute-1 ceph-mon[80926]: pgmap v2417: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.8 MiB/s wr, 236 op/s
Oct 02 12:54:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2957687789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:03 compute-1 ceph-mon[80926]: osdmap e339: 3 total, 3 up, 3 in
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.826 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 a1e0932b-16b6-46b9-8192-b89b91e91802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.902 2 DEBUG nova.objects.instance [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.941 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.941 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Ensure instance console log exists: /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.942 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:03 compute-1 nova_compute[230518]: 2025-10-02 12:54:03.946 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.133 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Successfully updated port: f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.139 2 DEBUG nova.objects.instance [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Ensure instance console log exists: /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.150 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.151 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.151 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.152 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.213 2 DEBUG nova.compute.manager [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.213 2 DEBUG nova.compute.manager [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.214 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.384 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:54:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:04.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.628 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Successfully updated port: 20204810-ff47-450e-80e5-23d03b435455 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.646 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.647 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.647 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:54:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:04.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG nova.compute.manager [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG nova.compute.manager [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:04 compute-1 nova_compute[230518]: 2025-10-02 12:54:04.723 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:04 compute-1 ceph-mon[80926]: pgmap v2419: 305 pgs: 305 active+clean; 156 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 171 op/s
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.211 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.631 2 DEBUG nova.network.neutron [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.672 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.673 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance network_info: |[{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.674 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.674 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.679 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start _get_guest_xml network_info=[{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.684 2 WARNING nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.688 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.688 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.691 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.691 2 DEBUG nova.virt.libvirt.host [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.693 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.694 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.695 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.696 2 DEBUG nova.virt.hardware [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:54:05 compute-1 nova_compute[230518]: 2025-10-02 12:54:05.699 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3687837290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.136 2 DEBUG nova.network.neutron [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.141 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:54:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2143275758' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.181 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.186 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.224 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.225 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance network_info: |[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.226 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.227 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.230 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start _get_guest_xml network_info=[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.235 2 WARNING nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.241 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.242 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.245 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.245 2 DEBUG nova.virt.libvirt.host [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.246 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.246 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.247 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.247 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.248 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.249 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.250 2 DEBUG nova.virt.hardware [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.253 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:06.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1284789999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.609 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.612 2 DEBUG nova.virt.libvirt.vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:01Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.613 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.615 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.617 2 DEBUG nova.objects.instance [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.645 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <uuid>95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</uuid>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <name>instance-00000091</name>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-1578945059</nova:name>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:54:05</nova:creationTime>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <nova:port uuid="f67b8436-7ef3-4d35-814c-3d62c9a9fec0">
Oct 02 12:54:06 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <system>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="serial">95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="uuid">95fd2a5f-82d9-46eb-b218-cb0a9a4e2765</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </system>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <os>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </os>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <features>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </features>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk">
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </source>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config">
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </source>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:54:06 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:49:9e:90"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <target dev="tapf67b8436-7e"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/console.log" append="off"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <video>
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </video>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:54:06 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:54:06 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:54:06 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:54:06 compute-1 nova_compute[230518]: </domain>
Oct 02 12:54:06 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.648 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Preparing to wait for external event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.648 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.649 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.649 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.651 2 DEBUG nova.virt.libvirt.vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:01Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.652 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.653 2 DEBUG nova.network.os_vif_util [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.654 2 DEBUG os_vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:06.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.666 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf67b8436-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.667 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf67b8436-7e, col_values=(('external_ids', {'iface-id': 'f67b8436-7ef3-4d35-814c-3d62c9a9fec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:9e:90', 'vm-uuid': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:06 compute-1 NetworkManager[44960]: <info>  [1759409646.6705] manager: (tapf67b8436-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.677 2 INFO os_vif [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e')
Oct 02 12:54:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/651591536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.728 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.758 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.764 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:06 compute-1 podman[287650]: 2025-10-02 12:54:06.811973036 +0000 UTC m=+0.091478511 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:06 compute-1 podman[287653]: 2025-10-02 12:54:06.812947596 +0000 UTC m=+0.078825074 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.852 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.853 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.853 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:49:9e:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.853 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Using config drive
Oct 02 12:54:06 compute-1 nova_compute[230518]: 2025-10-02 12:54:06.878 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3023350745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.205 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.207 2 DEBUG nova.virt.libvirt.vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:02Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.207 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.208 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.209 2 DEBUG nova.objects.instance [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.227 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <uuid>a1e0932b-16b6-46b9-8192-b89b91e91802</uuid>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <name>instance-00000092</name>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1723654799</nova:name>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:54:06</nova:creationTime>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <nova:port uuid="20204810-ff47-450e-80e5-23d03b435455">
Oct 02 12:54:07 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <system>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="serial">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="uuid">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </system>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <os>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </os>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <features>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </features>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk">
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config">
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </source>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:54:07 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:5b:41:1c"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <target dev="tap20204810-ff"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log" append="off"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <video>
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </video>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:54:07 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:54:07 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:54:07 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:54:07 compute-1 nova_compute[230518]: </domain>
Oct 02 12:54:07 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Preparing to wait for external event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.228 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.virt.libvirt.vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:54:02Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.229 2 DEBUG nova.network.os_vif_util [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.230 2 DEBUG os_vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.234 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20204810-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20204810-ff, col_values=(('external_ids', {'iface-id': '20204810-ff47-450e-80e5-23d03b435455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:41:1c', 'vm-uuid': 'a1e0932b-16b6-46b9-8192-b89b91e91802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 NetworkManager[44960]: <info>  [1759409647.2375] manager: (tap20204810-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.244 2 INFO os_vif [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')
Oct 02 12:54:07 compute-1 ceph-mon[80926]: pgmap v2420: 305 pgs: 305 active+clean; 188 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.4 MiB/s wr, 170 op/s
Oct 02 12:54:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3687837290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1284789999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/651591536' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.329 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.330 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.330 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:5b:41:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.330 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Using config drive
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.355 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.573 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Creating config drive at /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.580 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2ngchn4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.647 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.648 2 DEBUG nova.network.neutron [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.663 2 DEBUG oslo_concurrency.lockutils [req-0d319b2b-ef45-44a7-95d1-28d3d62d62b4 req-abe9e0bc-2eb4-4ac6-a350-399f548e08aa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.741 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2ngchn4" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.774 2 DEBUG nova.storage.rbd_utils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.777 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.986 2 DEBUG oslo_concurrency.processutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:07 compute-1 nova_compute[230518]: 2025-10-02 12:54:07.987 2 INFO nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deleting local config drive /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765/disk.config because it was imported into RBD.
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.0354] manager: (tapf67b8436-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Oct 02 12:54:08 compute-1 kernel: tapf67b8436-7e: entered promiscuous mode
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00601|binding|INFO|Claiming lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for this chassis.
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00602|binding|INFO|f67b8436-7ef3-4d35-814c-3d62c9a9fec0: Claiming fa:16:3e:49:9e:90 10.100.0.5
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.051 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9e:90 10.100.0.5'], port_security=['fa:16:3e:49:9e:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83bbba21-a002-4973-9f29-252bf270271b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33cdd8ea-2edc-4577-8235-b9f26b2b357e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddb9b596-76b5-41a0-897b-33f6aa75f8df, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f67b8436-7ef3-4d35-814c-3d62c9a9fec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.052 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 in datapath 83bbba21-a002-4973-9f29-252bf270271b bound to our chassis
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 83bbba21-a002-4973-9f29-252bf270271b
Oct 02 12:54:08 compute-1 systemd-machined[188247]: New machine qemu-70-instance-00000091.
Oct 02 12:54:08 compute-1 systemd-udevd[287827]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b59e0a-c67d-4fc8-a0ae-2135a893e950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.069 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap83bbba21-a1 in ovnmeta-83bbba21-a002-4973-9f29-252bf270271b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.071 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap83bbba21-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.071 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb6a685-08cf-4f4d-896b-c49ea447f9da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.072 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3dcb29-0435-4a24-b2ba-d73bbb742186]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.083 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b48cc5e2-b2b8-4f36-ba8e-5828c0a71d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 systemd[1]: Started Virtual Machine qemu-70-instance-00000091.
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.0850] device (tapf67b8436-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.0860] device (tapf67b8436-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.102 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c60dbf00-77e0-405a-b44b-a01720cf53ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00603|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 ovn-installed in OVS
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00604|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 up in Southbound
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.139 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f323b6ce-ee4e-4536-92b1-2d2a0eca75d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.143 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1721b1b7-f389-434c-b59f-761cc210087c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.1444] manager: (tap83bbba21-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Oct 02 12:54:08 compute-1 systemd-udevd[287830]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.175 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a58a2dd7-54e7-413f-aee5-749c773fa6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.178 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[52f604d0-e5d8-4070-a5a3-6bc24aa8ee3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.2008] device (tap83bbba21-a0): carrier: link connected
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.206 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[514daa98-2afe-4e3f-ac8f-37f485fb09fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.218 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Creating config drive at /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.223 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[221c7680-dd27-46ee-8b01-e184c9e1fb34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83bbba21-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c8:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751175, 'reachable_time': 23991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287863, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.228 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzc5e4lmv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.244 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f16bf7e3-aff9-467b-8b0c-7af60009b097]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:c86e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751175, 'tstamp': 751175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287864, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.267 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[573846cf-e2c2-476d-8513-f326ec0ce206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap83bbba21-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:c8:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 184], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751175, 'reachable_time': 23991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287866, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.294 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[848e978c-6d88-477a-bcdc-0a6e40744b58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3023350745' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.362 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c059ec-ae7d-4b5d-9b3e-6854c9df1d61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83bbba21-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83bbba21-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.3667] manager: (tap83bbba21-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.369 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzc5e4lmv" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:08 compute-1 kernel: tap83bbba21-a0: entered promiscuous mode
Oct 02 12:54:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.372 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap83bbba21-a0, col_values=(('external_ids', {'iface-id': '3c77c977-5d61-489e-bb52-04238449aff2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00605|binding|INFO|Releasing lport 3c77c977-5d61-489e-bb52-04238449aff2 from this chassis (sb_readonly=0)
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.396 2 DEBUG nova.storage.rbd_utils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.402 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.404 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0d55eb-3de2-4d5f-8618-b792c1d57a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.404 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-83bbba21-a002-4973-9f29-252bf270271b
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/83bbba21-a002-4973-9f29-252bf270271b.pid.haproxy
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 83bbba21-a002-4973-9f29-252bf270271b
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.404 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.407 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'env', 'PROCESS_TAG=haproxy-83bbba21-a002-4973-9f29-252bf270271b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/83bbba21-a002-4973-9f29-252bf270271b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.443 2 DEBUG nova.compute.manager [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG oslo_concurrency.lockutils [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.444 2 DEBUG nova.compute.manager [req-480fdfbd-bb2a-4b82-b657-04c7e540086b req-ea6d5c4f-12e3-4c5b-aae1-22f90710aeea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Processing event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:54:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:08.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.516 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.516 2 DEBUG nova.network.neutron [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.536 2 DEBUG oslo_concurrency.lockutils [req-a977c721-5647-4a39-8125-0a6c33f22d22 req-7c4e72f9-23b2-4e62-9f51-7012bd7278a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:54:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2532031233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:08.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.678 2 DEBUG oslo_concurrency.processutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.678 2 INFO nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deleting local config drive /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/disk.config because it was imported into RBD.
Oct 02 12:54:08 compute-1 kernel: tap20204810-ff: entered promiscuous mode
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.7362] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/283)
Oct 02 12:54:08 compute-1 systemd-udevd[287850]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00606|binding|INFO|Claiming lport 20204810-ff47-450e-80e5-23d03b435455 for this chassis.
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00607|binding|INFO|20204810-ff47-450e-80e5-23d03b435455: Claiming fa:16:3e:5b:41:1c 10.100.0.7
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.750 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.7569] device (tap20204810-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:54:08 compute-1 NetworkManager[44960]: <info>  [1759409648.7576] device (tap20204810-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:54:08 compute-1 podman[287944]: 2025-10-02 12:54:08.782223859 +0000 UTC m=+0.059524759 container create dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:54:08 compute-1 systemd-machined[188247]: New machine qemu-71-instance-00000092.
Oct 02 12:54:08 compute-1 systemd[1]: Started Virtual Machine qemu-71-instance-00000092.
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00608|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 ovn-installed in OVS
Oct 02 12:54:08 compute-1 ovn_controller[129257]: 2025-10-02T12:54:08Z|00609|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 up in Southbound
Oct 02 12:54:08 compute-1 nova_compute[230518]: 2025-10-02 12:54:08.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:08 compute-1 systemd[1]: Started libpod-conmon-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope.
Oct 02 12:54:08 compute-1 podman[287944]: 2025-10-02 12:54:08.748064656 +0000 UTC m=+0.025365576 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:54:08 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:54:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2b6de8b1fd368c961dc50bb0264389c886212a783acf9ddb0316ed806440300/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:08 compute-1 podman[287944]: 2025-10-02 12:54:08.863348084 +0000 UTC m=+0.140649004 container init dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:08 compute-1 podman[287944]: 2025-10-02 12:54:08.868737892 +0000 UTC m=+0.146038792 container start dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:54:08 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : New worker (287999) forked
Oct 02 12:54:08 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : Loading success.
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.933 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.934 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.944 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3a07b690-f4b5-4f5c-8114-f0bc41126902]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.944 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa39243cb-51 in ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.946 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa39243cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.946 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d35c3092-50bf-4853-87ec-d2db86198c08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.947 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a4999c-0336-4701-a3d4-a4cec884be04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.961 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd8975-793f-46a9-a0ac-a59d39551210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:08.985 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[591d4651-ed02-4a3e-b141-9e2f03320128]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.014 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b051a576-abff-4cbe-801f-9719f4a5fb4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.022 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c4c895-03b1-4108-9dd6-b72e07db2b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 NetworkManager[44960]: <info>  [1759409649.0233] manager: (tapa39243cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/284)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.054 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[224aadee-e422-4805-9519-e8e90c5124b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.057 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[278d9607-71d3-4e0b-a41e-4b65b6b03eb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.078 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:09 compute-1 NetworkManager[44960]: <info>  [1759409649.0891] device (tapa39243cb-50): carrier: link connected
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.094 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5060ca89-19e5-46bb-a6ea-8154edf23a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.118 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[47e81210-97c8-4ec5-a3bb-276f593c6934]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751264, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288043, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.133 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d971a80e-4f9f-4ba6-a192-53c0e555f999]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 751264, 'tstamp': 751264}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288044, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.153 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d44b8436-822e-4dd7-96b5-4b9974d880e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751264, 'reachable_time': 41314, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288046, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.187 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72cd5559-6529-45cc-8617-f2c06d0577d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.222 2 DEBUG nova.compute.manager [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.223 2 DEBUG oslo_concurrency.lockutils [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.224 2 DEBUG nova.compute.manager [req-3cb99851-96e3-44ea-b271-d7ba64f161d6 req-c7f9f926-2aad-418a-8b4e-3cc6e17385f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Processing event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.259 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6828ab1f-9748-4b4f-869c-ba0085f7f370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.260 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.261 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.261 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39243cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:09 compute-1 kernel: tapa39243cb-50: entered promiscuous mode
Oct 02 12:54:09 compute-1 NetworkManager[44960]: <info>  [1759409649.2637] manager: (tapa39243cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.265 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa39243cb-50, col_values=(('external_ids', {'iface-id': '75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:09 compute-1 ovn_controller[129257]: 2025-10-02T12:54:09Z|00610|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.285 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.286 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d034edaf-0199-4c2e-9ac5-3542d8c584a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.287 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:54:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:09.289 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'env', 'PROCESS_TAG=haproxy-a39243cb-5286-4429-8879-7b4d535de128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a39243cb-5286-4429-8879-7b4d535de128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:54:09 compute-1 ceph-mon[80926]: pgmap v2421: 305 pgs: 305 active+clean; 259 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 6.8 MiB/s wr, 188 op/s
Oct 02 12:54:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2532031233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3621289055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.446 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.448 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4453878, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.449 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Started (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.452 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.457 2 INFO nova.virt.libvirt.driver [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance spawned successfully.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.457 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.475 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.480 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.485 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.486 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.487 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.487 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.488 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.489 2 DEBUG nova.virt.libvirt.driver [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/498936511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.514 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.515 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4471543, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.515 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Paused (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.520 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.541 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.548 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.4538093, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.549 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Resumed (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.556 2 INFO nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 7.97 seconds to spawn the instance on the hypervisor.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.556 2 DEBUG nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.571 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.575 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.642 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.653 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.654 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:09 compute-1 podman[288140]: 2025-10-02 12:54:09.654674249 +0000 UTC m=+0.052343002 container create b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.692 2 INFO nova.compute.manager [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 9.11 seconds to build instance.
Oct 02 12:54:09 compute-1 systemd[1]: Started libpod-conmon-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.714 2 DEBUG oslo_concurrency.lockutils [None req-2797946d-652b-40be-a47e-2b7f31bdd613 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:09 compute-1 podman[288140]: 2025-10-02 12:54:09.629051805 +0000 UTC m=+0.026720588 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:54:09 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:54:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/134e48fef8ee40dfc4155c6ec896c104676820b240dcee32a40405e7898ee537/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.732 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7319562, a1e0932b-16b6-46b9-8192-b89b91e91802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.732 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Started (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.734 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:54:09 compute-1 podman[288140]: 2025-10-02 12:54:09.73787633 +0000 UTC m=+0.135545093 container init b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.739 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.742 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance spawned successfully.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.743 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:54:09 compute-1 podman[288140]: 2025-10-02 12:54:09.746546992 +0000 UTC m=+0.144215745 container start b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.768 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.774 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:09 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : New worker (288161) forked
Oct 02 12:54:09 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : Loading success.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.782 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.782 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.783 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.783 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.784 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.784 2 DEBUG nova.virt.libvirt.driver [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.816 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.817 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7344809, a1e0932b-16b6-46b9-8192-b89b91e91802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.817 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Paused (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.871 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.873 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409649.7373836, a1e0932b-16b6-46b9-8192-b89b91e91802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.873 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Resumed (Lifecycle Event)
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.878 2 INFO nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 7.66 seconds to spawn the instance on the hypervisor.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.878 2 DEBUG nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.888 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4272MB free_disk=20.90142822265625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.889 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.890 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.891 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.895 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.921 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.970 2 INFO nova.compute.manager [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 9.04 seconds to build instance.
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.988 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.989 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1e0932b-16b6-46b9-8192-b89b91e91802 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.989 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.990 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:54:09 compute-1 nova_compute[230518]: 2025-10-02 12:54:09.994 2 DEBUG oslo_concurrency.lockutils [None req-cd0d5ece-da7b-437b-88c7-b0467ec59db9 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.037 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/498936511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2341730290' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4040131513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2087824584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.484 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.490 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:10.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.507 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.552 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.553 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.595 2 DEBUG nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.596 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG oslo_concurrency.lockutils [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.597 2 DEBUG nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:10 compute-1 nova_compute[230518]: 2025-10-02 12:54:10.598 2 WARNING nova.compute.manager [req-ecb9aa0d-d604-4493-a2ed-ce7d7bbe3483 req-4b94c397-6b6d-42b6-9b6d-8d738b401154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received unexpected event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with vm_state active and task_state None.
Oct 02 12:54:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:10.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:11 compute-1 ceph-mon[80926]: pgmap v2422: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 6.2 MiB/s wr, 158 op/s
Oct 02 12:54:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2087824584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.417 2 DEBUG nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.418 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.419 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.419 2 DEBUG oslo_concurrency.lockutils [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.420 2 DEBUG nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:11 compute-1 nova_compute[230518]: 2025-10-02 12:54:11.420 2 WARNING nova.compute.manager [req-e6575dc1-a797-4056-aba6-aa12c461f9fc req-02016bfc-b194-489f-b4f4-70da90e1d592 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state None.
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:12.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.553 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:54:12 compute-1 NetworkManager[44960]: <info>  [1759409652.5749] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Oct 02 12:54:12 compute-1 NetworkManager[44960]: <info>  [1759409652.5759] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:12 compute-1 ovn_controller[129257]: 2025-10-02T12:54:12Z|00611|binding|INFO|Releasing lport 3c77c977-5d61-489e-bb52-04238449aff2 from this chassis (sb_readonly=0)
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:12.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:12 compute-1 ovn_controller[129257]: 2025-10-02T12:54:12Z|00612|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.958 2 DEBUG nova.compute.manager [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.958 2 DEBUG nova.compute.manager [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:12 compute-1 nova_compute[230518]: 2025-10-02 12:54:12.959 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:13 compute-1 nova_compute[230518]: 2025-10-02 12:54:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:13 compute-1 ceph-mon[80926]: pgmap v2423: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 206 op/s
Oct 02 12:54:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3033829171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:14 compute-1 nova_compute[230518]: 2025-10-02 12:54:14.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:14.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1352908275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:15 compute-1 ceph-mon[80926]: pgmap v2424: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 194 op/s
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.336 2 DEBUG nova.compute.manager [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG nova.compute.manager [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:16 compute-1 nova_compute[230518]: 2025-10-02 12:54:16.337 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:17 compute-1 ceph-mon[80926]: pgmap v2425: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Oct 02 12:54:17 compute-1 nova_compute[230518]: 2025-10-02 12:54:17.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:17 compute-1 nova_compute[230518]: 2025-10-02 12:54:17.674 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:17 compute-1 nova_compute[230518]: 2025-10-02 12:54:17.674 2 DEBUG nova.network.neutron [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:17 compute-1 nova_compute[230518]: 2025-10-02 12:54:17.700 2 DEBUG oslo_concurrency.lockutils [req-f7f2550d-2171-47ea-bac9-c30c03980f97 req-4c8bea1b-f2b4-43a0-a2d0-80136a976152 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:18 compute-1 nova_compute[230518]: 2025-10-02 12:54:18.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:18 compute-1 nova_compute[230518]: 2025-10-02 12:54:18.726 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:18 compute-1 nova_compute[230518]: 2025-10-02 12:54:18.726 2 DEBUG nova.network.neutron [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:18 compute-1 nova_compute[230518]: 2025-10-02 12:54:18.756 2 DEBUG oslo_concurrency.lockutils [req-aa98038b-7f5d-45ff-9a83-6818c3109107 req-8f6ccd22-4233-4154-ae79-cb32a4998540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:19 compute-1 nova_compute[230518]: 2025-10-02 12:54:19.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:19 compute-1 ceph-mon[80926]: pgmap v2426: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 196 op/s
Oct 02 12:54:20 compute-1 nova_compute[230518]: 2025-10-02 12:54:20.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.189 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:54:21 compute-1 nova_compute[230518]: 2025-10-02 12:54:21.190 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:21 compute-1 ceph-mon[80926]: pgmap v2427: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 50 KiB/s wr, 157 op/s
Oct 02 12:54:22 compute-1 sudo[288193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:22 compute-1 sudo[288193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-1 sudo[288193]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-1 nova_compute[230518]: 2025-10-02 12:54:22.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:22 compute-1 sudo[288218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:54:22 compute-1 sudo[288218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-1 sudo[288218]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-1 sudo[288243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:22 compute-1 nova_compute[230518]: 2025-10-02 12:54:22.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:22 compute-1 sudo[288243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-1 sudo[288243]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-1 sudo[288268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:54:22 compute-1 sudo[288268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:22.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:22 compute-1 sudo[288268]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:22 compute-1 nova_compute[230518]: 2025-10-02 12:54:22.971 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:22 compute-1 nova_compute[230518]: 2025-10-02 12:54:22.993 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:22 compute-1 nova_compute[230518]: 2025-10-02 12:54:22.993 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:54:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:23 compute-1 ceph-mon[80926]: pgmap v2428: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 852 KiB/s wr, 172 op/s
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:54:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:54:24 compute-1 nova_compute[230518]: 2025-10-02 12:54:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:24 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct 02 12:54:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:24.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:24.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:25 compute-1 ovn_controller[129257]: 2025-10-02T12:54:25Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:9e:90 10.100.0.5
Oct 02 12:54:25 compute-1 ovn_controller[129257]: 2025-10-02T12:54:25Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:9e:90 10.100.0.5
Oct 02 12:54:25 compute-1 ceph-mon[80926]: pgmap v2429: 305 pgs: 305 active+clean; 265 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 827 KiB/s wr, 70 op/s
Oct 02 12:54:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.951 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:25 compute-1 ovn_controller[129257]: 2025-10-02T12:54:25Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:41:1c 10.100.0.7
Oct 02 12:54:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:25 compute-1 ovn_controller[129257]: 2025-10-02T12:54:25Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:41:1c 10.100.0.7
Oct 02 12:54:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:26.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:26.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:26 compute-1 nova_compute[230518]: 2025-10-02 12:54:26.988 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:54:27 compute-1 nova_compute[230518]: 2025-10-02 12:54:27.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:27 compute-1 ceph-mon[80926]: pgmap v2430: 305 pgs: 305 active+clean; 286 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 93 op/s
Oct 02 12:54:27 compute-1 podman[288325]: 2025-10-02 12:54:27.80602397 +0000 UTC m=+0.048606426 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:54:27 compute-1 podman[288324]: 2025-10-02 12:54:27.836462744 +0000 UTC m=+0.084276234 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 02 12:54:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:28.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4280822267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:29 compute-1 nova_compute[230518]: 2025-10-02 12:54:29.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:29 compute-1 ceph-mon[80926]: pgmap v2431: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 4.1 MiB/s wr, 90 op/s
Oct 02 12:54:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:30.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:30.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:30 compute-1 ceph-mon[80926]: pgmap v2432: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 564 KiB/s rd, 4.9 MiB/s wr, 122 op/s
Oct 02 12:54:31 compute-1 nova_compute[230518]: 2025-10-02 12:54:31.657 2 INFO nova.compute.manager [None req-60d8673b-4135-4183-a6b7-61d918ee6dc3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Get console output
Oct 02 12:54:31 compute-1 nova_compute[230518]: 2025-10-02 12:54:31.664 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.121 2 INFO nova.compute.manager [None req-548e69c7-4b4a-4f8b-90e5-53d38bf68e87 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Get console output
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.126 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.485 2 INFO nova.compute.manager [None req-dec6a418-f356-4db7-ab51-369eff8a0178 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.491 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:54:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:32.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:32 compute-1 sudo[288367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:54:32 compute-1 sudo[288367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:32 compute-1 sudo[288367]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:32 compute-1 sudo[288392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:54:32 compute-1 sudo[288392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:54:32 compute-1 sudo[288392]: pam_unix(sudo:session): session closed for user root
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.964 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.965 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.965 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.966 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.966 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.967 2 INFO nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Terminating instance
Oct 02 12:54:32 compute-1 nova_compute[230518]: 2025-10-02 12:54:32.968 2 DEBUG nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:54:33 compute-1 kernel: tapf67b8436-7e (unregistering): left promiscuous mode
Oct 02 12:54:33 compute-1 NetworkManager[44960]: <info>  [1759409673.0304] device (tapf67b8436-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG nova.compute.manager [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG nova.compute.manager [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing instance network info cache due to event network-changed-f67b8436-7ef3-4d35-814c-3d62c9a9fec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.035 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.036 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.036 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Refreshing network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 ovn_controller[129257]: 2025-10-02T12:54:33Z|00613|binding|INFO|Releasing lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 from this chassis (sb_readonly=0)
Oct 02 12:54:33 compute-1 ovn_controller[129257]: 2025-10-02T12:54:33Z|00614|binding|INFO|Setting lport f67b8436-7ef3-4d35-814c-3d62c9a9fec0 down in Southbound
Oct 02 12:54:33 compute-1 ovn_controller[129257]: 2025-10-02T12:54:33Z|00615|binding|INFO|Removing iface tapf67b8436-7e ovn-installed in OVS
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.051 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:9e:90 10.100.0.5'], port_security=['fa:16:3e:49:9e:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '95fd2a5f-82d9-46eb-b218-cb0a9a4e2765', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-83bbba21-a002-4973-9f29-252bf270271b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '33cdd8ea-2edc-4577-8235-b9f26b2b357e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ddb9b596-76b5-41a0-897b-33f6aa75f8df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=f67b8436-7ef3-4d35-814c-3d62c9a9fec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.053 138374 INFO neutron.agent.ovn.metadata.agent [-] Port f67b8436-7ef3-4d35-814c-3d62c9a9fec0 in datapath 83bbba21-a002-4973-9f29-252bf270271b unbound from our chassis
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.054 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 83bbba21-a002-4973-9f29-252bf270271b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.055 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d34a2138-06e7-441d-9456-8e03d31784d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.056 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-83bbba21-a002-4973-9f29-252bf270271b namespace which is not needed anymore
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct 02 12:54:33 compute-1 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000091.scope: Consumed 14.386s CPU time.
Oct 02 12:54:33 compute-1 systemd-machined[188247]: Machine qemu-70-instance-00000091 terminated.
Oct 02 12:54:33 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : haproxy version is 2.8.14-c23fe91
Oct 02 12:54:33 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [NOTICE]   (287986) : path to executable is /usr/sbin/haproxy
Oct 02 12:54:33 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [WARNING]  (287986) : Exiting Master process...
Oct 02 12:54:33 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [ALERT]    (287986) : Current worker (287999) exited with code 143 (Terminated)
Oct 02 12:54:33 compute-1 neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b[287969]: [WARNING]  (287986) : All workers exited. Exiting... (0)
Oct 02 12:54:33 compute-1 systemd[1]: libpod-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope: Deactivated successfully.
Oct 02 12:54:33 compute-1 podman[288440]: 2025-10-02 12:54:33.196453514 +0000 UTC m=+0.047214072 container died dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.212 2 INFO nova.virt.libvirt.driver [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Instance destroyed successfully.
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.213 2 DEBUG nova.objects.instance [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:54:33 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f-userdata-shm.mount: Deactivated successfully.
Oct 02 12:54:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-b2b6de8b1fd368c961dc50bb0264389c886212a783acf9ddb0316ed806440300-merged.mount: Deactivated successfully.
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.226 2 DEBUG nova.virt.libvirt.vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1578945059',display_name='tempest-TestNetworkBasicOps-server-1578945059',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1578945059',id=145,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIapgtoYYA99v/havIqzWJIpGUB3YFecKSBWtCRKGjdMREGYtjf88dqlPzDhmBAjJKkZ8FXg34wGrVA0nOzc0vpbWnm6iI8pilmg2YswkyP+t9hDFvcoygSa2fIr9gTuTw==',key_name='tempest-TestNetworkBasicOps-1376701642',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-o7vn37r9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:09Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=95fd2a5f-82d9-46eb-b218-cb0a9a4e2765,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.226 2 DEBUG nova.network.os_vif_util [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.227 2 DEBUG nova.network.os_vif_util [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.227 2 DEBUG os_vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf67b8436-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.237 2 INFO os_vif [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:9e:90,bridge_name='br-int',has_traffic_filtering=True,id=f67b8436-7ef3-4d35-814c-3d62c9a9fec0,network=Network(83bbba21-a002-4973-9f29-252bf270271b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf67b8436-7e')
Oct 02 12:54:33 compute-1 podman[288440]: 2025-10-02 12:54:33.241002851 +0000 UTC m=+0.091763409 container cleanup dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:54:33 compute-1 systemd[1]: libpod-conmon-dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f.scope: Deactivated successfully.
Oct 02 12:54:33 compute-1 podman[288496]: 2025-10-02 12:54:33.319000549 +0000 UTC m=+0.047956836 container remove dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.327 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48d915f8-cbac-44da-85af-39c360029aee]: (4, ('Thu Oct  2 12:54:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b (dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f)\ndcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f\nThu Oct  2 12:54:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-83bbba21-a002-4973-9f29-252bf270271b (dcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f)\ndcc1b9cb27557c658fb36ed8f47ef8a16ec360a5f6a0b686acb027283a8c6b4f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.330 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9ac330-c7f0-4fc7-9d97-ae573a375a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.331 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83bbba21-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 kernel: tap83bbba21-a0: left promiscuous mode
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.352 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[22393f4c-e31f-420d-bdc4-d21667605fec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.385 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f89790-d461-4a9a-a74a-ec14c85a00d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.387 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ddbbc5-0894-4d31-b3eb-23ac22aa6096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.410 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG oslo_concurrency.lockutils [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:33 compute-1 nova_compute[230518]: 2025-10-02 12:54:33.411 2 DEBUG nova.compute.manager [req-4e741438-5606-4032-8e9c-47237d81d16a req-036c2059-88e6-41fb-8352-51056f412844 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-unplugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.411 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0b07b7-de7a-48e5-9a95-c0a834995b2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751169, 'reachable_time': 32152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288514, 'error': None, 'target': 'ovnmeta-83bbba21-a002-4973-9f29-252bf270271b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.416 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-83bbba21-a002-4973-9f29-252bf270271b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:54:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:33.416 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc86ec4-e8de-4e10-9089-960c0720f8d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:33 compute-1 systemd[1]: run-netns-ovnmeta\x2d83bbba21\x2da002\x2d4973\x2d9f29\x2d252bf270271b.mount: Deactivated successfully.
Oct 02 12:54:33 compute-1 ceph-mon[80926]: pgmap v2433: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 647 KiB/s rd, 6.0 MiB/s wr, 150 op/s
Oct 02 12:54:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:54:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/433984474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.454 2 INFO nova.virt.libvirt.driver [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deleting instance files /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_del
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.455 2 INFO nova.virt.libvirt.driver [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deletion of /var/lib/nova/instances/95fd2a5f-82d9-46eb-b218-cb0a9a4e2765_del complete
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.511 2 INFO nova.compute.manager [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 1.54 seconds to destroy the instance on the hypervisor.
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.512 2 DEBUG oslo.service.loopingcall [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.512 2 DEBUG nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:54:34 compute-1 nova_compute[230518]: 2025-10-02 12:54:34.513 2 DEBUG nova.network.neutron [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:54:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2457786826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.528 2 DEBUG nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.529 2 DEBUG oslo_concurrency.lockutils [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.530 2 DEBUG nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] No waiting events found dispatching network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.530 2 WARNING nova.compute.manager [req-248ce492-3953-43e3-9fc8-fea991e2b887 req-c73ace15-28ba-429b-bc53-91cfb352af13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received unexpected event network-vif-plugged-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 for instance with vm_state active and task_state deleting.
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.589 2 INFO nova.compute.manager [None req-2a35d582-da9f-448a-86ed-e3f233f32d76 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output
Oct 02 12:54:35 compute-1 nova_compute[230518]: 2025-10-02 12:54:35.594 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:54:35 compute-1 ceph-mon[80926]: pgmap v2434: 305 pgs: 305 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 5.2 MiB/s wr, 131 op/s
Oct 02 12:54:36 compute-1 nova_compute[230518]: 2025-10-02 12:54:36.251 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updated VIF entry in instance network info cache for port f67b8436-7ef3-4d35-814c-3d62c9a9fec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:36 compute-1 nova_compute[230518]: 2025-10-02 12:54:36.251 2 DEBUG nova.network.neutron [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [{"id": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "address": "fa:16:3e:49:9e:90", "network": {"id": "83bbba21-a002-4973-9f29-252bf270271b", "bridge": "br-int", "label": "tempest-network-smoke--636136101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf67b8436-7e", "ovs_interfaceid": "f67b8436-7ef3-4d35-814c-3d62c9a9fec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:36 compute-1 nova_compute[230518]: 2025-10-02 12:54:36.279 2 DEBUG oslo_concurrency.lockutils [req-0b4a0d23-a5c0-4c59-9a5c-3cf3e5f4665e req-3806805c-7ded-4520-874c-6c27082504d4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:36 compute-1 ceph-mon[80926]: pgmap v2435: 305 pgs: 305 active+clean; 342 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 619 KiB/s rd, 5.2 MiB/s wr, 143 op/s
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.208 2 DEBUG nova.network.neutron [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.226 2 INFO nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Took 2.71 seconds to deallocate network for instance.
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.270 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.270 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.332 2 DEBUG oslo_concurrency.processutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.575 2 DEBUG nova.compute.manager [req-989ad7a8-1354-4551-8b8a-938561ba9b3e req-ab1f2182-3b18-496f-b9b4-b54c1864aaf8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Received event network-vif-deleted-f67b8436-7ef3-4d35-814c-3d62c9a9fec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:54:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/365730846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.767 2 DEBUG oslo_concurrency.processutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.772 2 DEBUG nova.compute.provider_tree [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:54:37 compute-1 podman[288536]: 2025-10-02 12:54:37.798000448 +0000 UTC m=+0.053342285 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid)
Oct 02 12:54:37 compute-1 podman[288537]: 2025-10-02 12:54:37.798128272 +0000 UTC m=+0.051042002 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.808 2 DEBUG nova.scheduler.client.report [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.836 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.867 2 INFO nova.scheduler.client.report [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765
Oct 02 12:54:37 compute-1 nova_compute[230518]: 2025-10-02 12:54:37.935 2 DEBUG oslo_concurrency.lockutils [None req-c56b217c-1a9d-4a21-b788-4c6c94e75801 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "95fd2a5f-82d9-46eb-b218-cb0a9a4e2765" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/365730846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:38 compute-1 nova_compute[230518]: 2025-10-02 12:54:38.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:54:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:54:39 compute-1 nova_compute[230518]: 2025-10-02 12:54:39.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:39 compute-1 ceph-mon[80926]: pgmap v2436: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 589 KiB/s rd, 3.7 MiB/s wr, 147 op/s
Oct 02 12:54:39 compute-1 nova_compute[230518]: 2025-10-02 12:54:39.309 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:39 compute-1 nova_compute[230518]: 2025-10-02 12:54:39.309 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:39 compute-1 nova_compute[230518]: 2025-10-02 12:54:39.310 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:54:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1167518381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:40.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:41 compute-1 nova_compute[230518]: 2025-10-02 12:54:41.444 2 DEBUG nova.network.neutron [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:41 compute-1 nova_compute[230518]: 2025-10-02 12:54:41.472 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:41 compute-1 nova_compute[230518]: 2025-10-02 12:54:41.605 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 12:54:41 compute-1 nova_compute[230518]: 2025-10-02 12:54:41.605 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating file /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 12:54:41 compute-1 nova_compute[230518]: 2025-10-02 12:54:41.606 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:41 compute-1 ceph-mon[80926]: pgmap v2437: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1011 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.047 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp" returned: 1 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.048 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/ece1e349863b4f0aa857a271f88ed61c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.048 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Creating directory /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.049 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:54:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:42.120 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:42.121 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.254 2 DEBUG oslo_concurrency.processutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:54:42 compute-1 nova_compute[230518]: 2025-10-02 12:54:42.258 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:54:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:43.123 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:43 compute-1 nova_compute[230518]: 2025-10-02 12:54:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:43 compute-1 ovn_controller[129257]: 2025-10-02T12:54:43Z|00616|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct 02 12:54:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:43 compute-1 nova_compute[230518]: 2025-10-02 12:54:43.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:43 compute-1 ceph-mon[80926]: pgmap v2438: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 kernel: tap20204810-ff (unregistering): left promiscuous mode
Oct 02 12:54:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:44 compute-1 NetworkManager[44960]: <info>  [1759409684.5588] device (tap20204810-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 ovn_controller[129257]: 2025-10-02T12:54:44Z|00617|binding|INFO|Releasing lport 20204810-ff47-450e-80e5-23d03b435455 from this chassis (sb_readonly=0)
Oct 02 12:54:44 compute-1 ovn_controller[129257]: 2025-10-02T12:54:44Z|00618|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 down in Southbound
Oct 02 12:54:44 compute-1 ovn_controller[129257]: 2025-10-02T12:54:44Z|00619|binding|INFO|Removing iface tap20204810-ff ovn-installed in OVS
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.618 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.619 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.620 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a39243cb-5286-4429-8879-7b4d535de128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d2641b3e-1a26-4a45-aded-320885fdd230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.621 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace which is not needed anymore
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 02 12:54:44 compute-1 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000092.scope: Consumed 13.947s CPU time.
Oct 02 12:54:44 compute-1 systemd-machined[188247]: Machine qemu-71-instance-00000092 terminated.
Oct 02 12:54:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : haproxy version is 2.8.14-c23fe91
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [NOTICE]   (288159) : path to executable is /usr/sbin/haproxy
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : Exiting Master process...
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : Exiting Master process...
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [ALERT]    (288159) : Current worker (288161) exited with code 143 (Terminated)
Oct 02 12:54:44 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288155]: [WARNING]  (288159) : All workers exited. Exiting... (0)
Oct 02 12:54:44 compute-1 systemd[1]: libpod-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope: Deactivated successfully.
Oct 02 12:54:44 compute-1 podman[288605]: 2025-10-02 12:54:44.744465607 +0000 UTC m=+0.044928280 container died b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 12:54:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1011278444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699-userdata-shm.mount: Deactivated successfully.
Oct 02 12:54:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-134e48fef8ee40dfc4155c6ec896c104676820b240dcee32a40405e7898ee537-merged.mount: Deactivated successfully.
Oct 02 12:54:44 compute-1 podman[288605]: 2025-10-02 12:54:44.787399504 +0000 UTC m=+0.087862177 container cleanup b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:54:44 compute-1 systemd[1]: libpod-conmon-b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699.scope: Deactivated successfully.
Oct 02 12:54:44 compute-1 podman[288646]: 2025-10-02 12:54:44.848402608 +0000 UTC m=+0.039414068 container remove b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.856 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0c40e2-3cbc-4ba3-a793-3659f7484ac8]: (4, ('Thu Oct  2 12:54:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699)\nb86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699\nThu Oct  2 12:54:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (b86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699)\nb86c35cf61108b8e4c4158e52cffc23c6893ab70d528fda02449d4e6d8928699\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e80bc918-6f05-4328-8be0-f421ab26c9aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.858 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 kernel: tapa39243cb-50: left promiscuous mode
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.882 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c219a2ca-9dec-491e-9006-8b6e2e40b225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.887 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG oslo_concurrency.lockutils [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.888 2 DEBUG nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:44 compute-1 nova_compute[230518]: 2025-10-02 12:54:44.889 2 WARNING nova.compute.manager [req-9915cf29-8d84-4ae8-80b2-1141c5778b7a req-cff4d1f8-a6b3-4beb-8759-12f17bb3d836 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrating.
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.919 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57ec8569-e4fb-40f7-a3d0-802225415d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.921 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[322fb02d-9d53-4f22-9036-8c7ab1ab4ff4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.936 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[759df8f8-cac0-407a-b9ba-0e155bcba0c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 751256, 'reachable_time': 37246, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288667, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:44 compute-1 systemd[1]: run-netns-ovnmeta\x2da39243cb\x2d5286\x2d4429\x2d8879\x2d7b4d535de128.mount: Deactivated successfully.
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.939 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:54:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:54:44.939 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[764c2134-31c3-4fb2-8155-51f9197e85c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.272 2 INFO nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance shutdown successfully after 3 seconds.
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.276 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance destroyed successfully.
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.277 2 DEBUG nova.virt.libvirt.vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:38Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.277 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1767599231", "vif_mac": "fa:16:3e:5b:41:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.278 2 DEBUG nova.network.os_vif_util [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.278 2 DEBUG os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20204810-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.284 2 INFO os_vif [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.288 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.288 2 DEBUG nova.virt.libvirt.driver [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.508 2 DEBUG neutronclient.v2_0.client [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 20204810-ff47-450e-80e5-23d03b435455 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:45 compute-1 nova_compute[230518]: 2025-10-02 12:54:45.644 2 DEBUG oslo_concurrency.lockutils [None req-759ecf12-cf90-4790-8030-5b39089a212e adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:45 compute-1 ceph-mon[80926]: pgmap v2439: 305 pgs: 305 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 113 op/s
Oct 02 12:54:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:46 compute-1 nova_compute[230518]: 2025-10-02 12:54:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:46 compute-1 ceph-mon[80926]: pgmap v2440: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 118 op/s
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.032 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.033 2 WARNING nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_migrated.
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.033 2 DEBUG nova.compute.manager [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:47 compute-1 nova_compute[230518]: 2025-10-02 12:54:47.034 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409673.2091131, 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.210 2 INFO nova.compute.manager [-] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] VM Stopped (Lifecycle Event)
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.239 2 DEBUG nova.compute.manager [None req-2633ffeb-9291-49f2-9146-50aa78471ff8 - - - - - -] [instance: 95fd2a5f-82d9-46eb-b218-cb0a9a4e2765] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.613 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.614 2 DEBUG nova.network.neutron [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:54:48 compute-1 nova_compute[230518]: 2025-10-02 12:54:48.631 2 DEBUG oslo_concurrency.lockutils [req-2ad33e2f-8b5b-4af4-9962-a202179d81b4 req-ef6524cd-a6d5-48e2-8383-05b678025db3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:54:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:48.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:49 compute-1 nova_compute[230518]: 2025-10-02 12:54:49.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:49 compute-1 nova_compute[230518]: 2025-10-02 12:54:49.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:49 compute-1 ceph-mon[80926]: pgmap v2441: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 47 KiB/s wr, 122 op/s
Oct 02 12:54:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 e340: 3 total, 3 up, 3 in
Oct 02 12:54:49 compute-1 nova_compute[230518]: 2025-10-02 12:54:49.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:50 compute-1 nova_compute[230518]: 2025-10-02 12:54:50.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:50 compute-1 ceph-mon[80926]: osdmap e340: 3 total, 3 up, 3 in
Oct 02 12:54:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:50.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:51 compute-1 ceph-mon[80926]: pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 23 KiB/s wr, 81 op/s
Oct 02 12:54:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3158757264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/121809429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.624 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.625 2 DEBUG oslo_concurrency.lockutils [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.626 2 DEBUG nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:51 compute-1 nova_compute[230518]: 2025-10-02 12:54:51.626 2 WARNING nova.compute.manager [req-3fa934aa-511d-4c6b-ac44-de379ff728d3 req-007a3f24-8638-4662-a6f2-c89350b7aaa3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state resize_finish.
Oct 02 12:54:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:53 compute-1 ceph-mon[80926]: pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.957 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.957 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG oslo_concurrency.lockutils [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.958 2 DEBUG nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:53 compute-1 nova_compute[230518]: 2025-10-02 12:54:53.958 2 WARNING nova.compute.manager [req-75fc5431-47c0-4044-96a1-c043f75fbfaf req-48c3f043-f923-41c8-95f7-869ccaf4f572 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:54:54 compute-1 nova_compute[230518]: 2025-10-02 12:54:54.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:54.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:55 compute-1 nova_compute[230518]: 2025-10-02 12:54:55.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:55 compute-1 ceph-mon[80926]: pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 31 KiB/s wr, 50 op/s
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.340 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.341 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.341 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.342 2 DEBUG oslo_concurrency.lockutils [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.342 2 DEBUG nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:56 compute-1 nova_compute[230518]: 2025-10-02 12:54:56.342 2 WARNING nova.compute.manager [req-6eb7bb35-d8e2-4fb1-8702-d4651af21f89 req-a0e4a89a-c40d-4470-b986-0d08cf99fa4a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:54:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:56.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:57 compute-1 nova_compute[230518]: 2025-10-02 12:54:57.101 2 INFO nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Swapping old allocation on dict_keys(['730da6ce-9754-46f0-88e3-0019d056443f']) held by migration 967e2439-1b81-4fe0-baf4-48b7e3d12a87 for instance
Oct 02 12:54:57 compute-1 nova_compute[230518]: 2025-10-02 12:54:57.130 2 DEBUG nova.scheduler.client.report [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Overwriting current allocation {'allocations': {'f694d536-1dcd-4bb3-8516-534a40cdf6d7': {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}, 'generation': 72}}, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'consumer_generation': 1} on consumer a1e0932b-16b6-46b9-8192-b89b91e91802 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Oct 02 12:54:57 compute-1 ceph-mon[80926]: pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 759 KiB/s rd, 30 KiB/s wr, 67 op/s
Oct 02 12:54:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/141177385' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.212 2 INFO nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating port 20204810-ff47-450e-80e5-23d03b435455 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 12:54:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.451 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG oslo_concurrency.lockutils [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.452 2 DEBUG nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:54:58 compute-1 nova_compute[230518]: 2025-10-02 12:54:58.453 2 WARNING nova.compute.manager [req-e19d0b3f-0539-400b-bf53-edf6a3d00321 req-934d0337-b7e2-45d0-9522-2d49ece416e5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state resized and task_state resize_reverting.
Oct 02 12:54:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:54:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:54:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:54:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:54:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:54:58 compute-1 podman[288670]: 2025-10-02 12:54:58.813494951 +0000 UTC m=+0.059542689 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:54:58 compute-1 ceph-mon[80926]: pgmap v2447: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 10 KiB/s wr, 105 op/s
Oct 02 12:54:58 compute-1 podman[288669]: 2025-10-02 12:54:58.873210704 +0000 UTC m=+0.118120627 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.067 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.068 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.068 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.195 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.196 2 DEBUG nova.compute.manager [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.196 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.805 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409684.803514, a1e0932b-16b6-46b9-8192-b89b91e91802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.805 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Stopped (Lifecycle Event)
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.833 2 DEBUG nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.835 2 DEBUG nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:54:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2007440849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:54:59 compute-1 nova_compute[230518]: 2025-10-02 12:54:59.864 2 INFO nova.compute.manager [None req-3a05067b-1e59-472d-bb50-920355bcd72a - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:55:00 compute-1 nova_compute[230518]: 2025-10-02 12:55:00.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:00.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:00 compute-1 ceph-mon[80926]: pgmap v2448: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 9.5 KiB/s wr, 98 op/s
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.403 2 DEBUG nova.network.neutron [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.431 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.432 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.463 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.464 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.508 2 DEBUG nova.storage.rbd_utils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rolling back rbd image(a1e0932b-16b6-46b9-8192-b89b91e91802_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Oct 02 12:55:01 compute-1 nova_compute[230518]: 2025-10-02 12:55:01.658 2 DEBUG nova.storage.rbd_utils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] removing snapshot(nova-resize) on rbd image(a1e0932b-16b6-46b9-8192-b89b91e91802_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 12:55:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 e341: 3 total, 3 up, 3 in
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.157 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start _get_guest_xml network_info=[{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.160 2 WARNING nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.166 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.167 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.178 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.179 2 DEBUG nova.virt.libvirt.host [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.180 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.180 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.181 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.181 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.182 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.183 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.virt.hardware [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.184 2 DEBUG nova.objects.instance [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.202 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3427044163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.616 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:02 compute-1 nova_compute[230518]: 2025-10-02 12:55:02.657 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:02.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1586802724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.071 2 DEBUG oslo_concurrency.processutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.073 2 DEBUG nova.virt.libvirt.vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:53Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.073 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.074 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.076 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <uuid>a1e0932b-16b6-46b9-8192-b89b91e91802</uuid>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <name>instance-00000092</name>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1723654799</nova:name>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:55:02</nova:creationTime>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <nova:port uuid="20204810-ff47-450e-80e5-23d03b435455">
Oct 02 12:55:03 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <system>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="serial">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="uuid">a1e0932b-16b6-46b9-8192-b89b91e91802</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </system>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <os>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </os>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <features>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </features>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk">
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/a1e0932b-16b6-46b9-8192-b89b91e91802_disk.config">
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:5b:41:1c"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <target dev="tap20204810-ff"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802/console.log" append="off"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <video>
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </video>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:55:03 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:55:03 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:55:03 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:55:03 compute-1 nova_compute[230518]: </domain>
Oct 02 12:55:03 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Preparing to wait for external event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.078 2 DEBUG oslo_concurrency.lockutils [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.079 2 DEBUG nova.virt.libvirt.vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:54:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:54:53Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.079 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.080 2 DEBUG nova.network.os_vif_util [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.080 2 DEBUG os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20204810-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.085 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap20204810-ff, col_values=(('external_ids', {'iface-id': '20204810-ff47-450e-80e5-23d03b435455', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:41:1c', 'vm-uuid': 'a1e0932b-16b6-46b9-8192-b89b91e91802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.0881] manager: (tap20204810-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.092 2 INFO os_vif [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.148 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.148 2 DEBUG nova.network.neutron [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.168 2 DEBUG oslo_concurrency.lockutils [req-dc3d6024-e00e-45e3-913e-bbddd206626d req-13267798-9262-47a7-8748-9ddff8ca529b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:03 compute-1 kernel: tap20204810-ff: entered promiscuous mode
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.2986] manager: (tap20204810-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/289)
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_controller[129257]: 2025-10-02T12:55:03Z|00620|binding|INFO|Claiming lport 20204810-ff47-450e-80e5-23d03b435455 for this chassis.
Oct 02 12:55:03 compute-1 ovn_controller[129257]: 2025-10-02T12:55:03Z|00621|binding|INFO|20204810-ff47-450e-80e5-23d03b435455: Claiming fa:16:3e:5b:41:1c 10.100.0.7
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.3187] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.3192] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.325 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.216'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.326 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 bound to our chassis
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.328 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:55:03 compute-1 systemd-machined[188247]: New machine qemu-72-instance-00000092.
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.342 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2db1d66b-4c12-4625-a57d-011353d34289]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.343 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa39243cb-51 in ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa39243cb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91fb9d62-1280-45b1-a92e-ef6862e812d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.345 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[867b0925-5acd-4f6b-91a6-dab1382fc34d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.358 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[97507291-a7ab-4525-b51a-ff30c06d2f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 systemd[1]: Started Virtual Machine qemu-72-instance-00000092.
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.385 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f53d482-1906-4e17-9bc3-e5b920f5f693]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 systemd-udevd[288847]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.4061] device (tap20204810-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.4071] device (tap20204810-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.424 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1cec32e9-6ed5-40ad-a010-238e7fb68505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ceph-mon[80926]: pgmap v2449: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 111 op/s
Oct 02 12:55:03 compute-1 ceph-mon[80926]: osdmap e341: 3 total, 3 up, 3 in
Oct 02 12:55:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3427044163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.4454] manager: (tapa39243cb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.445 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[895f5148-c7a5-446b-b4a2-5b3deffa0192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.483 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bd20b8e3-49ac-4d69-aaad-3aa02530c734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.487 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cddbe4d0-4411-4269-a709-46c92b835d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.5085] device (tapa39243cb-50): carrier: link connected
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.514 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9838f-b7fc-41d2-8450-e852ed933fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7300e330-50cb-437c-8e79-04c6decb9a41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756706, 'reachable_time': 28614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288877, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e7c963-f142-4ae6-98ea-75e429779e0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6ac8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 756706, 'tstamp': 756706}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288878, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a740d79e-e9b7-47eb-ada6-794897405949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa39243cb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6a:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756706, 'reachable_time': 28614, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288879, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_controller[129257]: 2025-10-02T12:55:03Z|00622|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 ovn-installed in OVS
Oct 02 12:55:03 compute-1 ovn_controller[129257]: 2025-10-02T12:55:03Z|00623|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 up in Southbound
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.587 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a52c270-ca2c-4f33-807e-3025201b8fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.637 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f4c9b3-27f5-428d-92c0-1d53996b0290]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.638 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.639 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa39243cb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 kernel: tapa39243cb-50: entered promiscuous mode
Oct 02 12:55:03 compute-1 NetworkManager[44960]: <info>  [1759409703.6411] manager: (tapa39243cb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.643 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa39243cb-50, col_values=(('external_ids', {'iface-id': '75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_controller[129257]: 2025-10-02T12:55:03Z|00624|binding|INFO|Releasing lport 75b1d4a5-2f3f-44a0-a22e-e6c15fda36d1 from this chassis (sb_readonly=0)
Oct 02 12:55:03 compute-1 nova_compute[230518]: 2025-10-02 12:55:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.659 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.659 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8441d5e2-3c20-4a7b-9010-a7bee41025f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.660 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/a39243cb-5286-4429-8879-7b4d535de128.pid.haproxy
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID a39243cb-5286-4429-8879-7b4d535de128
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:55:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:03.661 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'env', 'PROCESS_TAG=haproxy-a39243cb-5286-4429-8879-7b4d535de128', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a39243cb-5286-4429-8879-7b4d535de128.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:55:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:04 compute-1 podman[288918]: 2025-10-02 12:55:03.980893507 +0000 UTC m=+0.023427257 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:55:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1586802724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:04 compute-1 podman[288918]: 2025-10-02 12:55:04.398355624 +0000 UTC m=+0.440889374 container create 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 12:55:04 compute-1 systemd[1]: Started libpod-conmon-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope.
Oct 02 12:55:04 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:55:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9e319b30d1b4061deb78f71d6770d855f65959fcfb953fc9732e0288e8dbf3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.529 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.529 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG oslo_concurrency.lockutils [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.530 2 DEBUG nova.compute.manager [req-8f05eb5f-97f9-48de-aec0-d53d7ce1c480 req-7b35a5d9-ae41-4c8e-8bd2-9b3ade7fe7be 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Processing event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:04.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:04 compute-1 podman[288918]: 2025-10-02 12:55:04.587711044 +0000 UTC m=+0.630244824 container init 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:04 compute-1 podman[288918]: 2025-10-02 12:55:04.59426983 +0000 UTC m=+0.636803580 container start 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.619 2 DEBUG nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:04 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : New worker (288975) forked
Oct 02 12:55:04 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : Loading success.
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.621 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.619002, a1e0932b-16b6-46b9-8192-b89b91e91802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Started (Lifecycle Event)
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.632 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance running successfully.
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.633 2 DEBUG nova.virt.libvirt.driver [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.646 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.649 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.675 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.675 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.6199708, a1e0932b-16b6-46b9-8192-b89b91e91802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.676 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Paused (Lifecycle Event)
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.704 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.709 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409704.6275783, a1e0932b-16b6-46b9-8192-b89b91e91802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.709 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Resumed (Lifecycle Event)
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.721 2 INFO nova.compute.manager [None req-e390400f-ea61-49b0-b99d-cc9fb1cc8dcf 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance to original state: 'active'
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.730 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.734 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:04 compute-1 nova_compute[230518]: 2025-10-02 12:55:04.770 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Oct 02 12:55:05 compute-1 ceph-mon[80926]: pgmap v2451: 305 pgs: 305 active+clean; 271 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Oct 02 12:55:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:55:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2182082830' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:55:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:06.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.907 2 DEBUG nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG oslo_concurrency.lockutils [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.908 2 DEBUG nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:06 compute-1 nova_compute[230518]: 2025-10-02 12:55:06.909 2 WARNING nova.compute.manager [req-5e231c75-f889-4b9f-974f-4afeb79d8ded req-5dbb104b-8846-4823-9f65-6a0292976bcf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state None.
Oct 02 12:55:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1717806451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:07 compute-1 nova_compute[230518]: 2025-10-02 12:55:07.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:07 compute-1 ceph-mon[80926]: pgmap v2452: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 12:55:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1772510293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:08 compute-1 nova_compute[230518]: 2025-10-02 12:55:08.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:08.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:08 compute-1 podman[288985]: 2025-10-02 12:55:08.811436705 +0000 UTC m=+0.059374014 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:55:08 compute-1 podman[288984]: 2025-10-02 12:55:08.834610911 +0000 UTC m=+0.085367848 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:09 compute-1 ceph-mon[80926]: pgmap v2453: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 110 op/s
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.108 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.109 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393667972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.590 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.842 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:09 compute-1 nova_compute[230518]: 2025-10-02 12:55:09.843 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.026 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.027 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4227MB free_disk=20.87628173828125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.028 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.028 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3776642989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2393667972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.327 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance a1e0932b-16b6-46b9-8192-b89b91e91802 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.328 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.328 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.365 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2956126067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.804 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.810 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:10 compute-1 nova_compute[230518]: 2025-10-02 12:55:10.893 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:11 compute-1 nova_compute[230518]: 2025-10-02 12:55:11.009 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:55:11 compute-1 nova_compute[230518]: 2025-10-02 12:55:11.010 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:11 compute-1 ceph-mon[80926]: pgmap v2454: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct 02 12:55:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3005431859' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2956126067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:11 compute-1 nova_compute[230518]: 2025-10-02 12:55:11.920 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:11 compute-1 nova_compute[230518]: 2025-10-02 12:55:11.921 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:11 compute-1 nova_compute[230518]: 2025-10-02 12:55:11.963 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.067 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.067 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.082 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.082 2 INFO nova.compute.claims [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.258 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2576941608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.677 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.684 2 DEBUG nova.compute.provider_tree [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.767 2 DEBUG nova.scheduler.client.report [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.824 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.824 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:55:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 e342: 3 total, 3 up, 3 in
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.891 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.892 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.920 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:55:12 compute-1 nova_compute[230518]: 2025-10-02 12:55:12.954 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.036 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.037 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.038 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating image(s)
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.084 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.110 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.135 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.139 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.220 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.221 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.222 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.222 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.247 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.251 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b371444-62dc-4270-8164-64eac7dcead4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:13 compute-1 nova_compute[230518]: 2025-10-02 12:55:13.285 2 DEBUG nova.policy [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ea4224783c14b01bd0ff8988a45a5f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:13 compute-1 ceph-mon[80926]: pgmap v2455: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2576941608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:13 compute-1 ceph-mon[80926]: osdmap e342: 3 total, 3 up, 3 in
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.011 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.012 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.012 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.056 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.056 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:14.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.784 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 2b371444-62dc-4270-8164-64eac7dcead4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:14 compute-1 nova_compute[230518]: 2025-10-02 12:55:14.860 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] resizing rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1794191949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:15 compute-1 ceph-mon[80926]: pgmap v2457: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 953 KiB/s wr, 160 op/s
Oct 02 12:55:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/140536109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.178 2 DEBUG nova.objects.instance [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'migration_context' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.192 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.193 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Ensure instance console log exists: /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:15 compute-1 nova_compute[230518]: 2025-10-02 12:55:15.194 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:16.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:16 compute-1 nova_compute[230518]: 2025-10-02 12:55:16.634 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Successfully created port: 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:17 compute-1 nova_compute[230518]: 2025-10-02 12:55:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:17 compute-1 ceph-mon[80926]: pgmap v2458: 305 pgs: 305 active+clean; 307 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 577 KiB/s wr, 190 op/s
Oct 02 12:55:18 compute-1 nova_compute[230518]: 2025-10-02 12:55:18.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:18 compute-1 ovn_controller[129257]: 2025-10-02T12:55:18Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:41:1c 10.100.0.7
Oct 02 12:55:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:18.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:18.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:18 compute-1 nova_compute[230518]: 2025-10-02 12:55:18.861 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Successfully updated port: 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:55:18 compute-1 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:18 compute-1 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquired lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:18 compute-1 nova_compute[230518]: 2025-10-02 12:55:18.887 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:55:19 compute-1 nova_compute[230518]: 2025-10-02 12:55:19.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:19 compute-1 nova_compute[230518]: 2025-10-02 12:55:19.207 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:55:19 compute-1 ceph-mon[80926]: pgmap v2459: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 150 op/s
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.246 2 DEBUG nova.network.neutron [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.261 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Releasing lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.261 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance network_info: |[{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.264 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start _get_guest_xml network_info=[{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.267 2 WARNING nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.274 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.274 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.276 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.277 2 DEBUG nova.virt.libvirt.host [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.278 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.279 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.280 2 DEBUG nova.virt.hardware [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.283 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1742637085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.823 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.857 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.862 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.893 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-changed-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.894 2 DEBUG nova.compute.manager [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Refreshing instance network info cache due to event network-changed-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.894 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.895 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:20 compute-1 nova_compute[230518]: 2025-10-02 12:55:20.895 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Refreshing network info cache for port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3610127590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.292 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.293 2 DEBUG nova.virt.libvirt.vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadataTestJSON-459
689577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.294 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.295 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.296 2 DEBUG nova.objects.instance [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.310 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <uuid>2b371444-62dc-4270-8164-64eac7dcead4</uuid>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <name>instance-00000096</name>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerMetadataTestJSON-server-861404210</nova:name>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:55:20</nova:creationTime>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:user uuid="9ea4224783c14b01bd0ff8988a45a5f2">tempest-ServerMetadataTestJSON-459689577-project-member</nova:user>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:project uuid="b48381b3787c4f3d9bb0c9050cf4c52c">tempest-ServerMetadataTestJSON-459689577</nova:project>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <nova:port uuid="8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4">
Oct 02 12:55:21 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <system>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="serial">2b371444-62dc-4270-8164-64eac7dcead4</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="uuid">2b371444-62dc-4270-8164-64eac7dcead4</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </system>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <os>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </os>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <features>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </features>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/2b371444-62dc-4270-8164-64eac7dcead4_disk">
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/2b371444-62dc-4270-8164-64eac7dcead4_disk.config">
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:68:de:52"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <target dev="tap8d31e365-a7"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/console.log" append="off"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <video>
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </video>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:55:21 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:55:21 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:55:21 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:55:21 compute-1 nova_compute[230518]: </domain>
Oct 02 12:55:21 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.311 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Preparing to wait for external event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.312 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.313 2 DEBUG nova.virt.libvirt.vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadataTestJSON-459689577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:12Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.313 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.314 2 DEBUG nova.network.os_vif_util [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.314 2 DEBUG os_vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d31e365-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d31e365-a7, col_values=(('external_ids', {'iface-id': '8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:de:52', 'vm-uuid': '2b371444-62dc-4270-8164-64eac7dcead4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:21 compute-1 NetworkManager[44960]: <info>  [1759409721.3244] manager: (tap8d31e365-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.335 2 INFO os_vif [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7')
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.554 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.554 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.555 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] No VIF found with MAC fa:16:3e:68:de:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.555 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Using config drive
Oct 02 12:55:21 compute-1 ceph-mon[80926]: pgmap v2460: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Oct 02 12:55:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1742637085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2386159734' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3610127590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:21 compute-1 nova_compute[230518]: 2025-10-02 12:55:21.883 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 12:55:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:22.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 12:55:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4102663619' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.231 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Creating config drive at /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.236 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_gklfp1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.264 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.265 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.372 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps_gklfp1" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.404 2 DEBUG nova.storage.rbd_utils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] rbd image 2b371444-62dc-4270-8164-64eac7dcead4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.409 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config 2b371444-62dc-4270-8164-64eac7dcead4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.771 2 DEBUG oslo_concurrency.processutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config 2b371444-62dc-4270-8164-64eac7dcead4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.772 2 INFO nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deleting local config drive /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4/disk.config because it was imported into RBD.
Oct 02 12:55:23 compute-1 kernel: tap8d31e365-a7: entered promiscuous mode
Oct 02 12:55:23 compute-1 NetworkManager[44960]: <info>  [1759409723.8437] manager: (tap8d31e365-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:23 compute-1 ovn_controller[129257]: 2025-10-02T12:55:23Z|00625|binding|INFO|Claiming lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for this chassis.
Oct 02 12:55:23 compute-1 ovn_controller[129257]: 2025-10-02T12:55:23Z|00626|binding|INFO|8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4: Claiming fa:16:3e:68:de:52 10.100.0.7
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.853 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:de:52 10.100.0.7'], port_security=['fa:16:3e:68:de:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b371444-62dc-4270-8164-64eac7dcead4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f266165c-cf86-4062-8010-5a7ecdec1578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c33b6f9-8dd0-4521-bcdc-04a49781adff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a527214-80f6-4bea-aba9-1aa4b53a782b, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.854 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 in datapath f266165c-cf86-4062-8010-5a7ecdec1578 bound to our chassis
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.856 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f266165c-cf86-4062-8010-5a7ecdec1578
Oct 02 12:55:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:23 compute-1 ovn_controller[129257]: 2025-10-02T12:55:23Z|00627|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 ovn-installed in OVS
Oct 02 12:55:23 compute-1 ovn_controller[129257]: 2025-10-02T12:55:23Z|00628|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 up in Southbound
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.868 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffeea0f1-92da-4486-80e8-14a333831382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.869 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf266165c-c1 in ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.871 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf266165c-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[04402ec6-bf35-408b-8299-d055304657e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 nova_compute[230518]: 2025-10-02 12:55:23.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[245a0de5-7541-44ad-8bc2-d9f768578caf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 systemd-udevd[289393]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:23 compute-1 systemd-machined[188247]: New machine qemu-73-instance-00000096.
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.892 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9179410c-8a9b-4fc8-9c64-1100afea9038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 NetworkManager[44960]: <info>  [1759409723.8947] device (tap8d31e365-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:23 compute-1 NetworkManager[44960]: <info>  [1759409723.8958] device (tap8d31e365-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:23 compute-1 systemd[1]: Started Virtual Machine qemu-73-instance-00000096.
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.913 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c807042-2d1c-4730-bafd-ac60316fd6b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.945 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f72328-333d-4c74-8b4f-06d0677369d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 NetworkManager[44960]: <info>  [1759409723.9530] manager: (tapf266165c-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.953 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc8bc94-2f68-468a-a0d3-4c8c63178f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.986 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[3de9419b-0f9e-43eb-afca-2c06979d8897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:23.990 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c95cb835-ff7e-464c-be8d-819ee6036fe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 NetworkManager[44960]: <info>  [1759409724.0162] device (tapf266165c-c0): carrier: link connected
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.021 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2839345a-8b71-4f09-89d5-0a614790c521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 ceph-mon[80926]: pgmap v2461: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.3 MiB/s wr, 143 op/s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.039 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9263897e-4f35-4467-8582-769c5ac63cbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf266165c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:52:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758757, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289426, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.063 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92375c7b-7bb3-4534-b73f-4cf2f3f955e4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:5225'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758757, 'tstamp': 758757}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289427, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.084 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2943fef-2ec9-40d6-9054-9a1e6b2ee5b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf266165c-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:52:25'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 192], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758757, 'reachable_time': 33756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289428, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.123 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dd97d4b3-31d5-49d8-82f0-1bb706631e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.131 2 INFO nova.compute.manager [None req-6815fa79-2f16-426f-86ad-c4f394a9715a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Get console output
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.137 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.201 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b86c96bb-ff37-4eea-a325-df3142ea38c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.203 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf266165c-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.203 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.204 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf266165c-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 NetworkManager[44960]: <info>  [1759409724.2061] manager: (tapf266165c-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Oct 02 12:55:24 compute-1 kernel: tapf266165c-c0: entered promiscuous mode
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.209 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf266165c-c0, col_values=(('external_ids', {'iface-id': 'efec5d16-2b0c-4379-bb94-6e89b3bd1faf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 ovn_controller[129257]: 2025-10-02T12:55:24Z|00629|binding|INFO|Releasing lport efec5d16-2b0c-4379-bb94-6e89b3bd1faf from this chassis (sb_readonly=0)
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.216 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.218 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c72eab6c-7bfe-4992-a737-c4c10baf97ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.219 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-f266165c-cf86-4062-8010-5a7ecdec1578
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/f266165c-cf86-4062-8010-5a7ecdec1578.pid.haproxy
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID f266165c-cf86-4062-8010-5a7ecdec1578
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:55:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:24.219 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'env', 'PROCESS_TAG=haproxy-f266165c-cf86-4062-8010-5a7ecdec1578', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f266165c-cf86-4062-8010-5a7ecdec1578.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:55:24 compute-1 nova_compute[230518]: 2025-10-02 12:55:24.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:24.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:24 compute-1 podman[289474]: 2025-10-02 12:55:24.586953287 +0000 UTC m=+0.024811169 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:55:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:24.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:25 compute-1 podman[289474]: 2025-10-02 12:55:25.249848745 +0000 UTC m=+0.687706627 container create c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.260 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updated VIF entry in instance network info cache for port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.260 2 DEBUG nova.network.neutron [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [{"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.278 2 DEBUG oslo_concurrency.lockutils [req-11ff02f1-e05b-402b-bb37-cfd8fdfffcda req-1a431179-93e1-4cb7-abdd-42e51a27b145 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-2b371444-62dc-4270-8164-64eac7dcead4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:25 compute-1 systemd[1]: Started libpod-conmon-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope.
Oct 02 12:55:25 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.321 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.320647, 2b371444-62dc-4270-8164-64eac7dcead4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abcf7a3e9396fcd78c2400963121561ebe3f42a3433196532cd89ee9d7c19a83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.322 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Started (Lifecycle Event)
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.351 2 DEBUG nova.compute.manager [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.351 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG oslo_concurrency.lockutils [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.352 2 DEBUG nova.compute.manager [req-36633fc3-9f06-4ae1-a658-f18c8b8177e0 req-183fbb39-1ef0-499e-9632-b7c1c97cf02a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Processing event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.353 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.355 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.357 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.360 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.362 2 INFO nova.virt.libvirt.driver [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance spawned successfully.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.362 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.389 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.389 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.390 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.391 2 DEBUG nova.virt.libvirt.driver [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:25 compute-1 podman[289474]: 2025-10-02 12:55:25.392684985 +0000 UTC m=+0.830542897 container init c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.396 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.396 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.320851, 2b371444-62dc-4270-8164-64eac7dcead4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.397 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Paused (Lifecycle Event)
Oct 02 12:55:25 compute-1 podman[289474]: 2025-10-02 12:55:25.398483237 +0000 UTC m=+0.836341119 container start c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:55:25 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : New worker (289524) forked
Oct 02 12:55:25 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : Loading success.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.424 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.428 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409725.3565223, 2b371444-62dc-4270-8164-64eac7dcead4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.429 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Resumed (Lifecycle Event)
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.451 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:25 compute-1 ceph-mon[80926]: pgmap v2462: 305 pgs: 305 active+clean; 340 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.457 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.460 2 INFO nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 12.42 seconds to spawn the instance on the hypervisor.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.460 2 DEBUG nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.490 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.597 2 INFO nova.compute.manager [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 13.54 seconds to build instance.
Oct 02 12:55:25 compute-1 nova_compute[230518]: 2025-10-02 12:55:25.613 2 DEBUG oslo_concurrency.lockutils [None req-3c67ca08-f333-4bb1-a6fd-47c0c9e31791 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.952 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:26.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.822 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.823 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.823 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.824 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.824 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.825 2 INFO nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Terminating instance
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.826 2 DEBUG nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:55:26 compute-1 kernel: tap20204810-ff (unregistering): left promiscuous mode
Oct 02 12:55:26 compute-1 NetworkManager[44960]: <info>  [1759409726.9596] device (tap20204810-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:26 compute-1 ovn_controller[129257]: 2025-10-02T12:55:26Z|00630|binding|INFO|Releasing lport 20204810-ff47-450e-80e5-23d03b435455 from this chassis (sb_readonly=0)
Oct 02 12:55:26 compute-1 ovn_controller[129257]: 2025-10-02T12:55:26Z|00631|binding|INFO|Setting lport 20204810-ff47-450e-80e5-23d03b435455 down in Southbound
Oct 02 12:55:26 compute-1 ovn_controller[129257]: 2025-10-02T12:55:26Z|00632|binding|INFO|Removing iface tap20204810-ff ovn-installed in OVS
Oct 02 12:55:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.976 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:41:1c 10.100.0.7'], port_security=['fa:16:3e:5b:41:1c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'a1e0932b-16b6-46b9-8192-b89b91e91802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a39243cb-5286-4429-8879-7b4d535de128', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'd1b59c5d-0681-456e-a8d1-b3629344f9b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf84aebf-21d8-4569-891a-417406561224, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=20204810-ff47-450e-80e5-23d03b435455) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.977 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 20204810-ff47-450e-80e5-23d03b435455 in datapath a39243cb-5286-4429-8879-7b4d535de128 unbound from our chassis
Oct 02 12:55:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.979 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a39243cb-5286-4429-8879-7b4d535de128, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:55:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.980 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f682c31b-524b-42f7-8502-f50751377aea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:26.981 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 namespace which is not needed anymore
Oct 02 12:55:26 compute-1 nova_compute[230518]: 2025-10-02 12:55:26.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:27 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct 02 12:55:27 compute-1 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000092.scope: Consumed 13.883s CPU time.
Oct 02 12:55:27 compute-1 systemd-machined[188247]: Machine qemu-72-instance-00000092 terminated.
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.063 2 INFO nova.virt.libvirt.driver [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Instance destroyed successfully.
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.063 2 DEBUG nova.objects.instance [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid a1e0932b-16b6-46b9-8192-b89b91e91802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.085 2 DEBUG nova.compute.manager [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-changed-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.086 2 DEBUG nova.compute.manager [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing instance network info cache due to event network-changed-20204810-ff47-450e-80e5-23d03b435455. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.087 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.098 2 DEBUG nova.virt.libvirt.vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:53:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1723654799',display_name='tempest-TestNetworkAdvancedServerOps-server-1723654799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1723654799',id=146,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNX6YVDsN9ZX5bxWRi+hOcCI5VJa6Q2nedRNQF3bG+0Pznov2NvoOk008+cPv/dH5+9KDDN9Rpi1O2z1pYZSfJd9pzzfPLrJFsvhHAGAb1dgOP5UShntoHoUWnJ4mGisJQ==',key_name='tempest-TestNetworkAdvancedServerOps-520644426',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:04Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-hzwwvz6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:04Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=a1e0932b-16b6-46b9-8192-b89b91e91802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.099 2 DEBUG nova.network.os_vif_util [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.100 2 DEBUG nova.network.os_vif_util [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.100 2 DEBUG os_vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.105 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20204810-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.112 2 INFO os_vif [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:41:1c,bridge_name='br-int',has_traffic_filtering=True,id=20204810-ff47-450e-80e5-23d03b435455,network=Network(a39243cb-5286-4429-8879-7b4d535de128),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap20204810-ff')
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.135 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.158 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.159 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.160 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.160 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Refreshing network info cache for port 20204810-ff47-450e-80e5-23d03b435455 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:27 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : haproxy version is 2.8.14-c23fe91
Oct 02 12:55:27 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [NOTICE]   (288973) : path to executable is /usr/sbin/haproxy
Oct 02 12:55:27 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [WARNING]  (288973) : Exiting Master process...
Oct 02 12:55:27 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [ALERT]    (288973) : Current worker (288975) exited with code 143 (Terminated)
Oct 02 12:55:27 compute-1 neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128[288969]: [WARNING]  (288973) : All workers exited. Exiting... (0)
Oct 02 12:55:27 compute-1 systemd[1]: libpod-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope: Deactivated successfully.
Oct 02 12:55:27 compute-1 podman[289567]: 2025-10-02 12:55:27.432225093 +0000 UTC m=+0.328412565 container died 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.515 2 DEBUG nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.517 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.517 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.518 2 DEBUG oslo_concurrency.lockutils [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.519 2 DEBUG nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:27 compute-1 nova_compute[230518]: 2025-10-02 12:55:27.519 2 WARNING nova.compute.manager [req-b198a043-459a-4ca5-99a1-977b5c6bb113 req-b949a5d9-e29a-4834-a06b-08613da65091 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received unexpected event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with vm_state active and task_state None.
Oct 02 12:55:27 compute-1 ceph-mon[80926]: pgmap v2463: 305 pgs: 305 active+clean; 345 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct 02 12:55:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d9e319b30d1b4061deb78f71d6770d855f65959fcfb953fc9732e0288e8dbf3-merged.mount: Deactivated successfully.
Oct 02 12:55:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383-userdata-shm.mount: Deactivated successfully.
Oct 02 12:55:28 compute-1 podman[289567]: 2025-10-02 12:55:28.233577333 +0000 UTC m=+1.129764805 container cleanup 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:55:28 compute-1 systemd[1]: libpod-conmon-0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383.scope: Deactivated successfully.
Oct 02 12:55:28 compute-1 podman[289614]: 2025-10-02 12:55:28.617044214 +0000 UTC m=+0.358238171 container remove 0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 12:55:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:28.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[529b3fb4-dfff-452c-8deb-23174e01490d]: (4, ('Thu Oct  2 12:55:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383)\n0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383\nThu Oct  2 12:55:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 (0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383)\n0d9d044c9e3698037c09f065dbf8c2d0aad1b27119ad4a3267ccc2ed0a81d383\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48a3883b-3447-4d45-8350-7dda49c1693b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.626 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa39243cb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:28 compute-1 nova_compute[230518]: 2025-10-02 12:55:28.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:28 compute-1 kernel: tapa39243cb-50: left promiscuous mode
Oct 02 12:55:28 compute-1 nova_compute[230518]: 2025-10-02 12:55:28.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.651 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[145316b7-bddd-4a2a-9c04-afde2d638dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.679 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[825f1f3e-2740-4c4a-bf9a-796545f8ac9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.681 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6ed287-3a32-405f-8c17-8e8242f28ee8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.697 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5958d1a0-4d6c-4e00-b766-adbcd381688c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 756697, 'reachable_time': 33597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289630, 'error': None, 'target': 'ovnmeta-a39243cb-5286-4429-8879-7b4d535de128', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.699 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a39243cb-5286-4429-8879-7b4d535de128 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:55:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:28.699 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[95401074-bab8-45b0-9f6a-210e09468347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:28 compute-1 systemd[1]: run-netns-ovnmeta\x2da39243cb\x2d5286\x2d4429\x2d8879\x2d7b4d535de128.mount: Deactivated successfully.
Oct 02 12:55:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:28.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.231 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.231 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.232 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.232 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-unplugged-20204810-ff47-450e-80e5-23d03b435455 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.233 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.234 2 DEBUG oslo_concurrency.lockutils [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.235 2 DEBUG nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] No waiting events found dispatching network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.235 2 WARNING nova.compute.manager [req-61df7852-46f5-4ba3-9d92-c06ad0ee6b95 req-87671260-dd9d-4d1e-b820-bb6e97c3081f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received unexpected event network-vif-plugged-20204810-ff47-450e-80e5-23d03b435455 for instance with vm_state active and task_state deleting.
Oct 02 12:55:29 compute-1 ceph-mon[80926]: pgmap v2464: 305 pgs: 305 active+clean; 364 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 180 op/s
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.525 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updated VIF entry in instance network info cache for port 20204810-ff47-450e-80e5-23d03b435455. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.526 2 DEBUG nova.network.neutron [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [{"id": "20204810-ff47-450e-80e5-23d03b435455", "address": "fa:16:3e:5b:41:1c", "network": {"id": "a39243cb-5286-4429-8879-7b4d535de128", "bridge": "br-int", "label": "tempest-network-smoke--1767599231", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap20204810-ff", "ovs_interfaceid": "20204810-ff47-450e-80e5-23d03b435455", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.674 2 DEBUG oslo_concurrency.lockutils [req-33e0e519-9746-4409-bdd8-714d4357ac3d req-668e1574-ac6a-41e8-b424-ba95d0d586c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-a1e0932b-16b6-46b9-8192-b89b91e91802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.692 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.693 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.693 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.694 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.694 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.695 2 INFO nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Terminating instance
Oct 02 12:55:29 compute-1 nova_compute[230518]: 2025-10-02 12:55:29.696 2 DEBUG nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:55:29 compute-1 podman[289632]: 2025-10-02 12:55:29.795271318 +0000 UTC m=+0.050058842 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 12:55:29 compute-1 podman[289631]: 2025-10-02 12:55:29.847052422 +0000 UTC m=+0.103020733 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 02 12:55:30 compute-1 kernel: tap8d31e365-a7 (unregistering): left promiscuous mode
Oct 02 12:55:30 compute-1 NetworkManager[44960]: <info>  [1759409730.1725] device (tap8d31e365-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:55:30 compute-1 ovn_controller[129257]: 2025-10-02T12:55:30Z|00633|binding|INFO|Releasing lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 from this chassis (sb_readonly=0)
Oct 02 12:55:30 compute-1 ovn_controller[129257]: 2025-10-02T12:55:30Z|00634|binding|INFO|Setting lport 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 down in Southbound
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:30 compute-1 ovn_controller[129257]: 2025-10-02T12:55:30Z|00635|binding|INFO|Removing iface tap8d31e365-a7 ovn-installed in OVS
Oct 02 12:55:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.188 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:de:52 10.100.0.7'], port_security=['fa:16:3e:68:de:52 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2b371444-62dc-4270-8164-64eac7dcead4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f266165c-cf86-4062-8010-5a7ecdec1578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b48381b3787c4f3d9bb0c9050cf4c52c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c33b6f9-8dd0-4521-bcdc-04a49781adff', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a527214-80f6-4bea-aba9-1aa4b53a782b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.190 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 in datapath f266165c-cf86-4062-8010-5a7ecdec1578 unbound from our chassis
Oct 02 12:55:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.191 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f266165c-cf86-4062-8010-5a7ecdec1578, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:55:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[396562a5-7aa0-4ea9-8c09-22f73e10644e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:30.193 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 namespace which is not needed anymore
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:30 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct 02 12:55:30 compute-1 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000096.scope: Consumed 5.373s CPU time.
Oct 02 12:55:30 compute-1 systemd-machined[188247]: Machine qemu-73-instance-00000096 terminated.
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.342 2 INFO nova.virt.libvirt.driver [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Instance destroyed successfully.
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.343 2 DEBUG nova.objects.instance [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lazy-loading 'resources' on Instance uuid 2b371444-62dc-4270-8164-64eac7dcead4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.360 2 DEBUG nova.virt.libvirt.vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-861404210',display_name='tempest-ServerMetadataTestJSON-server-861404210',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-861404210',id=150,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:25Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b48381b3787c4f3d9bb0c9050cf4c52c',ramdisk_id='',reservation_id='r-jg3xbq5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',im
age_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-459689577',owner_user_name='tempest-ServerMetadataTestJSON-459689577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:29Z,user_data=None,user_id='9ea4224783c14b01bd0ff8988a45a5f2',uuid=2b371444-62dc-4270-8164-64eac7dcead4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.360 2 DEBUG nova.network.os_vif_util [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converting VIF {"id": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "address": "fa:16:3e:68:de:52", "network": {"id": "f266165c-cf86-4062-8010-5a7ecdec1578", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-460364202-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b48381b3787c4f3d9bb0c9050cf4c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d31e365-a7", "ovs_interfaceid": "8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.361 2 DEBUG nova.network.os_vif_util [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.361 2 DEBUG os_vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.363 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d31e365-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:30 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : haproxy version is 2.8.14-c23fe91
Oct 02 12:55:30 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [NOTICE]   (289522) : path to executable is /usr/sbin/haproxy
Oct 02 12:55:30 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [WARNING]  (289522) : Exiting Master process...
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:30 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [ALERT]    (289522) : Current worker (289524) exited with code 143 (Terminated)
Oct 02 12:55:30 compute-1 neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578[289518]: [WARNING]  (289522) : All workers exited. Exiting... (0)
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.368 2 INFO os_vif [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:de:52,bridge_name='br-int',has_traffic_filtering=True,id=8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4,network=Network(f266165c-cf86-4062-8010-5a7ecdec1578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d31e365-a7')
Oct 02 12:55:30 compute-1 systemd[1]: libpod-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope: Deactivated successfully.
Oct 02 12:55:30 compute-1 podman[289698]: 2025-10-02 12:55:30.375925384 +0000 UTC m=+0.102303631 container died c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.423 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.424 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.424 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG oslo_concurrency.lockutils [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:30 compute-1 nova_compute[230518]: 2025-10-02 12:55:30.425 2 DEBUG nova.compute.manager [req-bee6b486-31f2-4328-8268-03fb9eb66fb9 req-ff8e3839-6e82-4dad-8154-215598f991e1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-unplugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:55:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:30.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:30 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c-userdata-shm.mount: Deactivated successfully.
Oct 02 12:55:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-abcf7a3e9396fcd78c2400963121561ebe3f42a3433196532cd89ee9d7c19a83-merged.mount: Deactivated successfully.
Oct 02 12:55:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:30.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:30 compute-1 podman[289698]: 2025-10-02 12:55:30.981635776 +0000 UTC m=+0.708014033 container cleanup c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:55:31 compute-1 ceph-mon[80926]: pgmap v2465: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Oct 02 12:55:31 compute-1 podman[289756]: 2025-10-02 12:55:31.789671777 +0000 UTC m=+0.784215544 container remove c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.795 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[70f7e999-3487-434f-a85e-5cb47bb87e9e]: (4, ('Thu Oct  2 12:55:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 (c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c)\nc2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c\nThu Oct  2 12:55:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 (c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c)\nc2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.796 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d0806c-72cb-4d87-89e2-a205badfb2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.797 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf266165c-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:31 compute-1 nova_compute[230518]: 2025-10-02 12:55:31.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:31 compute-1 kernel: tapf266165c-c0: left promiscuous mode
Oct 02 12:55:31 compute-1 systemd[1]: libpod-conmon-c2c6b10f716cd3ea29a04db136f8f95379f8ab638819ccc0b732214672b9f89c.scope: Deactivated successfully.
Oct 02 12:55:31 compute-1 nova_compute[230518]: 2025-10-02 12:55:31.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.830 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d202c39-4977-4ba9-9884-f51753f605e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.862 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[056f6391-bf30-4967-bdac-bf3cac471884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[720dee3d-165a-41e0-93e2-e5b249c72cfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93c00daf-87f5-40f6-8388-187bcb8c1fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758749, 'reachable_time': 26056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289775, 'error': None, 'target': 'ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.882 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f266165c-cf86-4062-8010-5a7ecdec1578 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:55:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:31.882 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c5d63d-0b57-4783-b51a-8a3a0e5c5361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:31 compute-1 systemd[1]: run-netns-ovnmeta\x2df266165c\x2dcf86\x2d4062\x2d8010\x2d5a7ecdec1578.mount: Deactivated successfully.
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.485 2 INFO nova.virt.libvirt.driver [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deleting instance files /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802_del
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.486 2 INFO nova.virt.libvirt.driver [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deletion of /var/lib/nova/instances/a1e0932b-16b6-46b9-8192-b89b91e91802_del complete
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "2b371444-62dc-4270-8164-64eac7dcead4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.502 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.503 2 DEBUG oslo_concurrency.lockutils [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.503 2 DEBUG nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] No waiting events found dispatching network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.503 2 WARNING nova.compute.manager [req-36b438f4-3b17-4a6e-9c8a-b3ac5a642469 req-1e8b9a9c-8cee-4b67-8cbc-1dd999f11d98 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received unexpected event network-vif-plugged-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 for instance with vm_state active and task_state deleting.
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.566 2 INFO nova.compute.manager [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 5.74 seconds to destroy the instance on the hypervisor.
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG oslo.service.loopingcall [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:55:32 compute-1 nova_compute[230518]: 2025-10-02 12:55:32.567 2 DEBUG nova.network.neutron [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:55:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:32.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:33 compute-1 sudo[289776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:33 compute-1 sudo[289776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-1 sudo[289776]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-1 sudo[289801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:55:33 compute-1 sudo[289801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-1 sudo[289801]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-1 sudo[289827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:33 compute-1 sudo[289827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-1 sudo[289827]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-1 sudo[289852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:55:33 compute-1 sudo[289852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:33 compute-1 ceph-mon[80926]: pgmap v2466: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 257 op/s
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.674 2 DEBUG nova.network.neutron [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.693 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Took 1.13 seconds to deallocate network for instance.
Oct 02 12:55:33 compute-1 sudo[289852]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.740 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.741 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.849 2 DEBUG oslo_concurrency.processutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:33 compute-1 nova_compute[230518]: 2025-10-02 12:55:33.897 2 DEBUG nova.compute.manager [req-683ce2df-9a0f-4236-bce9-1f850ce519c9 req-f1085c47-18b5-4842-8245-2c154e946de7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Received event network-vif-deleted-20204810-ff47-450e-80e5-23d03b435455 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.216 2 INFO nova.virt.libvirt.driver [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deleting instance files /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4_del
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.217 2 INFO nova.virt.libvirt.driver [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deletion of /var/lib/nova/instances/2b371444-62dc-4270-8164-64eac7dcead4_del complete
Oct 02 12:55:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393714639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.281 2 INFO nova.compute.manager [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 4.58 seconds to destroy the instance on the hypervisor.
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.281 2 DEBUG oslo.service.loopingcall [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.282 2 DEBUG nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.282 2 DEBUG nova.network.neutron [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.291 2 DEBUG oslo_concurrency.processutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.299 2 DEBUG nova.compute.provider_tree [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.314 2 DEBUG nova.scheduler.client.report [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.338 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.365 2 INFO nova.scheduler.client.report [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance a1e0932b-16b6-46b9-8192-b89b91e91802
Oct 02 12:55:34 compute-1 nova_compute[230518]: 2025-10-02 12:55:34.434 2 DEBUG oslo_concurrency.lockutils [None req-f918f02c-e346-4939-9eaf-558e61a0ab6a 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "a1e0932b-16b6-46b9-8192-b89b91e91802" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2393714639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:55:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.132 2 DEBUG nova.network.neutron [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.155 2 INFO nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Took 0.87 seconds to deallocate network for instance.
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.197 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.197 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.239 2 DEBUG oslo_concurrency.processutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/428658818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.647 2 DEBUG oslo_concurrency.processutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.652 2 DEBUG nova.compute.provider_tree [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.674 2 DEBUG nova.scheduler.client.report [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.705 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.739 2 INFO nova.scheduler.client.report [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Deleted allocations for instance 2b371444-62dc-4270-8164-64eac7dcead4
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.853 2 DEBUG oslo_concurrency.lockutils [None req-93e3690e-f91c-4eac-8185-b3a683366c33 9ea4224783c14b01bd0ff8988a45a5f2 b48381b3787c4f3d9bb0c9050cf4c52c - - default default] Lock "2b371444-62dc-4270-8164-64eac7dcead4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:35 compute-1 nova_compute[230518]: 2025-10-02 12:55:35.977 2 DEBUG nova.compute.manager [req-091d7424-bdac-4774-a883-e2c7593572e9 req-c942aa55-e5a8-4da4-abb1-a132e3e17fc7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Received event network-vif-deleted-8d31e365-a7a6-4bde-8919-ddfbc0a6b7f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:36 compute-1 ceph-mon[80926]: pgmap v2467: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 231 op/s
Oct 02 12:55:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/428658818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:36 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:55:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:55:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:55:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:55:37 compute-1 ceph-mon[80926]: pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 263 op/s
Oct 02 12:55:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:39 compute-1 nova_compute[230518]: 2025-10-02 12:55:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:39 compute-1 ceph-mon[80926]: pgmap v2469: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.4 MiB/s wr, 269 op/s
Oct 02 12:55:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:55:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 48K writes, 191K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.75 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9335 writes, 37K keys, 9335 commit groups, 1.0 writes per commit group, ingest: 35.42 MB, 0.06 MB/s
                                           Interval WAL: 9335 writes, 3667 syncs, 2.55 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 12:55:39 compute-1 nova_compute[230518]: 2025-10-02 12:55:39.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:39 compute-1 podman[289953]: 2025-10-02 12:55:39.87507108 +0000 UTC m=+0.064934357 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 12:55:39 compute-1 podman[289954]: 2025-10-02 12:55:39.87507075 +0000 UTC m=+0.063904966 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:55:39 compute-1 nova_compute[230518]: 2025-10-02 12:55:39.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:40.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.747 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.747 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.775 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:55:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:40.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.877 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.878 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.888 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:55:40 compute-1 nova_compute[230518]: 2025-10-02 12:55:40.888 2 INFO nova.compute.claims [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:55:41 compute-1 nova_compute[230518]: 2025-10-02 12:55:41.000 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:41 compute-1 ceph-mon[80926]: pgmap v2470: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 70 KiB/s wr, 189 op/s
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.065 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409727.0628552, a1e0932b-16b6-46b9-8192-b89b91e91802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.066 2 INFO nova.compute.manager [-] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] VM Stopped (Lifecycle Event)
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.089 2 DEBUG nova.compute.manager [None req-cd82f65a-a959-45ea-ae85-75c27648e029 - - - - - -] [instance: a1e0932b-16b6-46b9-8192-b89b91e91802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353320155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.145 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.155 2 DEBUG nova.compute.provider_tree [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.174 2 DEBUG nova.scheduler.client.report [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.288 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.289 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.410 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.411 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.492 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.519 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.632 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.633 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:55:42 compute-1 nova_compute[230518]: 2025-10-02 12:55:42.633 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating image(s)
Oct 02 12:55:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:42.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1353320155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:43 compute-1 nova_compute[230518]: 2025-10-02 12:55:43.088 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:43 compute-1 nova_compute[230518]: 2025-10-02 12:55:43.267 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:43 compute-1 nova_compute[230518]: 2025-10-02 12:55:43.997 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.002 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.047 2 DEBUG nova.policy [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.107 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.108 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.109 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.109 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:44 compute-1 ceph-mon[80926]: pgmap v2471: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 133 op/s
Oct 02 12:55:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:44 compute-1 sudo[290089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:55:44 compute-1 sudo[290089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:44 compute-1 sudo[290089]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:44 compute-1 sudo[290114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:55:44 compute-1 sudo[290114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:55:44 compute-1 sudo[290114]: pam_unix(sudo:session): session closed for user root
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.397 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:44 compute-1 nova_compute[230518]: 2025-10-02 12:55:44.402 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:44.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:44.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.036 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Successfully created port: 8de06222-5603-4a49-ac47-7db15cbb7e03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.340 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409730.33983, 2b371444-62dc-4270-8164-64eac7dcead4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.341 2 INFO nova.compute.manager [-] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] VM Stopped (Lifecycle Event)
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.364 2 DEBUG nova.compute.manager [None req-d18935fc-7f73-4697-8f6d-3865a60b7bd4 - - - - - -] [instance: 2b371444-62dc-4270-8164-64eac7dcead4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:45 compute-1 ceph-mon[80926]: pgmap v2472: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 17 KiB/s wr, 64 op/s
Oct 02 12:55:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:55:45 compute-1 nova_compute[230518]: 2025-10-02 12:55:45.991 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Successfully updated port: 8de06222-5603-4a49-ac47-7db15cbb7e03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.007 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.008 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.008 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.101 2 DEBUG nova.compute.manager [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.101 2 DEBUG nova.compute.manager [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing instance network info cache due to event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.102 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:46 compute-1 nova_compute[230518]: 2025-10-02 12:55:46.144 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:55:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:46.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:46.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:47.573 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:47 compute-1 nova_compute[230518]: 2025-10-02 12:55:47.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:47.575 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:55:47 compute-1 ceph-mon[80926]: pgmap v2473: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 18 KiB/s wr, 64 op/s
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.076 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.148 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.458 2 DEBUG nova.network.neutron [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.464 2 DEBUG nova.objects.instance [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.485 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Ensure instance console log exists: /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.486 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.487 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.490 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance network_info: |[{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.491 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.493 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start _get_guest_xml network_info=[{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.497 2 WARNING nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.500 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.501 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.507 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.508 2 DEBUG nova.virt.libvirt.host [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.509 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.509 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.510 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.510 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.511 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.512 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.513 2 DEBUG nova.virt.hardware [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:55:48 compute-1 nova_compute[230518]: 2025-10-02 12:55:48.515 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/609859162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/959533747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.092 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.119 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.123 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:55:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/153925789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.547 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.549 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.549 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.550 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.551 2 DEBUG nova.objects.instance [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.572 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <uuid>cba9797d-d8c0-42bb-99e8-21ff3406d1ff</uuid>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <name>instance-00000097</name>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-1872091800</nova:name>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:55:48</nova:creationTime>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <nova:port uuid="8de06222-5603-4a49-ac47-7db15cbb7e03">
Oct 02 12:55:49 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <system>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="serial">cba9797d-d8c0-42bb-99e8-21ff3406d1ff</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="uuid">cba9797d-d8c0-42bb-99e8-21ff3406d1ff</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </system>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <os>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </os>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <features>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </features>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk">
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config">
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </source>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:55:49 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:68:46:e8"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <target dev="tap8de06222-56"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/console.log" append="off"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <video>
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </video>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:55:49 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:55:49 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:55:49 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:55:49 compute-1 nova_compute[230518]: </domain>
Oct 02 12:55:49 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.573 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Preparing to wait for external event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.575 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.576 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:42Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.576 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.577 2 DEBUG nova.network.os_vif_util [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.577 2 DEBUG os_vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.578 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8de06222-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8de06222-56, col_values=(('external_ids', {'iface-id': '8de06222-5603-4a49-ac47-7db15cbb7e03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:46:e8', 'vm-uuid': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-1 NetworkManager[44960]: <info>  [1759409749.5847] manager: (tap8de06222-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.590 2 INFO os_vif [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56')
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.732 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:68:46:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:55:49 compute-1 nova_compute[230518]: 2025-10-02 12:55:49.733 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Using config drive
Oct 02 12:55:49 compute-1 ceph-mon[80926]: pgmap v2474: 305 pgs: 305 active+clean; 255 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 76 KiB/s wr, 45 op/s
Oct 02 12:55:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/959533747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/153925789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:50 compute-1 nova_compute[230518]: 2025-10-02 12:55:50.087 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:50.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:50.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:51 compute-1 ceph-mon[80926]: pgmap v2475: 305 pgs: 305 active+clean; 282 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.3 MiB/s wr, 24 op/s
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.288 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Creating config drive at /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.295 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4dycmxkc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.338 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updated VIF entry in instance network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.339 2 DEBUG nova.network.neutron [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.358 2 DEBUG oslo_concurrency.lockutils [req-da607633-3f93-4a5e-94d0-2e98d257e716 req-2ab7436f-751c-4d4c-a75a-f8304d28298c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.439 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4dycmxkc" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.573 2 DEBUG nova.storage.rbd_utils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:51 compute-1 nova_compute[230518]: 2025-10-02 12:55:51.577 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.001 2 DEBUG oslo_concurrency.processutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config cba9797d-d8c0-42bb-99e8-21ff3406d1ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.002 2 INFO nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deleting local config drive /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff/disk.config because it was imported into RBD.
Oct 02 12:55:52 compute-1 kernel: tap8de06222-56: entered promiscuous mode
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.0609] manager: (tap8de06222-56): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 ovn_controller[129257]: 2025-10-02T12:55:52Z|00636|binding|INFO|Claiming lport 8de06222-5603-4a49-ac47-7db15cbb7e03 for this chassis.
Oct 02 12:55:52 compute-1 ovn_controller[129257]: 2025-10-02T12:55:52Z|00637|binding|INFO|8de06222-5603-4a49-ac47-7db15cbb7e03: Claiming fa:16:3e:68:46:e8 10.100.0.5
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.072 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:46:e8 10.100.0.5'], port_security=['fa:16:3e:68:46:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f08094d7-13d0-4e3d-b2f1-572cd1460e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de06222-5603-4a49-ac47-7db15cbb7e03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.073 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de06222-5603-4a49-ac47-7db15cbb7e03 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 bound to our chassis
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.074 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.087 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9020e6-94f3-4dc5-aa91-e839c77d33e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.088 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24ea8f37-71 in ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.090 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24ea8f37-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.090 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e9573746-00d5-4da1-92d5-4e793b3f8f82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.092 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[98e292c7-c17e-4de4-967a-20635f09ddce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 systemd-machined[188247]: New machine qemu-74-instance-00000097.
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.103 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[60474115-9622-4f32-8818-50520dfc3ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.129 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[092bb8a9-73a6-44d0-9b69-6e46e9c06699]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 systemd[1]: Started Virtual Machine qemu-74-instance-00000097.
Oct 02 12:55:52 compute-1 ovn_controller[129257]: 2025-10-02T12:55:52Z|00638|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 ovn-installed in OVS
Oct 02 12:55:52 compute-1 ovn_controller[129257]: 2025-10-02T12:55:52Z|00639|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 up in Southbound
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 systemd-udevd[290374]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.161 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[46ecb685-d0e0-415c-a582-752eb6691ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.165 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[465fda69-618d-4588-8d15-cb65009ee0a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.1667] manager: (tap24ea8f37-70): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Oct 02 12:55:52 compute-1 systemd-udevd[290376]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.1778] device (tap8de06222-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.1788] device (tap8de06222-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.195 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[39a45076-0e26-4ca5-b3e2-fab0be62cd35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.198 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[729e69a0-5976-4e19-a30b-b9c115ca0f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.2243] device (tap24ea8f37-70): carrier: link connected
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.230 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[223804fe-5334-4324-85d1-5c999598d5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.246 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f115b032-d092-4059-9c34-6c81a7342b2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761578, 'reachable_time': 43305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290402, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.265 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[757e6cba-7c4a-4194-9dfc-de97bbc74690]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:22dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 761578, 'tstamp': 761578}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290403, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.280 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfebcdf-0731-4c63-9f9f-36596ea53116]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24ea8f37-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:22:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761578, 'reachable_time': 43305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290404, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5c20b7-5b54-4c11-b5ae-99e511048ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.354 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2f6c88-5ed7-4ec1-912d-f4c36cdf02a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.355 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.356 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.356 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24ea8f37-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:52 compute-1 kernel: tap24ea8f37-70: entered promiscuous mode
Oct 02 12:55:52 compute-1 NetworkManager[44960]: <info>  [1759409752.3587] manager: (tap24ea8f37-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.362 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24ea8f37-70, col_values=(('external_ids', {'iface-id': '47294e02-9c49-4b4e-8e89-07ae56f17131'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 ovn_controller[129257]: 2025-10-02T12:55:52Z|00640|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.379 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e16c879f-1b14-432e-8f6e-67d8f8198c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.380 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.pid.haproxy
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:55:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:52.382 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'env', 'PROCESS_TAG=haproxy-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24ea8f37-7508-4c75-ae14-e4cc7b9f8e97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.622 2 DEBUG nova.compute.manager [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.622 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG oslo_concurrency.lockutils [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:52 compute-1 nova_compute[230518]: 2025-10-02 12:55:52.623 2 DEBUG nova.compute.manager [req-79596143-e175-4ecd-acff-f56b7a166a26 req-ff693d0a-63db-4a82-8e5a-3a283827bd4c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Processing event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:55:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:52.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:55:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:52.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:55:52 compute-1 podman[290476]: 2025-10-02 12:55:52.729018595 +0000 UTC m=+0.024059596 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.133 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.132523, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.134 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Started (Lifecycle Event)
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.137 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.141 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.147 2 INFO nova.virt.libvirt.driver [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance spawned successfully.
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.147 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.162 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.166 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.173 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.173 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.174 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.174 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.175 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.175 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.200 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.201 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.1327386, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.201 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Paused (Lifecycle Event)
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.222 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.224 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409753.1408713, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.224 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Resumed (Lifecycle Event)
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.256 2 INFO nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 10.62 seconds to spawn the instance on the hypervisor.
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.256 2 DEBUG nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.258 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.263 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.305 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.355 2 INFO nova.compute.manager [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 12.50 seconds to build instance.
Oct 02 12:55:53 compute-1 nova_compute[230518]: 2025-10-02 12:55:53.392 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3f8d-3c14-4f1c-8f45-7093d8dd831b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:53 compute-1 ceph-mon[80926]: pgmap v2476: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.6 MiB/s wr, 48 op/s
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:54 compute-1 podman[290476]: 2025-10-02 12:55:54.179161271 +0000 UTC m=+1.474202242 container create c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:55:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:54 compute-1 systemd[1]: Started libpod-conmon-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope.
Oct 02 12:55:54 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:55:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:55:54.578 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:55:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:54.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.730 2 DEBUG nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.731 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.731 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.732 2 DEBUG oslo_concurrency.lockutils [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.732 2 DEBUG nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:55:54 compute-1 nova_compute[230518]: 2025-10-02 12:55:54.733 2 WARNING nova.compute.manager [req-3b7fdb94-54e1-4ccc-b669-5b1e5cf88b6c req-984c8389-2a9a-46bf-988a-a1c975bdebb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received unexpected event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with vm_state active and task_state None.
Oct 02 12:55:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe32515c7b59ffa2b21e24ce17dce5eb4995b61dc532d34935df1e3ca314d83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:55:55 compute-1 podman[290476]: 2025-10-02 12:55:55.40633635 +0000 UTC m=+2.701377321 container init c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:55:55 compute-1 podman[290476]: 2025-10-02 12:55:55.417587123 +0000 UTC m=+2.712628094 container start c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 12:55:55 compute-1 ceph-mon[80926]: pgmap v2477: 305 pgs: 305 active+clean; 310 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct 02 12:55:55 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : New worker (290498) forked
Oct 02 12:55:55 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : Loading success.
Oct 02 12:55:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/535558062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2336472609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:55:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.716 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.718 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.742 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:55:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:56.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.866 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.868 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.883 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:55:56 compute-1 nova_compute[230518]: 2025-10-02 12:55:56.884 2 INFO nova.compute.claims [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.035 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:55:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3254957633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.461 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.470 2 DEBUG nova.compute.provider_tree [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.511 2 DEBUG nova.scheduler.client.report [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.554 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.555 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.608 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.609 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:55:57 compute-1 ceph-mon[80926]: pgmap v2478: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.5 MiB/s wr, 90 op/s
Oct 02 12:55:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3254957633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.630 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.649 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.741 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.743 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.743 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating image(s)
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.786 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.807 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.827 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.830 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.896 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.899 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.919 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:55:57 compute-1 nova_compute[230518]: 2025-10-02 12:55:57.922 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:55:58 compute-1 nova_compute[230518]: 2025-10-02 12:55:58.618 2 DEBUG nova.policy [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:55:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:55:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:55:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:55:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:55:58.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 NetworkManager[44960]: <info>  [1759409759.0704] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Oct 02 12:55:59 compute-1 NetworkManager[44960]: <info>  [1759409759.0712] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Oct 02 12:55:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 ovn_controller[129257]: 2025-10-02T12:55:59Z|00641|binding|INFO|Releasing lport 47294e02-9c49-4b4e-8e89-07ae56f17131 from this chassis (sb_readonly=0)
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.354 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.425 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.531 2 DEBUG nova.compute.manager [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG nova.compute.manager [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing instance network info cache due to event network-changed-8de06222-5603-4a49-ac47-7db15cbb7e03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.532 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.533 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Refreshing network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.538 2 DEBUG nova.objects.instance [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.560 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.561 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Ensure instance console log exists: /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.561 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.562 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.562 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:55:59 compute-1 nova_compute[230518]: 2025-10-02 12:55:59.792 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Successfully created port: 6237bc28-d790-4861-976b-cda2e8dc93a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:55:59 compute-1 ceph-mon[80926]: pgmap v2479: 305 pgs: 305 active+clean; 339 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:00 compute-1 podman[290697]: 2025-10-02 12:56:00.803049041 +0000 UTC m=+0.056151483 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:56:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:00 compute-1 podman[290696]: 2025-10-02 12:56:00.831202485 +0000 UTC m=+0.084256585 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.877 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Successfully updated port: 6237bc28-d790-4861-976b-cda2e8dc93a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.897 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.898 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.899 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:00 compute-1 ceph-mon[80926]: pgmap v2480: 305 pgs: 305 active+clean; 359 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 152 op/s
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.932 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updated VIF entry in instance network info cache for port 8de06222-5603-4a49-ac47-7db15cbb7e03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.934 2 DEBUG nova.network.neutron [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [{"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:00 compute-1 nova_compute[230518]: 2025-10-02 12:56:00.968 2 DEBUG oslo_concurrency.lockutils [req-46cd8347-ff02-407e-9059-abc700ee791e req-38a1dec3-7389-4fa8-af91-5ef001ca382b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-cba9797d-d8c0-42bb-99e8-21ff3406d1ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:01 compute-1 nova_compute[230518]: 2025-10-02 12:56:01.085 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:56:01 compute-1 nova_compute[230518]: 2025-10-02 12:56:01.619 2 DEBUG nova.compute.manager [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:01 compute-1 nova_compute[230518]: 2025-10-02 12:56:01.620 2 DEBUG nova.compute.manager [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:01 compute-1 nova_compute[230518]: 2025-10-02 12:56:01.620 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 e343: 3 total, 3 up, 3 in
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.380 2 DEBUG nova.network.neutron [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.409 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance network_info: |[{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.410 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.413 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Start _get_guest_xml network_info=[{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.418 2 WARNING nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.423 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.424 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.428 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.428 2 DEBUG nova.virt.libvirt.host [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.429 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.429 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.430 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.431 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.432 2 DEBUG nova.virt.hardware [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.434 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:02.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3350021409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.890 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.917 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:02 compute-1 nova_compute[230518]: 2025-10-02 12:56:02.921 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:03 compute-1 ceph-mon[80926]: pgmap v2481: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 203 op/s
Oct 02 12:56:03 compute-1 ceph-mon[80926]: osdmap e343: 3 total, 3 up, 3 in
Oct 02 12:56:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3350021409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1541807916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.501 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.504 2 DEBUG nova.virt.libvirt.vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:57Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.504 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.505 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.506 2 DEBUG nova.objects.instance [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.537 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <uuid>f6c0a66d-64f1-484a-ae4e-ece25fddf736</uuid>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <name>instance-00000099</name>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1633674489</nova:name>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:56:02</nova:creationTime>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <nova:port uuid="6237bc28-d790-4861-976b-cda2e8dc93a9">
Oct 02 12:56:03 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <system>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="serial">f6c0a66d-64f1-484a-ae4e-ece25fddf736</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="uuid">f6c0a66d-64f1-484a-ae4e-ece25fddf736</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </system>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <os>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </os>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <features>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </features>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk">
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config">
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </source>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:56:03 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:f2:c9:23"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <target dev="tap6237bc28-d7"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/console.log" append="off"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <video>
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </video>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:56:03 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:56:03 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:56:03 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:56:03 compute-1 nova_compute[230518]: </domain>
Oct 02 12:56:03 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.543 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Preparing to wait for external event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.544 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.544 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.545 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.546 2 DEBUG nova.virt.libvirt.vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:55:57Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.546 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.547 2 DEBUG nova.network.os_vif_util [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.548 2 DEBUG os_vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6237bc28-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6237bc28-d7, col_values=(('external_ids', {'iface-id': '6237bc28-d790-4861-976b-cda2e8dc93a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f2:c9:23', 'vm-uuid': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:03 compute-1 NetworkManager[44960]: <info>  [1759409763.5560] manager: (tap6237bc28-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.562 2 INFO os_vif [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.794 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.795 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.795 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:f2:c9:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.796 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Using config drive
Oct 02 12:56:03 compute-1 nova_compute[230518]: 2025-10-02 12:56:03.821 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.539 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Creating config drive at /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.544 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_ekpb6r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.602 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.602 2 DEBUG nova.network.neutron [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.635 2 DEBUG oslo_concurrency.lockutils [req-ede58510-ff47-4c6b-98ac-cdaca32ac28e req-4effc56d-6595-47ae-894d-558a3d31edba 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1541807916' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3883695521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:04 compute-1 nova_compute[230518]: 2025-10-02 12:56:04.702 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3_ekpb6r" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:05 compute-1 nova_compute[230518]: 2025-10-02 12:56:05.348 2 DEBUG nova.storage.rbd_utils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:05 compute-1 nova_compute[230518]: 2025-10-02 12:56:05.358 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:05 compute-1 ceph-mon[80926]: pgmap v2483: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 12:56:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:56:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/852111110' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:56:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:56:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:56:08 compute-1 ceph-mon[80926]: pgmap v2484: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.720 2 DEBUG oslo_concurrency.processutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.720 2 INFO nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deleting local config drive /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736/disk.config because it was imported into RBD.
Oct 02 12:56:08 compute-1 kernel: tap6237bc28-d7: entered promiscuous mode
Oct 02 12:56:08 compute-1 NetworkManager[44960]: <info>  [1759409768.7747] manager: (tap6237bc28-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Oct 02 12:56:08 compute-1 ovn_controller[129257]: 2025-10-02T12:56:08Z|00642|binding|INFO|Claiming lport 6237bc28-d790-4861-976b-cda2e8dc93a9 for this chassis.
Oct 02 12:56:08 compute-1 ovn_controller[129257]: 2025-10-02T12:56:08Z|00643|binding|INFO|6237bc28-d790-4861-976b-cda2e8dc93a9: Claiming fa:16:3e:f2:c9:23 10.100.0.13
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:08 compute-1 ovn_controller[129257]: 2025-10-02T12:56:08Z|00644|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 ovn-installed in OVS
Oct 02 12:56:08 compute-1 ovn_controller[129257]: 2025-10-02T12:56:08Z|00645|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 up in Southbound
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.797 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.799 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d bound to our chassis
Oct 02 12:56:08 compute-1 nova_compute[230518]: 2025-10-02 12:56:08.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.801 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:08 compute-1 systemd-udevd[290874]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.813 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3c721b0f-3a31-4d49-a4bd-c9b780b88cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.814 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5f30d280-51 in ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.815 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5f30d280-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.815 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89f2c31f-feaf-4b57-ac3f-7c8bb4710324]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.817 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[afc719d2-9a62-44e1-9bfb-a86b0049c845]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 systemd-machined[188247]: New machine qemu-75-instance-00000099.
Oct 02 12:56:08 compute-1 NetworkManager[44960]: <info>  [1759409768.8191] device (tap6237bc28-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:56:08 compute-1 NetworkManager[44960]: <info>  [1759409768.8216] device (tap6237bc28-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.828 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[20a3d939-8514-40c2-a00a-817ef74ea81a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 systemd[1]: Started Virtual Machine qemu-75-instance-00000099.
Oct 02 12:56:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:08.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8d4694-01ea-427a-8687-94974379bb8f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.877 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a4128ee4-189c-46bd-8114-dcd9dfa64631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.881 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f521a6d9-f7ba-4117-b195-fb74ae06316c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 NetworkManager[44960]: <info>  [1759409768.8822] manager: (tap5f30d280-50): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Oct 02 12:56:08 compute-1 systemd-udevd[290879]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.915 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[33bd94fd-dd72-41af-95da-990cc544d904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.917 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16b97c7b-35f2-45f9-9d77-ea6347b97235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 NetworkManager[44960]: <info>  [1759409768.9371] device (tap5f30d280-50): carrier: link connected
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.943 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[883dffa8-86ef-498f-8a2f-3c18c50b3ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.959 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[961250d8-5830-42ee-b285-d0dacfecc55f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763249, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290909, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.975 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e055f45c-28aa-49ab-b88e-6bee407850f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1a:61fc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763249, 'tstamp': 763249}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290910, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:08.994 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c3c361-7058-4b48-9f4b-ae2a6ed23b01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f30d280-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1a:61:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763249, 'reachable_time': 38595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290911, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.029 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e38bbbf-db5e-41c7-85fc-55d98aab6747]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.088 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6952cee6-4e6a-48c3-b446-ffd83d6ddc74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.090 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f30d280-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.091 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:09 compute-1 ceph-mon[80926]: pgmap v2485: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 173 op/s
Oct 02 12:56:09 compute-1 kernel: tap5f30d280-50: entered promiscuous mode
Oct 02 12:56:09 compute-1 NetworkManager[44960]: <info>  [1759409769.1105] manager: (tap5f30d280-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.112 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5f30d280-50, col_values=(('external_ids', {'iface-id': 'ffbc5ae9-0886-4744-96e3-66a615b2f014'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:09 compute-1 ovn_controller[129257]: 2025-10-02T12:56:09Z|00646|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.127 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.128 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[41ab4b73-5aa4-4850-9ba6-2e4243b48499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.129 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/5f30d280-519f-404a-a0ab-55bcf121986d.pid.haproxy
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 5f30d280-519f-404a-a0ab-55bcf121986d
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:56:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:09.130 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'env', 'PROCESS_TAG=haproxy-5f30d280-519f-404a-a0ab-55bcf121986d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5f30d280-519f-404a-a0ab-55bcf121986d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.288 2 DEBUG nova.compute.manager [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.288 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG oslo_concurrency.lockutils [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.289 2 DEBUG nova.compute.manager [req-eb1ccc2f-d829-46ab-9822-4554853f1866 req-2fd6c6c8-0bf5-4f1e-bbec-4f5d2b131fb8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Processing event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:56:09 compute-1 podman[290975]: 2025-10-02 12:56:09.479083752 +0000 UTC m=+0.028392682 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:56:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3013568106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:09 compute-1 nova_compute[230518]: 2025-10-02 12:56:09.918 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.119 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.1195302, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.120 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Started (Lifecycle Event)
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.123 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.126 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.129 2 INFO nova.virt.libvirt.driver [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance spawned successfully.
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.129 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:56:10 compute-1 podman[290975]: 2025-10-02 12:56:10.251794765 +0000 UTC m=+0.801103625 container create e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.440 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.447 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.451 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.452 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 nova_compute[230518]: 2025-10-02 12:56:10.453 2 DEBUG nova.virt.libvirt.driver [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 12:56:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 59K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s
                                           Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1667 writes, 8136 keys, 1667 commit groups, 1.0 writes per commit group, ingest: 16.92 MB, 0.03 MB/s
                                           Interval WAL: 1666 writes, 1666 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.3      1.31              0.20        35    0.038       0      0       0.0       0.0
                                             L6      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6    116.2     98.0      3.36              0.95        34    0.099    215K    18K       0.0       0.0
                                            Sum      1/0   10.25 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     83.5     85.7      4.67              1.15        69    0.068    215K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6     48.4     49.4      1.36              0.20        10    0.136     42K   2601       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    116.2     98.0      3.36              0.95        34    0.099    215K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.4      1.31              0.20        34    0.039       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.070, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.39 GB write, 0.10 MB/s write, 0.38 GB read, 0.09 MB/s read, 4.7 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 1.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 43.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000246 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2532,41.99 MB,13.8134%) FilterBlock(69,603.55 KB,0.193882%) IndexBlock(69,1.02 MB,0.336341%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 12:56:10 compute-1 systemd[1]: Started libpod-conmon-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope.
Oct 02 12:56:10 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:56:10 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/722445e320b05979190f4c22e24461a020bb72b6b235f9c49a131ad784afda90/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:56:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2440780115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3013568106' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1041890516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:10.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:10 compute-1 podman[290975]: 2025-10-02 12:56:10.917453978 +0000 UTC m=+1.466762848 container init e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 12:56:10 compute-1 podman[290975]: 2025-10-02 12:56:10.92835153 +0000 UTC m=+1.477660400 container start e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 12:56:10 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : New worker (291060) forked
Oct 02 12:56:10 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : Loading success.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.014 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.014 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.017 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.018 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.032 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.032 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.1220043, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.033 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Paused (Lifecycle Event)
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.071 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.076 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409770.125318, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.076 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Resumed (Lifecycle Event)
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.105 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.109 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:11 compute-1 ovn_controller[129257]: 2025-10-02T12:56:11Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:46:e8 10.100.0.5
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.121 2 INFO nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 13.38 seconds to spawn the instance on the hypervisor.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.122 2 DEBUG nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:11 compute-1 ovn_controller[129257]: 2025-10-02T12:56:11Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:46:e8 10.100.0.5
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.137 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.238 2 INFO nova.compute.manager [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 14.40 seconds to build instance.
Oct 02 12:56:11 compute-1 podman[291019]: 2025-10-02 12:56:11.250046092 +0000 UTC m=+1.282633510 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.254 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4038MB free_disk=20.814746856689453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.256 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.280 2 DEBUG oslo_concurrency.lockutils [None req-e151a67f-f234-48eb-9fc7-914d9ba469d0 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:11 compute-1 podman[291020]: 2025-10-02 12:56:11.297698837 +0000 UTC m=+1.325533266 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.447 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.448 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance f6c0a66d-64f1-484a-ae4e-ece25fddf736 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.448 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.449 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.647 2 DEBUG nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.648 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.648 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.649 2 DEBUG oslo_concurrency.lockutils [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.649 2 DEBUG nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.649 2 WARNING nova.compute.manager [req-41f4a164-9341-41d1-a15e-00f5261e0c9c req-e865f06d-2b50-4e6c-89bf-cc5b664dfbb3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state None.
Oct 02 12:56:11 compute-1 nova_compute[230518]: 2025-10-02 12:56:11.751 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 12:56:12 compute-1 nova_compute[230518]: 2025-10-02 12:56:12.014 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 12:56:12 compute-1 nova_compute[230518]: 2025-10-02 12:56:12.015 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 12:56:12 compute-1 nova_compute[230518]: 2025-10-02 12:56:12.040 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 12:56:12 compute-1 nova_compute[230518]: 2025-10-02 12:56:12.103 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 12:56:12 compute-1 nova_compute[230518]: 2025-10-02 12:56:12.184 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:12 compute-1 ceph-mon[80926]: pgmap v2486: 305 pgs: 305 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 158 op/s
Oct 02 12:56:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4169217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:12.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3594326844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.251 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.257 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.295 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.334 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.335 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:13 compute-1 ceph-mon[80926]: pgmap v2487: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.3 MiB/s wr, 181 op/s
Oct 02 12:56:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1941349339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:13 compute-1 nova_compute[230518]: 2025-10-02 12:56:13.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:14 compute-1 nova_compute[230518]: 2025-10-02 12:56:14.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:14 compute-1 nova_compute[230518]: 2025-10-02 12:56:14.317 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-1 nova_compute[230518]: 2025-10-02 12:56:14.318 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-1 nova_compute[230518]: 2025-10-02 12:56:14.318 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:14.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:14.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.305 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.306 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.307 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.307 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.308 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.308 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.309 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:15 compute-1 nova_compute[230518]: 2025-10-02 12:56:15.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3594326844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:16 compute-1 ceph-mon[80926]: pgmap v2488: 305 pgs: 305 active+clean; 496 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.4 MiB/s wr, 153 op/s
Oct 02 12:56:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4275587219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:16 compute-1 nova_compute[230518]: 2025-10-02 12:56:16.800 2 INFO nova.compute.manager [None req-389aa584-869f-4e51-99b4-6c46dcb9550a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Get console output
Oct 02 12:56:16 compute-1 nova_compute[230518]: 2025-10-02 12:56:16.806 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:56:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.322 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.323 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.325 2 INFO nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Terminating instance
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.326 2 DEBUG nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:56:17 compute-1 ceph-mon[80926]: pgmap v2489: 305 pgs: 305 active+clean; 516 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.8 MiB/s wr, 213 op/s
Oct 02 12:56:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1274072784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:17 compute-1 kernel: tap8de06222-56 (unregistering): left promiscuous mode
Oct 02 12:56:17 compute-1 NetworkManager[44960]: <info>  [1759409777.6287] device (tap8de06222-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-1 ovn_controller[129257]: 2025-10-02T12:56:17Z|00647|binding|INFO|Releasing lport 8de06222-5603-4a49-ac47-7db15cbb7e03 from this chassis (sb_readonly=0)
Oct 02 12:56:17 compute-1 ovn_controller[129257]: 2025-10-02T12:56:17Z|00648|binding|INFO|Setting lport 8de06222-5603-4a49-ac47-7db15cbb7e03 down in Southbound
Oct 02 12:56:17 compute-1 ovn_controller[129257]: 2025-10-02T12:56:17Z|00649|binding|INFO|Removing iface tap8de06222-56 ovn-installed in OVS
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.653 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:46:e8 10.100.0.5'], port_security=['fa:16:3e:68:46:e8 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'cba9797d-d8c0-42bb-99e8-21ff3406d1ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f08094d7-13d0-4e3d-b2f1-572cd1460e8d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f86b2949-a213-4bd4-b601-e7dc17853f7f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=8de06222-5603-4a49-ac47-7db15cbb7e03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.655 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 8de06222-5603-4a49-ac47-7db15cbb7e03 in datapath 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 unbound from our chassis
Oct 02 12:56:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.658 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.659 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18e26f5a-a7cc-489c-9f43-3025a447aa70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:17.660 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 namespace which is not needed anymore
Oct 02 12:56:17 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct 02 12:56:17 compute-1 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000097.scope: Consumed 14.535s CPU time.
Oct 02 12:56:17 compute-1 systemd-machined[188247]: Machine qemu-74-instance-00000097 terminated.
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.759 2 INFO nova.virt.libvirt.driver [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Instance destroyed successfully.
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.761 2 DEBUG nova.objects.instance [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid cba9797d-d8c0-42bb-99e8-21ff3406d1ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.864 2 DEBUG nova.virt.libvirt.vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1872091800',display_name='tempest-TestNetworkBasicOps-server-1872091800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1872091800',id=151,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSGOLZUJ3FrTIeEm0YCEFIRKta1oOKUh3K2cHX7D75D8mOr7z91wWb7O7IlUA8JdoZAVPTTXOOargDtG7eOD7M62+PfvZG7TnqVCQDZ9PY6Jtt6S6zET7dnTNArJzZa2A==',key_name='tempest-TestNetworkBasicOps-802197789',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:55:53Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-50zfty89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:55:53Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=cba9797d-d8c0-42bb-99e8-21ff3406d1ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.865 2 DEBUG nova.network.os_vif_util [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "8de06222-5603-4a49-ac47-7db15cbb7e03", "address": "fa:16:3e:68:46:e8", "network": {"id": "24ea8f37-7508-4c75-ae14-e4cc7b9f8e97", "bridge": "br-int", "label": "tempest-network-smoke--1532785266", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8de06222-56", "ovs_interfaceid": "8de06222-5603-4a49-ac47-7db15cbb7e03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.866 2 DEBUG nova.network.os_vif_util [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.866 2 DEBUG os_vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8de06222-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.873 2 INFO os_vif [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:46:e8,bridge_name='br-int',has_traffic_filtering=True,id=8de06222-5603-4a49-ac47-7db15cbb7e03,network=Network(24ea8f37-7508-4c75-ae14-e4cc7b9f8e97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8de06222-56')
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.960 2 DEBUG nova.compute.manager [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.961 2 DEBUG nova.compute.manager [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.962 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.963 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:17 compute-1 nova_compute[230518]: 2025-10-02 12:56:17.963 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.276 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.277 2 DEBUG oslo_concurrency.lockutils [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.278 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:18 compute-1 nova_compute[230518]: 2025-10-02 12:56:18.278 2 DEBUG nova.compute.manager [req-7a3f9a5e-4517-48a9-9e80-4fd9d0ab2c2d req-a19ccc73-36fd-4782-a832-26dc8187ec4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-unplugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : haproxy version is 2.8.14-c23fe91
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [NOTICE]   (290496) : path to executable is /usr/sbin/haproxy
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : Exiting Master process...
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : Exiting Master process...
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [ALERT]    (290496) : Current worker (290498) exited with code 143 (Terminated)
Oct 02 12:56:18 compute-1 neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97[290492]: [WARNING]  (290496) : All workers exited. Exiting... (0)
Oct 02 12:56:18 compute-1 systemd[1]: libpod-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope: Deactivated successfully.
Oct 02 12:56:18 compute-1 podman[291133]: 2025-10-02 12:56:18.359028812 +0000 UTC m=+0.567539346 container died c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:56:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:18.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e344 e344: 3 total, 3 up, 3 in
Oct 02 12:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34-userdata-shm.mount: Deactivated successfully.
Oct 02 12:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-8fe32515c7b59ffa2b21e24ce17dce5eb4995b61dc532d34935df1e3ca314d83-merged.mount: Deactivated successfully.
Oct 02 12:56:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:18 compute-1 podman[291133]: 2025-10-02 12:56:18.993556999 +0000 UTC m=+1.202067553 container cleanup c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 12:56:19 compute-1 systemd[1]: libpod-conmon-c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34.scope: Deactivated successfully.
Oct 02 12:56:19 compute-1 nova_compute[230518]: 2025-10-02 12:56:19.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:19 compute-1 podman[291184]: 2025-10-02 12:56:19.394404225 +0000 UTC m=+0.376300187 container remove c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.402 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c29faa5-9f88-4ccd-a535-3469dddd2781]: (4, ('Thu Oct  2 12:56:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34)\nc187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34\nThu Oct  2 12:56:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 (c187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34)\nc187f4a6ff262e8a492932d357d73b3a8ddd7027b6712dc407f11942add52a34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.404 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0937c9-ba29-4f45-a2c0-1758ede133e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.405 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24ea8f37-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:19 compute-1 nova_compute[230518]: 2025-10-02 12:56:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:19 compute-1 kernel: tap24ea8f37-70: left promiscuous mode
Oct 02 12:56:19 compute-1 nova_compute[230518]: 2025-10-02 12:56:19.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.434 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b8289526-32e5-41e8-b5b9-3919100f47b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.468 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[233709ec-b602-45e7-bdc1-52bdef1eb0e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.469 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[240e4974-1828-47bb-85a6-bf243f7dd648]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.484 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[797f4e63-a5ea-4d58-bb95-bba71f031b2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 761571, 'reachable_time': 40372, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291200, 'error': None, 'target': 'ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.487 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24ea8f37-7508-4c75-ae14-e4cc7b9f8e97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:56:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d24ea8f37\x2d7508\x2d4c75\x2dae14\x2de4cc7b9f8e97.mount: Deactivated successfully.
Oct 02 12:56:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:19.488 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[db704aaa-d40c-4bca-af99-3573a1e0e6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:19 compute-1 ceph-mon[80926]: pgmap v2490: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.4 MiB/s wr, 244 op/s
Oct 02 12:56:19 compute-1 ceph-mon[80926]: osdmap e344: 3 total, 3 up, 3 in
Oct 02 12:56:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e345 e345: 3 total, 3 up, 3 in
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.422 2 DEBUG nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.424 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.424 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.425 2 DEBUG oslo_concurrency.lockutils [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.425 2 DEBUG nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] No waiting events found dispatching network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.426 2 WARNING nova.compute.manager [req-1f2f7b61-9e15-4adc-90e7-3227dce8b613 req-306cda94-7409-46af-b149-6321eb3963e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received unexpected event network-vif-plugged-8de06222-5603-4a49-ac47-7db15cbb7e03 for instance with vm_state active and task_state deleting.
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.639 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.640 2 DEBUG nova.network.neutron [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:20 compute-1 nova_compute[230518]: 2025-10-02 12:56:20.666 2 DEBUG oslo_concurrency.lockutils [req-3df0aeb1-b378-4d93-a64a-b5a7c62d1396 req-7a5b20fd-307f-44d0-9673-e1b78b83632c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:20.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:21 compute-1 ceph-mon[80926]: osdmap e345: 3 total, 3 up, 3 in
Oct 02 12:56:22 compute-1 ceph-mon[80926]: pgmap v2493: 305 pgs: 305 active+clean; 529 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.259 2 INFO nova.virt.libvirt.driver [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deleting instance files /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_del
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.260 2 INFO nova.virt.libvirt.driver [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deletion of /var/lib/nova/instances/cba9797d-d8c0-42bb-99e8-21ff3406d1ff_del complete
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.379 2 INFO nova.compute.manager [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 5.05 seconds to destroy the instance on the hypervisor.
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG oslo.service.loopingcall [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.380 2 DEBUG nova.network.neutron [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:56:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:22.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:22 compute-1 nova_compute[230518]: 2025-10-02 12:56:22.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 e346: 3 total, 3 up, 3 in
Oct 02 12:56:23 compute-1 ovn_controller[129257]: 2025-10-02T12:56:23Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f2:c9:23 10.100.0.13
Oct 02 12:56:23 compute-1 ovn_controller[129257]: 2025-10-02T12:56:23Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f2:c9:23 10.100.0.13
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.149 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:56:23 compute-1 ceph-mon[80926]: pgmap v2494: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 264 op/s
Oct 02 12:56:23 compute-1 ceph-mon[80926]: osdmap e346: 3 total, 3 up, 3 in
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:56:23 compute-1 nova_compute[230518]: 2025-10-02 12:56:23.653 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:24 compute-1 nova_compute[230518]: 2025-10-02 12:56:24.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:24.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:25 compute-1 ceph-mon[80926]: pgmap v2496: 305 pgs: 305 active+clean; 504 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 830 KiB/s wr, 136 op/s
Oct 02 12:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.953 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.112 2 DEBUG nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Received event network-vif-deleted-8de06222-5603-4a49-ac47-7db15cbb7e03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.112 2 INFO nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Neutron deleted interface 8de06222-5603-4a49-ac47-7db15cbb7e03; detaching it from the instance and deleting it from the info cache
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.113 2 DEBUG nova.network.neutron [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.135 2 DEBUG nova.network.neutron [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.155 2 INFO nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Took 3.77 seconds to deallocate network for instance.
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.160 2 DEBUG nova.compute.manager [req-cee506ca-2716-4ee5-b615-6bb0eeefffdd req-22f092ec-d15e-4031-a9fc-be33e6d392c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Detach interface failed, port_id=8de06222-5603-4a49-ac47-7db15cbb7e03, reason: Instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.227 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.227 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.306 2 DEBUG oslo_concurrency.processutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.693 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:26.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.712 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.712 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:56:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1580477687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.779 2 DEBUG oslo_concurrency.processutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.785 2 DEBUG nova.compute.provider_tree [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.812 2 DEBUG nova.scheduler.client.report [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.835 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.858 2 INFO nova.scheduler.client.report [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance cba9797d-d8c0-42bb-99e8-21ff3406d1ff
Oct 02 12:56:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:26.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:26 compute-1 nova_compute[230518]: 2025-10-02 12:56:26.940 2 DEBUG oslo_concurrency.lockutils [None req-79786a4f-41dd-46b7-89ff-809cfe84967e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "cba9797d-d8c0-42bb-99e8-21ff3406d1ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:27 compute-1 nova_compute[230518]: 2025-10-02 12:56:27.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:28 compute-1 ceph-mon[80926]: pgmap v2497: 305 pgs: 305 active+clean; 464 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct 02 12:56:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1580477687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3334793604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:28 compute-1 nova_compute[230518]: 2025-10-02 12:56:28.707 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:28.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:29 compute-1 nova_compute[230518]: 2025-10-02 12:56:29.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:29 compute-1 ceph-mon[80926]: pgmap v2498: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.3 MiB/s wr, 313 op/s
Oct 02 12:56:30 compute-1 nova_compute[230518]: 2025-10-02 12:56:30.139 2 INFO nova.compute.manager [None req-ec43cd3f-4ed5-419b-b163-436a3ffadd48 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Get console output
Oct 02 12:56:30 compute-1 nova_compute[230518]: 2025-10-02 12:56:30.145 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:56:30 compute-1 ovn_controller[129257]: 2025-10-02T12:56:30Z|00650|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct 02 12:56:30 compute-1 nova_compute[230518]: 2025-10-02 12:56:30.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:30.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:30.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:31 compute-1 nova_compute[230518]: 2025-10-02 12:56:31.113 2 INFO nova.compute.manager [None req-506c1595-483a-428e-a160-0c2d18f1ab45 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Get console output
Oct 02 12:56:31 compute-1 nova_compute[230518]: 2025-10-02 12:56:31.118 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:56:31 compute-1 ceph-mon[80926]: pgmap v2499: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 267 op/s
Oct 02 12:56:31 compute-1 podman[291225]: 2025-10-02 12:56:31.563354869 +0000 UTC m=+0.063714719 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:56:31 compute-1 podman[291224]: 2025-10-02 12:56:31.661491488 +0000 UTC m=+0.153137706 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 12:56:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1910028212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:56:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:32.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:56:32 compute-1 nova_compute[230518]: 2025-10-02 12:56:32.759 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409777.7578926, cba9797d-d8c0-42bb-99e8-21ff3406d1ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:32 compute-1 nova_compute[230518]: 2025-10-02 12:56:32.759 2 INFO nova.compute.manager [-] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] VM Stopped (Lifecycle Event)
Oct 02 12:56:32 compute-1 nova_compute[230518]: 2025-10-02 12:56:32.798 2 DEBUG nova.compute.manager [None req-478e6ee6-55d9-4d7f-9bf7-ae31245cf451 - - - - - -] [instance: cba9797d-d8c0-42bb-99e8-21ff3406d1ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:32 compute-1 nova_compute[230518]: 2025-10-02 12:56:32.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:32.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:33 compute-1 ceph-mon[80926]: pgmap v2500: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 206 op/s
Oct 02 12:56:34 compute-1 nova_compute[230518]: 2025-10-02 12:56:34.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:34.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:35 compute-1 ceph-mon[80926]: pgmap v2501: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 188 op/s
Oct 02 12:56:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:36 compute-1 ceph-mon[80926]: pgmap v2502: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Oct 02 12:56:37 compute-1 nova_compute[230518]: 2025-10-02 12:56:37.065 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Check if temp file /var/lib/nova/instances/tmppyu_z2_l exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Oct 02 12:56:37 compute-1 nova_compute[230518]: 2025-10-02 12:56:37.066 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Oct 02 12:56:37 compute-1 nova_compute[230518]: 2025-10-02 12:56:37.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3869001422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.060676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798060708, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2185, "num_deletes": 261, "total_data_size": 4878572, "memory_usage": 4945416, "flush_reason": "Manual Compaction"}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798091866, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 3206826, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57752, "largest_seqno": 59932, "table_properties": {"data_size": 3197927, "index_size": 5457, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19297, "raw_average_key_size": 20, "raw_value_size": 3179726, "raw_average_value_size": 3382, "num_data_blocks": 236, "num_entries": 940, "num_filter_entries": 940, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409626, "oldest_key_time": 1759409626, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 31241 microseconds, and 6106 cpu microseconds.
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.091913) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 3206826 bytes OK
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.091934) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095898) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095915) EVENT_LOG_v1 {"time_micros": 1759409798095910, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.095935) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 4868668, prev total WAL file size 4868668, number of live WAL files 2.
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.097215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303133' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(3131KB)], [114(10MB)]
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798097303, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 13950276, "oldest_snapshot_seqno": -1}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 8563 keys, 13798790 bytes, temperature: kUnknown
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798190637, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 13798790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13739907, "index_size": 36371, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 221406, "raw_average_key_size": 25, "raw_value_size": 13586078, "raw_average_value_size": 1586, "num_data_blocks": 1431, "num_entries": 8563, "num_filter_entries": 8563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.190967) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13798790 bytes
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.194365) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.3 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.2 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 9106, records dropped: 543 output_compression: NoCompression
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.194403) EVENT_LOG_v1 {"time_micros": 1759409798194390, "job": 72, "event": "compaction_finished", "compaction_time_micros": 93414, "compaction_time_cpu_micros": 31360, "output_level": 6, "num_output_files": 1, "total_output_size": 13798790, "num_input_records": 9106, "num_output_records": 8563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798195263, "job": 72, "event": "table_file_deletion", "file_number": 116}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409798197751, "job": 72, "event": "table_file_deletion", "file_number": 114}
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.097069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:38.197883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:56:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:38.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:39 compute-1 ceph-mon[80926]: pgmap v2503: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 142 op/s
Oct 02 12:56:39 compute-1 nova_compute[230518]: 2025-10-02 12:56:39.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:39 compute-1 nova_compute[230518]: 2025-10-02 12:56:39.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.858 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG oslo_concurrency.lockutils [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:40 compute-1 nova_compute[230518]: 2025-10-02 12:56:40.859 2 DEBUG nova.compute.manager [req-63cd6ccb-d995-4fc4-bd1d-199914f82ae5 req-f2e2df83-c80f-4ed9-ae7c-cee122ba57a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:56:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:41 compute-1 ceph-mon[80926]: pgmap v2504: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 30 KiB/s wr, 43 op/s
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.443 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Took 3.37 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.444 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.461 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmppyu_z2_l',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f6c0a66d-64f1-484a-ae4e-ece25fddf736',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(445166e1-a17f-4261-a4e4-29df5beb080c),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.464 2 DEBUG nova.objects.instance [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lazy-loading 'migration_context' on Instance uuid f6c0a66d-64f1-484a-ae4e-ece25fddf736 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.465 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.466 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.467 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.483 2 DEBUG nova.virt.libvirt.vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:11Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.483 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.484 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.484 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating guest XML with vif config: <interface type="ethernet">
Oct 02 12:56:41 compute-1 nova_compute[230518]:   <mac address="fa:16:3e:f2:c9:23"/>
Oct 02 12:56:41 compute-1 nova_compute[230518]:   <model type="virtio"/>
Oct 02 12:56:41 compute-1 nova_compute[230518]:   <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:56:41 compute-1 nova_compute[230518]:   <mtu size="1442"/>
Oct 02 12:56:41 compute-1 nova_compute[230518]:   <target dev="tap6237bc28-d7"/>
Oct 02 12:56:41 compute-1 nova_compute[230518]: </interface>
Oct 02 12:56:41 compute-1 nova_compute[230518]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.485 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Oct 02 12:56:41 compute-1 podman[291270]: 2025-10-02 12:56:41.79237185 +0000 UTC m=+0.047659737 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:56:41 compute-1 podman[291271]: 2025-10-02 12:56:41.827186652 +0000 UTC m=+0.079403243 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.969 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:56:41 compute-1 nova_compute[230518]: 2025-10-02 12:56:41.970 2 INFO nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Increasing downtime to 50 ms after 0 sec elapsed time
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.060 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.564 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.565 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:56:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:42.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.958 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.958 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.959 2 WARNING nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.959 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG nova.compute.manager [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing instance network info cache due to event network-changed-6237bc28-d790-4861-976b-cda2e8dc93a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:42 compute-1 nova_compute[230518]: 2025-10-02 12:56:42.960 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Refreshing network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.069 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.069 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:56:43 compute-1 ovn_controller[129257]: 2025-10-02T12:56:43Z|00651|binding|INFO|Releasing lport ffbc5ae9-0886-4744-96e3-66a615b2f014 from this chassis (sb_readonly=0)
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:43 compute-1 ceph-mon[80926]: pgmap v2505: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 68 op/s
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.572 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.573 2 DEBUG nova.virt.libvirt.migration [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.821 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409803.8211555, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.822 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Paused (Lifecycle Event)
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.850 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.855 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:43 compute-1 nova_compute[230518]: 2025-10-02 12:56:43.880 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] During sync_power_state the instance has a pending task (migrating). Skip.
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.129 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updated VIF entry in instance network info cache for port 6237bc28-d790-4861-976b-cda2e8dc93a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.130 2 DEBUG nova.network.neutron [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Updating instance_info_cache with network_info: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.148 2 DEBUG oslo_concurrency.lockutils [req-03d1bbe6-7097-462b-b869-f683748fef0d req-09a9cd53-7879-4a40-816b-fc9b6ea889d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f6c0a66d-64f1-484a-ae4e-ece25fddf736" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:44 compute-1 kernel: tap6237bc28-d7 (unregistering): left promiscuous mode
Oct 02 12:56:44 compute-1 NetworkManager[44960]: <info>  [1759409804.1584] device (tap6237bc28-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 ovn_controller[129257]: 2025-10-02T12:56:44Z|00652|binding|INFO|Releasing lport 6237bc28-d790-4861-976b-cda2e8dc93a9 from this chassis (sb_readonly=0)
Oct 02 12:56:44 compute-1 ovn_controller[129257]: 2025-10-02T12:56:44Z|00653|binding|INFO|Setting lport 6237bc28-d790-4861-976b-cda2e8dc93a9 down in Southbound
Oct 02 12:56:44 compute-1 ovn_controller[129257]: 2025-10-02T12:56:44Z|00654|binding|INFO|Removing iface tap6237bc28-d7 ovn-installed in OVS
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.176 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f2:c9:23 10.100.0.13'], port_security=['fa:16:3e:f2:c9:23 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '17f11839-42bc-4ba9-92b4-53d0d88b0404'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f6c0a66d-64f1-484a-ae4e-ece25fddf736', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f30d280-519f-404a-a0ab-55bcf121986d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b996e116-bce4-4795-a454-74528663ff58', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ccf6ef12-40dd-424f-abb3-518813baf9b4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6237bc28-d790-4861-976b-cda2e8dc93a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.179 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6237bc28-d790-4861-976b-cda2e8dc93a9 in datapath 5f30d280-519f-404a-a0ab-55bcf121986d unbound from our chassis
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.181 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f30d280-519f-404a-a0ab-55bcf121986d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.182 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0526ac-76bd-4fe5-8d6c-c74857e85518]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.183 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d namespace which is not needed anymore
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct 02 12:56:44 compute-1 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000099.scope: Consumed 14.869s CPU time.
Oct 02 12:56:44 compute-1 systemd-machined[188247]: Machine qemu-75-instance-00000099 terminated.
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : haproxy version is 2.8.14-c23fe91
Oct 02 12:56:44 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [NOTICE]   (291058) : path to executable is /usr/sbin/haproxy
Oct 02 12:56:44 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [WARNING]  (291058) : Exiting Master process...
Oct 02 12:56:44 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [ALERT]    (291058) : Current worker (291060) exited with code 143 (Terminated)
Oct 02 12:56:44 compute-1 neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d[291044]: [WARNING]  (291058) : All workers exited. Exiting... (0)
Oct 02 12:56:44 compute-1 systemd[1]: libpod-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope: Deactivated successfully.
Oct 02 12:56:44 compute-1 podman[291334]: 2025-10-02 12:56:44.315131155 +0000 UTC m=+0.045437086 container died e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:56:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b-userdata-shm.mount: Deactivated successfully.
Oct 02 12:56:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-722445e320b05979190f4c22e24461a020bb72b6b235f9c49a131ad784afda90-merged.mount: Deactivated successfully.
Oct 02 12:56:44 compute-1 podman[291334]: 2025-10-02 12:56:44.365522326 +0000 UTC m=+0.095828267 container cleanup e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 12:56:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:44 compute-1 systemd[1]: libpod-conmon-e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b.scope: Deactivated successfully.
Oct 02 12:56:44 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_selinux on f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk: No such file or directory
Oct 02 12:56:44 compute-1 virtqemud[230067]: Unable to get XATTR trusted.libvirt.security.ref_dac on f6c0a66d-64f1-484a-ae4e-ece25fddf736_disk: No such file or directory
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.434 2 DEBUG nova.virt.libvirt.guest [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.435 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation has completed
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.435 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] _post_live_migration() is started..
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.440 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.441 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.441 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Oct 02 12:56:44 compute-1 podman[291363]: 2025-10-02 12:56:44.443455131 +0000 UTC m=+0.059484156 container remove e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.449 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2af29e7d-d4a5-401e-adab-b210ce2a3e10]: (4, ('Thu Oct  2 12:56:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b)\ne6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b\nThu Oct  2 12:56:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d (e6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b)\ne6bb264484e4bee301f6a5de6a355b7acd287a61fd63758c4f454410c31ab98b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.451 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfb3961-46d4-448a-bb5e-3b52a81ab42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.452 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f30d280-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 kernel: tap5f30d280-50: left promiscuous mode
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.473 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1318b9-69a3-4bd6-a48d-0c67552c3aeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.505 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b355aa-c128-4b0a-90b9-6f6ca8d714d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.507 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[807dd67a-b1ab-4d1e-8228-fa40b0d7a0f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[62d9dc27-688b-4fbb-9366-187cce81da25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763243, 'reachable_time': 37745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291407, 'error': None, 'target': 'ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.524 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.524 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5f30d280-519f-404a-a0ab-55bcf121986d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.524 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:44.524 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfa10b2-e880-4b95-a9ec-753aa949b8ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG oslo_concurrency.lockutils [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:44 compute-1 systemd[1]: run-netns-ovnmeta\x2d5f30d280\x2d519f\x2d404a\x2da0ab\x2d55bcf121986d.mount: Deactivated successfully.
Oct 02 12:56:44 compute-1 nova_compute[230518]: 2025-10-02 12:56:44.525 2 DEBUG nova.compute.manager [req-c110bf98-1393-4f37-b8e5-4dcc3ff7adbb req-b89f38c5-0dee-429e-a9c8-2fce2c2c5078 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-unplugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:56:44 compute-1 sudo[291393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:44 compute-1 sudo[291393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-1 sudo[291393]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-1 sudo[291419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:56:44 compute-1 sudo[291419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-1 sudo[291419]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-1 sudo[291444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:44 compute-1 sudo[291444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-1 sudo[291444]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:44 compute-1 sudo[291469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:56:44 compute-1 sudo[291469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:44.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1756742258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:45 compute-1 sudo[291469]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.188 2 DEBUG nova.network.neutron [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Activated binding for port 6237bc28-d790-4861-976b-cda2e8dc93a9 and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.189 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.virt.libvirt.vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1633674489',display_name='tempest-TestNetworkAdvancedServerOps-server-1633674489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1633674489',id=153,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNDGNPTCWHMGn2WmI5eqLh6YCpsFWhqzTd+bjnfQT4djkmca349UiY4uIU3+umB1w10tp651dyIkW2ibD5dk6ldf9xoeyNtwfhmhivNqkDC8s1WG5y+WB+iPGYUm0nb4Ew==',key_name='tempest-TestNetworkAdvancedServerOps-519769562',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:11Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-ksuo2zkj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:56:32Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=f6c0a66d-64f1-484a-ae4e-ece25fddf736,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converting VIF {"id": "6237bc28-d790-4861-976b-cda2e8dc93a9", "address": "fa:16:3e:f2:c9:23", "network": {"id": "5f30d280-519f-404a-a0ab-55bcf121986d", "bridge": "br-int", "label": "tempest-network-smoke--2016610412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6237bc28-d7", "ovs_interfaceid": "6237bc28-d790-4861-976b-cda2e8dc93a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.190 2 DEBUG nova.network.os_vif_util [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.191 2 DEBUG os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.193 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6237bc28-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.197 2 INFO os_vif [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f2:c9:23,bridge_name='br-int',has_traffic_filtering=True,id=6237bc28-d790-4861-976b-cda2e8dc93a9,network=Network(5f30d280-519f-404a-a0ab-55bcf121986d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6237bc28-d7')
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.198 2 DEBUG nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.199 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deleting instance files /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del
Oct 02 12:56:45 compute-1 nova_compute[230518]: 2025-10-02 12:56:45.199 2 INFO nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Deletion of /var/lib/nova/instances/f6c0a66d-64f1-484a-ae4e-ece25fddf736_del complete
Oct 02 12:56:45 compute-1 ceph-mon[80926]: pgmap v2506: 305 pgs: 305 active+clean; 414 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 66 op/s
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1756742258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:56:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2894943659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:46.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.739 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.740 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.740 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.741 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.741 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.742 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.742 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.743 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.743 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.744 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.744 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.745 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.745 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.746 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.746 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.747 2 DEBUG oslo_concurrency.lockutils [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.747 2 DEBUG nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] No waiting events found dispatching network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:46 compute-1 nova_compute[230518]: 2025-10-02 12:56:46.748 2 WARNING nova.compute.manager [req-2fbdf3f2-67b1-4266-b036-1d4a1cf0cf32 req-5f6e5e9a-8a59-4b17-aa8e-e983ece22f2c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Received unexpected event network-vif-plugged-6237bc28-d790-4861-976b-cda2e8dc93a9 for instance with vm_state active and task_state migrating.
Oct 02 12:56:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:46.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:47 compute-1 ceph-mon[80926]: pgmap v2507: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Oct 02 12:56:47 compute-1 nova_compute[230518]: 2025-10-02 12:56:47.990 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:47 compute-1 nova_compute[230518]: 2025-10-02 12:56:47.991 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.008 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.079 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.079 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.088 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.089 2 INFO nova.compute.claims [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.192 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2370856319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.598 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.603 2 DEBUG nova.compute.provider_tree [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.619 2 DEBUG nova.scheduler.client.report [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.650 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.651 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.697 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.697 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.720 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.737 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:56:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2370856319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.829 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.831 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.831 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating image(s)
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.873 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:48.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.903 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.938 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:48 compute-1 nova_compute[230518]: 2025-10-02 12:56:48.943 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.003 2 DEBUG nova.policy [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dfe96a8fa48c4243b6262a0359f5b208', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.020 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.021 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.055 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.060 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.520 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.595 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] resizing rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.628 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Successfully created port: 1ba1c69e-5414-4423-aa38-aea149f6f506 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.715 2 DEBUG nova.objects.instance [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.731 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.732 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Ensure instance console log exists: /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.732 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.733 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:49 compute-1 nova_compute[230518]: 2025-10-02 12:56:49.733 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:49 compute-1 ceph-mon[80926]: pgmap v2508: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.183 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.184 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.184 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "f6c0a66d-64f1-484a-ae4e-ece25fddf736-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.205 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.206 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:50.349 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:50.350 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.579 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Successfully updated port: 1ba1c69e-5414-4423-aa38-aea149f6f506 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.594 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.594 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.595 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1101605346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.632 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.659 2 DEBUG nova.compute.manager [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-changed-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.660 2 DEBUG nova.compute.manager [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Refreshing instance network info cache due to event network-changed-1ba1c69e-5414-4423-aa38-aea149f6f506. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.661 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:50.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1101605346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.831 2 WARNING nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.833 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4317MB free_disk=20.851459503173828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.833 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.834 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.876 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Migration for instance f6c0a66d-64f1-484a-ae4e-ece25fddf736 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.897 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Oct 02 12:56:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.917 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Migration 445166e1-a17f-4261-a4e4-29df5beb080c is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.918 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.918 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.919 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:56:50 compute-1 nova_compute[230518]: 2025-10-02 12:56:50.980 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.292 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:56:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:56:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1856637000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.406 2 DEBUG oslo_concurrency.processutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.411 2 DEBUG nova.compute.provider_tree [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.429 2 DEBUG nova.scheduler.client.report [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.455 2 DEBUG nova.compute.resource_tracker [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.455 2 DEBUG oslo_concurrency.lockutils [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.460 2 INFO nova.compute.manager [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Migrating instance to compute-0.ctlplane.example.com finished successfully.
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.561 2 INFO nova.scheduler.client.report [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] Deleted allocation for migration 445166e1-a17f-4261-a4e4-29df5beb080c
Oct 02 12:56:51 compute-1 nova_compute[230518]: 2025-10-02 12:56:51.561 2 DEBUG nova.virt.libvirt.driver [None req-ded258b2-8f7b-43f3-bc5d-37f93e6a4602 adfff0af369747f4ba1424297bded55f 1acc020862eb4ef284d7cdddf3916b77 - - default default] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Oct 02 12:56:51 compute-1 sudo[291759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:56:51 compute-1 sudo[291759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:51 compute-1 sudo[291759]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:51 compute-1 sudo[291784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:56:51 compute-1 sudo[291784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:56:51 compute-1 sudo[291784]: pam_unix(sudo:session): session closed for user root
Oct 02 12:56:52 compute-1 ceph-mon[80926]: pgmap v2509: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Oct 02 12:56:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1856637000' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:56:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:52 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.495 2 DEBUG nova.network.neutron [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.579 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.580 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance network_info: |[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.581 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.581 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Refreshing network info cache for port 1ba1c69e-5414-4423-aa38-aea149f6f506 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.585 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start _get_guest_xml network_info=[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.590 2 WARNING nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.594 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.595 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.598 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.599 2 DEBUG nova.virt.libvirt.host [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.600 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.601 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.601 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.602 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.603 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.604 2 DEBUG nova.virt.hardware [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:56:52 compute-1 nova_compute[230518]: 2025-10-02 12:56:52.608 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:52.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/233049042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:53 compute-1 ceph-mon[80926]: pgmap v2510: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 139 op/s
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.113 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.148 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.152 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:56:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1654015826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.569 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.571 2 DEBUG nova.virt.libvirt.vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-pro
ject-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:48Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.571 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.572 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.573 2 DEBUG nova.objects.instance [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.595 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <uuid>c793e384-4ddd-4531-b6fd-172ee2fcbd4d</uuid>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <name>instance-0000009c</name>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerRescueTestJSON-server-501698818</nova:name>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:56:52</nova:creationTime>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <nova:port uuid="1ba1c69e-5414-4423-aa38-aea149f6f506">
Oct 02 12:56:53 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <system>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="serial">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="uuid">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </system>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <os>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </os>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <features>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </features>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk">
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </source>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config">
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </source>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:56:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:27:9d:59"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <target dev="tap1ba1c69e-54"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log" append="off"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <video>
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </video>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:56:53 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:56:53 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:56:53 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:56:53 compute-1 nova_compute[230518]: </domain>
Oct 02 12:56:53 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Preparing to wait for external event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.597 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.598 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.598 2 DEBUG nova.virt.libvirt.vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791
200975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:48Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.599 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.599 2 DEBUG nova.network.os_vif_util [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.600 2 DEBUG os_vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ba1c69e-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.605 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ba1c69e-54, col_values=(('external_ids', {'iface-id': '1ba1c69e-5414-4423-aa38-aea149f6f506', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:9d:59', 'vm-uuid': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:53 compute-1 NetworkManager[44960]: <info>  [1759409813.6084] manager: (tap1ba1c69e-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.614 2 INFO os_vif [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54')
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.772 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.772 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.773 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:27:9d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.773 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Using config drive
Oct 02 12:56:53 compute-1 nova_compute[230518]: 2025-10-02 12:56:53.795 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/233049042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1654015826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.469 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating config drive at /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.473 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoape_iy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.605 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmoape_iy" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.638 2 DEBUG nova.storage.rbd_utils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.643 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:56:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:54.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.816 2 DEBUG oslo_concurrency.processutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.817 2 INFO nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting local config drive /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config because it was imported into RBD.
Oct 02 12:56:54 compute-1 kernel: tap1ba1c69e-54: entered promiscuous mode
Oct 02 12:56:54 compute-1 NetworkManager[44960]: <info>  [1759409814.8938] manager: (tap1ba1c69e-54): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Oct 02 12:56:54 compute-1 ovn_controller[129257]: 2025-10-02T12:56:54Z|00655|binding|INFO|Claiming lport 1ba1c69e-5414-4423-aa38-aea149f6f506 for this chassis.
Oct 02 12:56:54 compute-1 ovn_controller[129257]: 2025-10-02T12:56:54Z|00656|binding|INFO|1ba1c69e-5414-4423-aa38-aea149f6f506: Claiming fa:16:3e:27:9d:59 10.100.0.3
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.903 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:56:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.904 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis
Oct 02 12:56:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.906 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:56:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:56:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:54.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:56:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:56:54.908 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40806447-26e6-41d8-9ba6-d2424055d3c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-1 ovn_controller[129257]: 2025-10-02T12:56:54Z|00657|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 ovn-installed in OVS
Oct 02 12:56:54 compute-1 ovn_controller[129257]: 2025-10-02T12:56:54Z|00658|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 up in Southbound
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:54 compute-1 systemd-udevd[291943]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:56:54 compute-1 systemd-machined[188247]: New machine qemu-76-instance-0000009c.
Oct 02 12:56:54 compute-1 NetworkManager[44960]: <info>  [1759409814.9755] device (tap1ba1c69e-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:56:54 compute-1 NetworkManager[44960]: <info>  [1759409814.9768] device (tap1ba1c69e-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:56:54 compute-1 systemd[1]: Started Virtual Machine qemu-76-instance-0000009c.
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.997 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updated VIF entry in instance network info cache for port 1ba1c69e-5414-4423-aa38-aea149f6f506. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:56:54 compute-1 nova_compute[230518]: 2025-10-02 12:56:54.998 2 DEBUG nova.network.neutron [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.024 2 DEBUG oslo_concurrency.lockutils [req-43a8a83b-cbec-4ed2-9d0b-4cc6d0c47401 req-c5289881-213f-4b9c-b496-aa181fb4c258 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.090 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 12:56:55 compute-1 ceph-mon[80926]: pgmap v2511: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.997 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409815.997618, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:55 compute-1 nova_compute[230518]: 2025-10-02 12:56:55.998 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Started (Lifecycle Event)
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.034 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.038 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409816.0021856, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.039 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Paused (Lifecycle Event)
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.057 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.060 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.076 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:56.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:56.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.956 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.957 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Processing event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.958 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG oslo_concurrency.lockutils [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.959 2 DEBUG nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.959 2 WARNING nova.compute.manager [req-06d255f6-26db-4eb6-906c-0737c0fafa02 req-1b9c02a9-0474-48f7-97ee-5e02d93d0989 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state building and task_state spawning.
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.960 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.964 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409816.9646475, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.965 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Resumed (Lifecycle Event)
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.967 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.970 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance spawned successfully.
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.971 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:56:56 compute-1 nova_compute[230518]: 2025-10-02 12:56:56.993 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.002 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.010 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.011 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.011 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.012 2 DEBUG nova.virt.libvirt.driver [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.042 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.072 2 INFO nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 8.24 seconds to spawn the instance on the hypervisor.
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.073 2 DEBUG nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.134 2 INFO nova.compute.manager [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 9.08 seconds to build instance.
Oct 02 12:56:57 compute-1 nova_compute[230518]: 2025-10-02 12:56:57.149 2 DEBUG oslo_concurrency.lockutils [None req-598080ca-eb7e-4930-a2ef-1d4d36e27cbc dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:56:57 compute-1 ceph-mon[80926]: pgmap v2512: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct 02 12:56:58 compute-1 nova_compute[230518]: 2025-10-02 12:56:58.496 2 INFO nova.compute.manager [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Rescuing
Oct 02 12:56:58 compute-1 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:56:58 compute-1 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:56:58 compute-1 nova_compute[230518]: 2025-10-02 12:56:58.497 2 DEBUG nova.network.neutron [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:56:58 compute-1 nova_compute[230518]: 2025-10-02 12:56:58.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:56:58.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:56:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:56:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:56:58.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:56:59 compute-1 nova_compute[230518]: 2025-10-02 12:56:59.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:56:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:56:59 compute-1 nova_compute[230518]: 2025-10-02 12:56:59.435 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409804.4339073, f6c0a66d-64f1-484a-ae4e-ece25fddf736 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:56:59 compute-1 nova_compute[230518]: 2025-10-02 12:56:59.436 2 INFO nova.compute.manager [-] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] VM Stopped (Lifecycle Event)
Oct 02 12:56:59 compute-1 nova_compute[230518]: 2025-10-02 12:56:59.457 2 DEBUG nova.compute.manager [None req-90a00622-e2a1-4949-8013-8557f45bb606 - - - - - -] [instance: f6c0a66d-64f1-484a-ae4e-ece25fddf736] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:56:59 compute-1 ceph-mon[80926]: pgmap v2513: 305 pgs: 305 active+clean; 423 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.738451) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819738503, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 503, "num_deletes": 251, "total_data_size": 648538, "memory_usage": 659192, "flush_reason": "Manual Compaction"}
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819782448, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 427620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59937, "largest_seqno": 60435, "table_properties": {"data_size": 424910, "index_size": 746, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6701, "raw_average_key_size": 19, "raw_value_size": 419428, "raw_average_value_size": 1205, "num_data_blocks": 32, "num_entries": 348, "num_filter_entries": 348, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409798, "oldest_key_time": 1759409798, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 44049 microseconds, and 2310 cpu microseconds.
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.782502) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 427620 bytes OK
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.782525) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871566) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871621) EVENT_LOG_v1 {"time_micros": 1759409819871608, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.871647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 645522, prev total WAL file size 645522, number of live WAL files 2.
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.872410) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(417KB)], [117(13MB)]
Oct 02 12:56:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409819872478, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 14226410, "oldest_snapshot_seqno": -1}
Oct 02 12:56:59 compute-1 nova_compute[230518]: 2025-10-02 12:56:59.995 2 DEBUG nova.network.neutron [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:00 compute-1 nova_compute[230518]: 2025-10-02 12:57:00.027 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 8397 keys, 12292727 bytes, temperature: kUnknown
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820063008, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 12292727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12236246, "index_size": 34353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 218669, "raw_average_key_size": 26, "raw_value_size": 12086533, "raw_average_value_size": 1439, "num_data_blocks": 1340, "num_entries": 8397, "num_filter_entries": 8397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759409819, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.063323) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12292727 bytes
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.073791) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 64.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(62.0) write-amplify(28.7) OK, records in: 8911, records dropped: 514 output_compression: NoCompression
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.073830) EVENT_LOG_v1 {"time_micros": 1759409820073813, "job": 74, "event": "compaction_finished", "compaction_time_micros": 190599, "compaction_time_cpu_micros": 34108, "output_level": 6, "num_output_files": 1, "total_output_size": 12292727, "num_input_records": 8911, "num_output_records": 8397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820074124, "job": 74, "event": "table_file_deletion", "file_number": 119}
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759409820077105, "job": 74, "event": "table_file_deletion", "file_number": 117}
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:56:59.872273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-12:57:00.077164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 12:57:00 compute-1 nova_compute[230518]: 2025-10-02 12:57:00.303 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 12:57:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:00.353 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3554919007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:00.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:00.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:00 compute-1 nova_compute[230518]: 2025-10-02 12:57:00.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:01 compute-1 nova_compute[230518]: 2025-10-02 12:57:01.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:01 compute-1 ceph-mon[80926]: pgmap v2514: 305 pgs: 305 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct 02 12:57:01 compute-1 podman[291997]: 2025-10-02 12:57:01.821381435 +0000 UTC m=+0.061202551 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:57:01 compute-1 podman[291996]: 2025-10-02 12:57:01.851228912 +0000 UTC m=+0.091397149 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 12:57:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:02.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:03 compute-1 nova_compute[230518]: 2025-10-02 12:57:03.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:03 compute-1 ceph-mon[80926]: pgmap v2515: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 235 op/s
Oct 02 12:57:04 compute-1 nova_compute[230518]: 2025-10-02 12:57:04.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:04.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:05 compute-1 ceph-mon[80926]: pgmap v2516: 305 pgs: 305 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Oct 02 12:57:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:57:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/548337609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:57:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:06.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:06.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:07 compute-1 ceph-mon[80926]: pgmap v2517: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 160 op/s
Oct 02 12:57:08 compute-1 nova_compute[230518]: 2025-10-02 12:57:08.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:08.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:08.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.091 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:09 compute-1 ceph-mon[80926]: pgmap v2518: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Oct 02 12:57:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4051056186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/432920824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.729 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:57:09 compute-1 nova_compute[230518]: 2025-10-02 12:57:09.729 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4262556935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.151 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.276 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.277 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.354 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.434 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.435 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4144MB free_disk=20.855342864990234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.435 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.436 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.536 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:57:10 compute-1 nova_compute[230518]: 2025-10-02 12:57:10.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:10.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4262556935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:10.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:57:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2546884810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:11 compute-1 nova_compute[230518]: 2025-10-02 12:57:11.004 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:11 compute-1 nova_compute[230518]: 2025-10-02 12:57:11.009 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:57:11 compute-1 nova_compute[230518]: 2025-10-02 12:57:11.040 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:57:11 compute-1 nova_compute[230518]: 2025-10-02 12:57:11.076 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:57:11 compute-1 nova_compute[230518]: 2025-10-02 12:57:11.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:11 compute-1 ceph-mon[80926]: pgmap v2519: 305 pgs: 305 active+clean; 469 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 185 op/s
Oct 02 12:57:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2546884810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:12.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:12 compute-1 podman[292089]: 2025-10-02 12:57:12.823306597 +0000 UTC m=+0.072946630 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:57:12 compute-1 podman[292090]: 2025-10-02 12:57:12.833890949 +0000 UTC m=+0.074918401 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd)
Oct 02 12:57:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:12.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:13 compute-1 ceph-mon[80926]: pgmap v2520: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.0 MiB/s wr, 176 op/s
Oct 02 12:57:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1790729380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:13 compute-1 nova_compute[230518]: 2025-10-02 12:57:13.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:14 compute-1 nova_compute[230518]: 2025-10-02 12:57:14.035 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:14 compute-1 nova_compute[230518]: 2025-10-02 12:57:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:14 compute-1 nova_compute[230518]: 2025-10-02 12:57:14.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:14.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1667620999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4205440800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2894204180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:14.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:15 compute-1 nova_compute[230518]: 2025-10-02 12:57:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:15 compute-1 nova_compute[230518]: 2025-10-02 12:57:15.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:57:15 compute-1 ceph-mon[80926]: pgmap v2521: 305 pgs: 305 active+clean; 486 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 3.5 MiB/s wr, 97 op/s
Oct 02 12:57:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3880338796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3993617063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:16 compute-1 nova_compute[230518]: 2025-10-02 12:57:16.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:16 compute-1 nova_compute[230518]: 2025-10-02 12:57:16.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:16 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 02 12:57:16 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 02 12:57:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:16.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:16.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:17 compute-1 ceph-mon[80926]: pgmap v2522: 305 pgs: 305 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:17 compute-1 kernel: tap1ba1c69e-54 (unregistering): left promiscuous mode
Oct 02 12:57:17 compute-1 NetworkManager[44960]: <info>  [1759409837.3578] device (tap1ba1c69e-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:17 compute-1 ovn_controller[129257]: 2025-10-02T12:57:17Z|00659|binding|INFO|Releasing lport 1ba1c69e-5414-4423-aa38-aea149f6f506 from this chassis (sb_readonly=0)
Oct 02 12:57:17 compute-1 ovn_controller[129257]: 2025-10-02T12:57:17Z|00660|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 down in Southbound
Oct 02 12:57:17 compute-1 ovn_controller[129257]: 2025-10-02T12:57:17Z|00661|binding|INFO|Removing iface tap1ba1c69e-54 ovn-installed in OVS
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:17 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 02 12:57:17 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct 02 12:57:17 compute-1 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000009c.scope: Consumed 14.133s CPU time.
Oct 02 12:57:17 compute-1 systemd-machined[188247]: Machine qemu-76-instance-0000009c terminated.
Oct 02 12:57:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.457 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.458 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis
Oct 02 12:57:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.459 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:57:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:17.461 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[468bb252-a109-4523-a0d0-ba3dce55f5d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.607 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance shutdown successfully after 17 seconds.
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.614 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.614 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'numa_topology' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.675 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Attempting rescue
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.676 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.679 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.680 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating image(s)
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.705 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.708 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'trusted_certs' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.803 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.834 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.838 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:17 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.876 2 DEBUG nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.876 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG oslo_concurrency.lockutils [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.877 2 DEBUG nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.877 2 WARNING nova.compute.manager [req-dd723f86-88a2-4b08-a3c3-a89690f65653 req-7740d2b6-f687-4562-81d8-8e5971f2b9d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.919 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.920 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.921 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.921 2 DEBUG oslo_concurrency.lockutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.946 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:17 compute-1 nova_compute[230518]: 2025-10-02 12:57:17.950 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:18 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 02 12:57:18 compute-1 nova_compute[230518]: 2025-10-02 12:57:18.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:18 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct 02 12:57:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:18.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.488 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.489 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'migration_context' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.521 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.522 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start _get_guest_xml network_info=[{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.522 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.565 2 WARNING nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.574 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.574 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.578 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.578 2 DEBUG nova.virt.libvirt.host [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.579 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.580 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.581 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.virt.hardware [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.582 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'vcpu_model' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.611 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:19 compute-1 ceph-mon[80926]: pgmap v2523: 305 pgs: 305 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.975 2 DEBUG nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.976 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG oslo_concurrency.lockutils [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.977 2 DEBUG nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:19 compute-1 nova_compute[230518]: 2025-10-02 12:57:19.978 2 WARNING nova.compute.manager [req-56c8e122-5259-443b-b65d-27ac42027ed7 req-f2989e73-d74b-403b-ab17-bef6d5ad6202 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.
Oct 02 12:57:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1416982405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-1 nova_compute[230518]: 2025-10-02 12:57:20.074 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:20 compute-1 nova_compute[230518]: 2025-10-02 12:57:20.076 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/12721875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-1 nova_compute[230518]: 2025-10-02 12:57:20.556 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:20 compute-1 nova_compute[230518]: 2025-10-02 12:57:20.557 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1416982405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/12721875' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:20.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3284991121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.002 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.004 2 DEBUG nova.virt.libvirt.vif [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:56:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1'
,image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:56:57Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.004 2 DEBUG nova.network.os_vif_util [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1859173317-network", "vif_mac": "fa:16:3e:27:9d:59"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.005 2 DEBUG nova.network.os_vif_util [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.006 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'pci_devices' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.029 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <uuid>c793e384-4ddd-4531-b6fd-172ee2fcbd4d</uuid>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <name>instance-0000009c</name>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:name>tempest-ServerRescueTestJSON-server-501698818</nova:name>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:57:19</nova:creationTime>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:user uuid="dfe96a8fa48c4243b6262a0359f5b208">tempest-ServerRescueTestJSON-791200975-project-member</nova:user>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:project uuid="d8f55f9d9ed144629bd9a03edb020c4f">tempest-ServerRescueTestJSON-791200975</nova:project>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <nova:port uuid="1ba1c69e-5414-4423-aa38-aea149f6f506">
Oct 02 12:57:21 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <system>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="serial">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="uuid">c793e384-4ddd-4531-b6fd-172ee2fcbd4d</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </system>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <os>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </os>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <features>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </features>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.rescue">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <target dev="vdb" bus="virtio"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </source>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:57:21 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:27:9d:59"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <target dev="tap1ba1c69e-54"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/console.log" append="off"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <video>
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </video>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:57:21 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:57:21 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:57:21 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:57:21 compute-1 nova_compute[230518]: </domain>
Oct 02 12:57:21 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.036 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.117 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.118 2 DEBUG nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] No VIF found with MAC fa:16:3e:27:9d:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.119 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Using config drive
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.146 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.197 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'ec2_ids' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.237 2 DEBUG nova.objects.instance [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'keypairs' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:21 compute-1 ceph-mon[80926]: pgmap v2524: 305 pgs: 305 active+clean; 531 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 200 op/s
Oct 02 12:57:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3284991121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.839 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Creating config drive at /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.844 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc5iuqij execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:21 compute-1 nova_compute[230518]: 2025-10-02 12:57:21.980 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuc5iuqij" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.012 2 DEBUG nova.storage.rbd_utils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] rbd image c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.017 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.465 2 DEBUG oslo_concurrency.processutils [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue c793e384-4ddd-4531-b6fd-172ee2fcbd4d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.467 2 INFO nova.virt.libvirt.driver [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting local config drive /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d/disk.config.rescue because it was imported into RBD.
Oct 02 12:57:22 compute-1 kernel: tap1ba1c69e-54: entered promiscuous mode
Oct 02 12:57:22 compute-1 NetworkManager[44960]: <info>  [1759409842.5443] manager: (tap1ba1c69e-54): new Tun device (/org/freedesktop/NetworkManager/Devices/310)
Oct 02 12:57:22 compute-1 ovn_controller[129257]: 2025-10-02T12:57:22Z|00662|binding|INFO|Claiming lport 1ba1c69e-5414-4423-aa38-aea149f6f506 for this chassis.
Oct 02 12:57:22 compute-1 ovn_controller[129257]: 2025-10-02T12:57:22Z|00663|binding|INFO|1ba1c69e-5414-4423-aa38-aea149f6f506: Claiming fa:16:3e:27:9d:59 10.100.0.3
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.560 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.561 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 bound to our chassis
Oct 02 12:57:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.562 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:57:22 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:22.563 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b53bb1-7651-413e-9853-65f545cd6ce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:22 compute-1 ovn_controller[129257]: 2025-10-02T12:57:22Z|00664|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 ovn-installed in OVS
Oct 02 12:57:22 compute-1 ovn_controller[129257]: 2025-10-02T12:57:22Z|00665|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 up in Southbound
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:22 compute-1 nova_compute[230518]: 2025-10-02 12:57:22.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:22 compute-1 systemd-machined[188247]: New machine qemu-77-instance-0000009c.
Oct 02 12:57:22 compute-1 systemd[1]: Started Virtual Machine qemu-77-instance-0000009c.
Oct 02 12:57:22 compute-1 systemd-udevd[292380]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:57:22 compute-1 NetworkManager[44960]: <info>  [1759409842.6251] device (tap1ba1c69e-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:57:22 compute-1 NetworkManager[44960]: <info>  [1759409842.6264] device (tap1ba1c69e-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:57:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:22.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:22 compute-1 ceph-mon[80926]: pgmap v2525: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.5 MiB/s wr, 346 op/s
Oct 02 12:57:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:22.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.143 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.144 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.144 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.361 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.362 2 DEBUG oslo_concurrency.lockutils [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.362 2 DEBUG nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.362 2 WARNING nova.compute.manager [req-c8a12578-ef72-42bb-96eb-63b5e30faf07 req-ba822529-b2be-47bc-b1e0-6004ef7b9abe 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state active and task_state rescuing.
Oct 02 12:57:23 compute-1 nova_compute[230518]: 2025-10-02 12:57:23.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.293 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c793e384-4ddd-4531-b6fd-172ee2fcbd4d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.293 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409844.2926095, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.293 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Resumed (Lifecycle Event)
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.297 2 DEBUG nova.compute.manager [None req-06d60b37-4465-4fcc-9b26-7c9576690243 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.334 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.337 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.372 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] During sync_power_state the instance has a pending task (rescuing). Skip.
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.372 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409844.2936769, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.372 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Started (Lifecycle Event)
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.403 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:57:24 compute-1 nova_compute[230518]: 2025-10-02 12:57:24.407 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:57:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:24.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:25 compute-1 ceph-mon[80926]: pgmap v2526: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 314 op/s
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.372 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [{"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG oslo_concurrency.lockutils [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.730 2 DEBUG nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.731 2 WARNING nova.compute.manager [req-0c1d7f07-11ad-494d-a7b2-173d855ef61d req-1de4b7fd-bb84-4715-8221-e5bc9c237080 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state rescued and task_state None.
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.770 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-c793e384-4ddd-4531-b6fd-172ee2fcbd4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:57:25 compute-1 nova_compute[230518]: 2025-10-02 12:57:25.770 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:57:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:25.954 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:57:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:27 compute-1 ceph-mon[80926]: pgmap v2527: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 360 op/s
Oct 02 12:57:28 compute-1 nova_compute[230518]: 2025-10-02 12:57:28.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:28.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:29 compute-1 nova_compute[230518]: 2025-10-02 12:57:29.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:29 compute-1 ceph-mon[80926]: pgmap v2528: 305 pgs: 305 active+clean; 554 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.7 MiB/s wr, 466 op/s
Oct 02 12:57:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:31 compute-1 ceph-mon[80926]: pgmap v2529: 305 pgs: 305 active+clean; 570 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 490 op/s
Oct 02 12:57:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4181327579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/827417965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:32 compute-1 podman[292451]: 2025-10-02 12:57:32.806391706 +0000 UTC m=+0.054823361 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 12:57:32 compute-1 podman[292450]: 2025-10-02 12:57:32.849024723 +0000 UTC m=+0.097460139 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 12:57:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2260972818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:33 compute-1 nova_compute[230518]: 2025-10-02 12:57:33.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:34 compute-1 ceph-mon[80926]: pgmap v2530: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.8 MiB/s wr, 463 op/s
Oct 02 12:57:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 e347: 3 total, 3 up, 3 in
Oct 02 12:57:34 compute-1 nova_compute[230518]: 2025-10-02 12:57:34.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:35 compute-1 ceph-mon[80926]: pgmap v2531: 305 pgs: 305 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 304 op/s
Oct 02 12:57:35 compute-1 ceph-mon[80926]: osdmap e347: 3 total, 3 up, 3 in
Oct 02 12:57:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:57:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591168438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3688898913' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1591168438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:37 compute-1 ceph-mon[80926]: pgmap v2533: 305 pgs: 305 active+clean; 633 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 376 op/s
Oct 02 12:57:38 compute-1 nova_compute[230518]: 2025-10-02 12:57:38.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2010598752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:38.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:39 compute-1 nova_compute[230518]: 2025-10-02 12:57:39.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:39 compute-1 ceph-mon[80926]: pgmap v2534: 305 pgs: 305 active+clean; 598 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 262 op/s
Oct 02 12:57:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2834199112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2702702585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:57:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:40.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:41 compute-1 ceph-mon[80926]: pgmap v2535: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 748 KiB/s rd, 4.3 MiB/s wr, 226 op/s
Oct 02 12:57:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e348 e348: 3 total, 3 up, 3 in
Oct 02 12:57:43 compute-1 ceph-mon[80926]: pgmap v2536: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 275 op/s
Oct 02 12:57:43 compute-1 ceph-mon[80926]: osdmap e348: 3 total, 3 up, 3 in
Oct 02 12:57:43 compute-1 nova_compute[230518]: 2025-10-02 12:57:43.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:43 compute-1 podman[292496]: 2025-10-02 12:57:43.802983362 +0000 UTC m=+0.052424960 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 12:57:43 compute-1 podman[292495]: 2025-10-02 12:57:43.820537057 +0000 UTC m=+0.068610752 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 12:57:44 compute-1 nova_compute[230518]: 2025-10-02 12:57:44.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:45 compute-1 ceph-mon[80926]: pgmap v2538: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 281 op/s
Oct 02 12:57:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3758325169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e349 e349: 3 total, 3 up, 3 in
Oct 02 12:57:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:57:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:57:47 compute-1 ceph-mon[80926]: pgmap v2539: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.7 MiB/s wr, 316 op/s
Oct 02 12:57:47 compute-1 ceph-mon[80926]: osdmap e349: 3 total, 3 up, 3 in
Oct 02 12:57:48 compute-1 nova_compute[230518]: 2025-10-02 12:57:48.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:48.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:49 compute-1 nova_compute[230518]: 2025-10-02 12:57:49.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:49 compute-1 ceph-mon[80926]: pgmap v2541: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 546 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 57 KiB/s wr, 257 op/s
Oct 02 12:57:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1249706266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:51 compute-1 ceph-mon[80926]: pgmap v2542: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 496 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 38 KiB/s wr, 200 op/s
Oct 02 12:57:52 compute-1 sudo[292534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:52 compute-1 sudo[292534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-1 sudo[292534]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-1 sudo[292559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:57:52 compute-1 sudo[292559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-1 sudo[292559]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-1 sudo[292584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:57:52 compute-1 sudo[292584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-1 sudo[292584]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-1 sudo[292609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:57:52 compute-1 sudo[292609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:57:52 compute-1 nova_compute[230518]: 2025-10-02 12:57:52.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:52.736 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:57:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:52.738 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:57:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:52.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:52 compute-1 sudo[292609]: pam_unix(sudo:session): session closed for user root
Oct 02 12:57:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:57:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:57:53 compute-1 ceph-mon[80926]: pgmap v2543: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 807 KiB/s wr, 217 op/s
Oct 02 12:57:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 e350: 3 total, 3 up, 3 in
Oct 02 12:57:53 compute-1 nova_compute[230518]: 2025-10-02 12:57:53.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:54 compute-1 nova_compute[230518]: 2025-10-02 12:57:54.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: osdmap e350: 3 total, 3 up, 3 in
Oct 02 12:57:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3185490209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:57:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:57:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:57:54.740 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:57:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:55 compute-1 ceph-mon[80926]: pgmap v2545: 305 pgs: 305 active+clean; 428 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 862 KiB/s wr, 107 op/s
Oct 02 12:57:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:56 compute-1 ceph-mon[80926]: pgmap v2546: 305 pgs: 305 active+clean; 438 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 618 KiB/s rd, 2.6 MiB/s wr, 131 op/s
Oct 02 12:57:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:56.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:58 compute-1 nova_compute[230518]: 2025-10-02 12:57:58.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:57:58.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:57:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:57:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:57:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:57:59 compute-1 ceph-mon[80926]: pgmap v2547: 305 pgs: 305 active+clean; 483 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 5.5 MiB/s wr, 161 op/s
Oct 02 12:57:59 compute-1 nova_compute[230518]: 2025-10-02 12:57:59.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:57:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4123427180' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2241388146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:00.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:01 compute-1 ceph-mon[80926]: pgmap v2548: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 788 KiB/s rd, 7.2 MiB/s wr, 204 op/s
Oct 02 12:58:02 compute-1 sudo[292667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:58:02 compute-1 sudo[292667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:02 compute-1 sudo[292667]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:02 compute-1 sudo[292692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:58:02 compute-1 sudo[292692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:58:02 compute-1 sudo[292692]: pam_unix(sudo:session): session closed for user root
Oct 02 12:58:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:02.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:03 compute-1 ceph-mon[80926]: pgmap v2549: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 766 KiB/s rd, 6.6 MiB/s wr, 182 op/s
Oct 02 12:58:03 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:58:03 compute-1 nova_compute[230518]: 2025-10-02 12:58:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:03 compute-1 podman[292718]: 2025-10-02 12:58:03.815034889 +0000 UTC m=+0.056991252 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 12:58:03 compute-1 podman[292717]: 2025-10-02 12:58:03.837182578 +0000 UTC m=+0.084634581 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 12:58:04 compute-1 nova_compute[230518]: 2025-10-02 12:58:04.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:58:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:58:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 12:58:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:58:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 12:58:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:58:05 compute-1 ceph-mon[80926]: pgmap v2550: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 739 KiB/s rd, 6.3 MiB/s wr, 176 op/s
Oct 02 12:58:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:58:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2688330386' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:58:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:07 compute-1 ceph-mon[80926]: pgmap v2551: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.5 MiB/s wr, 189 op/s
Oct 02 12:58:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/161162007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:08 compute-1 nova_compute[230518]: 2025-10-02 12:58:08.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:08.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:09 compute-1 nova_compute[230518]: 2025-10-02 12:58:09.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:09 compute-1 ceph-mon[80926]: pgmap v2552: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 194 op/s
Oct 02 12:58:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/913417872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/931723888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4122840404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.520 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.590 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.591 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.591 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-0000009c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.720 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4106MB free_disk=20.7608642578125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.721 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.888 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:58:11 compute-1 nova_compute[230518]: 2025-10-02 12:58:11.889 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:58:11 compute-1 ceph-mon[80926]: pgmap v2553: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Oct 02 12:58:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4122840404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.195 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1559841655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.614 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.631 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.667 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:58:12 compute-1 nova_compute[230518]: 2025-10-02 12:58:12.668 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:12.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:12 compute-1 ceph-mon[80926]: pgmap v2554: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 105 op/s
Oct 02 12:58:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1559841655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:58:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:58:13 compute-1 nova_compute[230518]: 2025-10-02 12:58:13.663 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:13 compute-1 nova_compute[230518]: 2025-10-02 12:58:13.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/815310072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3666288749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-1 nova_compute[230518]: 2025-10-02 12:58:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625827684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:14 compute-1 podman[292807]: 2025-10-02 12:58:14.802829124 +0000 UTC m=+0.051645666 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:58:14 compute-1 podman[292806]: 2025-10-02 12:58:14.806035764 +0000 UTC m=+0.058289483 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:15 compute-1 nova_compute[230518]: 2025-10-02 12:58:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:15 compute-1 ceph-mon[80926]: pgmap v2555: 305 pgs: 305 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Oct 02 12:58:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2625827684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-1 nova_compute[230518]: 2025-10-02 12:58:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:16 compute-1 nova_compute[230518]: 2025-10-02 12:58:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:16 compute-1 nova_compute[230518]: 2025-10-02 12:58:16.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:58:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3987287269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/273215587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:17 compute-1 nova_compute[230518]: 2025-10-02 12:58:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:17 compute-1 nova_compute[230518]: 2025-10-02 12:58:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:17 compute-1 ceph-mon[80926]: pgmap v2556: 305 pgs: 305 active+clean; 540 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 166 op/s
Oct 02 12:58:18 compute-1 nova_compute[230518]: 2025-10-02 12:58:18.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:19.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:19 compute-1 nova_compute[230518]: 2025-10-02 12:58:19.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:19 compute-1 ceph-mon[80926]: pgmap v2557: 305 pgs: 305 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Oct 02 12:58:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:20.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:21 compute-1 nova_compute[230518]: 2025-10-02 12:58:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:21 compute-1 ceph-mon[80926]: pgmap v2558: 305 pgs: 305 active+clean; 521 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Oct 02 12:58:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:22.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:23.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.797 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.798 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.800 2 INFO nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Terminating instance
Oct 02 12:58:23 compute-1 nova_compute[230518]: 2025-10-02 12:58:23.800 2 DEBUG nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:58:24 compute-1 ceph-mon[80926]: pgmap v2559: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 286 op/s
Oct 02 12:58:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2852028571' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1139249947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 12:58:24 compute-1 kernel: tap1ba1c69e-54 (unregistering): left promiscuous mode
Oct 02 12:58:24 compute-1 NetworkManager[44960]: <info>  [1759409904.2542] device (tap1ba1c69e-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 ovn_controller[129257]: 2025-10-02T12:58:24Z|00666|binding|INFO|Releasing lport 1ba1c69e-5414-4423-aa38-aea149f6f506 from this chassis (sb_readonly=0)
Oct 02 12:58:24 compute-1 ovn_controller[129257]: 2025-10-02T12:58:24Z|00667|binding|INFO|Setting lport 1ba1c69e-5414-4423-aa38-aea149f6f506 down in Southbound
Oct 02 12:58:24 compute-1 ovn_controller[129257]: 2025-10-02T12:58:24Z|00668|binding|INFO|Removing iface tap1ba1c69e-54 ovn-installed in OVS
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.266 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:9d:59 10.100.0.3'], port_security=['fa:16:3e:27:9d:59 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c793e384-4ddd-4531-b6fd-172ee2fcbd4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8f55f9d9ed144629bd9a03edb020c4f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a444c762-477f-4077-ba6b-7c28af4142c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7799346f-74e1-4324-b5da-a7c921979851, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1ba1c69e-5414-4423-aa38-aea149f6f506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.267 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1ba1c69e-5414-4423-aa38-aea149f6f506 in datapath a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 unbound from our chassis
Oct 02 12:58:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.268 138374 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 02 12:58:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:24.269 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf3eb9c8-7f1d-4dac-bc20-267f74184d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Oct 02 12:58:24 compute-1 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009c.scope: Consumed 16.007s CPU time.
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 systemd-machined[188247]: Machine qemu-77-instance-0000009c terminated.
Oct 02 12:58:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.438 2 INFO nova.virt.libvirt.driver [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Instance destroyed successfully.
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.439 2 DEBUG nova.objects.instance [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lazy-loading 'resources' on Instance uuid c793e384-4ddd-4531-b6fd-172ee2fcbd4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.463 2 DEBUG nova.virt.libvirt.vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:56:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-501698818',display_name='tempest-ServerRescueTestJSON-server-501698818',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-501698818',id=156,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:57:24Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8f55f9d9ed144629bd9a03edb020c4f',ramdisk_id='',reservation_id='r-qzbsd8d7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-791200975',owner_user_name='tempest-ServerRescueTestJSON-791200975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:57:24Z,user_data=None,user_id='dfe96a8fa48c4243b6262a0359f5b208',uuid=c793e384-4ddd-4531-b6fd-172ee2fcbd4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.463 2 DEBUG nova.network.os_vif_util [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converting VIF {"id": "1ba1c69e-5414-4423-aa38-aea149f6f506", "address": "fa:16:3e:27:9d:59", "network": {"id": "a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1859173317-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d8f55f9d9ed144629bd9a03edb020c4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ba1c69e-54", "ovs_interfaceid": "1ba1c69e-5414-4423-aa38-aea149f6f506", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.464 2 DEBUG nova.network.os_vif_util [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.464 2 DEBUG os_vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ba1c69e-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.476 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.476 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG oslo_concurrency.lockutils [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.478 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.479 2 DEBUG nova.compute.manager [req-3315c278-d5f0-4cae-89e4-3fae5adf3f9f req-22699a62-7b38-4344-91f6-1c82f68ca1e7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-unplugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:58:24 compute-1 nova_compute[230518]: 2025-10-02 12:58:24.480 2 INFO os_vif [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:9d:59,bridge_name='br-int',has_traffic_filtering=True,id=1ba1c69e-5414-4423-aa38-aea149f6f506,network=Network(a4d6cf2f-6d4e-47f1-b0fe-882ac4775b59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ba1c69e-54')
Oct 02 12:58:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:24.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:25 compute-1 ceph-mon[80926]: pgmap v2560: 305 pgs: 305 active+clean; 441 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.598 2 DEBUG nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG oslo_concurrency.lockutils [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.599 2 DEBUG nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] No waiting events found dispatching network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:26 compute-1 nova_compute[230518]: 2025-10-02 12:58:26.599 2 WARNING nova.compute.manager [req-394f9dfc-7249-4008-8ac1-4377debbb329 req-6445f42d-6e6b-4f8a-b3f9-219e97e5323d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received unexpected event network-vif-plugged-1ba1c69e-5414-4423-aa38-aea149f6f506 for instance with vm_state rescued and task_state deleting.
Oct 02 12:58:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:27.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:27 compute-1 ceph-mon[80926]: pgmap v2561: 305 pgs: 305 active+clean; 410 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 287 op/s
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.838 2 INFO nova.virt.libvirt.driver [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deleting instance files /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_del
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.839 2 INFO nova.virt.libvirt.driver [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deletion of /var/lib/nova/instances/c793e384-4ddd-4531-b6fd-172ee2fcbd4d_del complete
Oct 02 12:58:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:28.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.937 2 INFO nova.compute.manager [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 5.14 seconds to destroy the instance on the hypervisor.
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.937 2 DEBUG oslo.service.loopingcall [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.938 2 DEBUG nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:58:28 compute-1 nova_compute[230518]: 2025-10-02 12:58:28.938 2 DEBUG nova.network.neutron [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:58:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:29.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:29 compute-1 nova_compute[230518]: 2025-10-02 12:58:29.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:58:29 compute-1 nova_compute[230518]: 2025-10-02 12:58:29.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:29 compute-1 nova_compute[230518]: 2025-10-02 12:58:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:29 compute-1 ceph-mon[80926]: pgmap v2562: 305 pgs: 305 active+clean; 380 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 265 op/s
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.541 2 DEBUG nova.network.neutron [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.555 2 INFO nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Took 1.62 seconds to deallocate network for instance.
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.608 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.608 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.659 2 DEBUG nova.compute.manager [req-452f91c8-4647-4cec-ac74-7f38f07360ee req-f1ef837c-5c11-4322-861e-3f3fc6454442 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Received event network-vif-deleted-1ba1c69e-5414-4423-aa38-aea149f6f506 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:30 compute-1 nova_compute[230518]: 2025-10-02 12:58:30.672 2 DEBUG oslo_concurrency.processutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/697967858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:30 compute-1 ceph-mon[80926]: pgmap v2563: 305 pgs: 305 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Oct 02 12:58:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1978523503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1911091351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.115 2 DEBUG oslo_concurrency.processutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.121 2 DEBUG nova.compute.provider_tree [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.138 2 DEBUG nova.scheduler.client.report [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.159 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.202 2 INFO nova.scheduler.client.report [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Deleted allocations for instance c793e384-4ddd-4531-b6fd-172ee2fcbd4d
Oct 02 12:58:31 compute-1 nova_compute[230518]: 2025-10-02 12:58:31.299 2 DEBUG oslo_concurrency.lockutils [None req-b06290b6-bbfd-437c-92c0-5e266251a460 dfe96a8fa48c4243b6262a0359f5b208 d8f55f9d9ed144629bd9a03edb020c4f - - default default] Lock "c793e384-4ddd-4531-b6fd-172ee2fcbd4d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1911091351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:32.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:33.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:33 compute-1 ceph-mon[80926]: pgmap v2564: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Oct 02 12:58:34 compute-1 nova_compute[230518]: 2025-10-02 12:58:34.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:34 compute-1 nova_compute[230518]: 2025-10-02 12:58:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:34 compute-1 podman[292904]: 2025-10-02 12:58:34.814079898 +0000 UTC m=+0.064366741 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 12:58:34 compute-1 podman[292903]: 2025-10-02 12:58:34.87269784 +0000 UTC m=+0.123038845 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 12:58:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:34.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:35 compute-1 ceph-mon[80926]: pgmap v2565: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 02 12:58:35 compute-1 nova_compute[230518]: 2025-10-02 12:58:35.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.691 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.692 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.713 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.809 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.810 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.821 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.822 2 INFO nova.compute.claims [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Claim successful on node compute-1.ctlplane.example.com
Oct 02 12:58:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:36.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:36 compute-1 nova_compute[230518]: 2025-10-02 12:58:36.972 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:58:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4207214228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.427 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.434 2 DEBUG nova.compute.provider_tree [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.449 2 DEBUG nova.scheduler.client.report [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.470 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.471 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.522 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.522 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.554 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.573 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 12:58:37 compute-1 ceph-mon[80926]: pgmap v2566: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 1.8 MiB/s wr, 130 op/s
Oct 02 12:58:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4207214228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.673 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.674 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.675 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating image(s)
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.697 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.723 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.752 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.757 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.822 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.824 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.851 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.856 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:37 compute-1 nova_compute[230518]: 2025-10-02 12:58:37.907 2 DEBUG nova.policy [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47f465d8c8ac44c982f2a2e60ae9eb40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.634 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.778s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.727 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] resizing rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.857 2 DEBUG nova.objects.instance [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'migration_context' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.877 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.877 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Ensure instance console log exists: /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:38 compute-1 nova_compute[230518]: 2025-10-02 12:58:38.878 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:38.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.047 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Successfully created port: bddb6509-7221-4ef0-bde7-be95b89ab6d8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 12:58:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:39.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409904.436356, c793e384-4ddd-4531-b6fd-172ee2fcbd4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.437 2 INFO nova.compute.manager [-] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] VM Stopped (Lifecycle Event)
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.453 2 DEBUG nova.compute.manager [None req-7f15eccb-2e3f-47f0-958f-6ff2a325ab48 - - - - - -] [instance: c793e384-4ddd-4531-b6fd-172ee2fcbd4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:39 compute-1 nova_compute[230518]: 2025-10-02 12:58:39.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:39 compute-1 ceph-mon[80926]: pgmap v2567: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.072 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Successfully updated port: bddb6509-7221-4ef0-bde7-be95b89ab6d8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.084 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.152 2 DEBUG nova.compute.manager [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.153 2 DEBUG nova.compute.manager [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.153 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:40 compute-1 nova_compute[230518]: 2025-10-02 12:58:40.238 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 12:58:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:41 compute-1 ceph-mon[80926]: pgmap v2568: 305 pgs: 305 active+clean; 349 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.234 2 DEBUG nova.network.neutron [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.253 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.254 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance network_info: |[{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.254 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.255 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.260 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start _get_guest_xml network_info=[{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.265 2 WARNING nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.272 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.273 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.282 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.282 2 DEBUG nova.virt.libvirt.host [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.284 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.285 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.286 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.286 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.287 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.287 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.288 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.288 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.289 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.290 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.290 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.291 2 DEBUG nova.virt.hardware [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.296 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2366804538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.763 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.791 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:42 compute-1 nova_compute[230518]: 2025-10-02 12:58:42.795 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:58:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1199214120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.246 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.248 2 DEBUG nova.virt.libvirt.vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:37Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.249 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.249 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.250 2 DEBUG nova.objects.instance [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.263 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] End _get_guest_xml xml=<domain type="kvm">
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <uuid>c8cc2f8f-7f89-4304-b071-1849f76cfda8</uuid>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <name>instance-000000a2</name>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <metadata>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-44339413</nova:name>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 12:58:42</nova:creationTime>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:user uuid="47f465d8c8ac44c982f2a2e60ae9eb40">tempest-TestNetworkAdvancedServerOps-1770117619-project-member</nova:user>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:project uuid="072925a6aec84a77a9c09ae0c83efdb3">tempest-TestNetworkAdvancedServerOps-1770117619</nova:project>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <nova:port uuid="bddb6509-7221-4ef0-bde7-be95b89ab6d8">
Oct 02 12:58:43 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </metadata>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <system>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="serial">c8cc2f8f-7f89-4304-b071-1849f76cfda8</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="uuid">c8cc2f8f-7f89-4304-b071-1849f76cfda8</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </system>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <os>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </os>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <features>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <apic/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </features>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </clock>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </cpu>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   <devices>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk">
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config">
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </source>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 12:58:43 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       </auth>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </disk>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:49:02:8c"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <target dev="tapbddb6509-72"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </interface>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/console.log" append="off"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </serial>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <video>
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </video>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </rng>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 12:58:43 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 12:58:43 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 12:58:43 compute-1 nova_compute[230518]:   </devices>
Oct 02 12:58:43 compute-1 nova_compute[230518]: </domain>
Oct 02 12:58:43 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.264 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Preparing to wait for external event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.265 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.266 2 DEBUG nova.virt.libvirt.vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T12:58:37Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.266 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.267 2 DEBUG nova.network.os_vif_util [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.267 2 DEBUG os_vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbddb6509-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbddb6509-72, col_values=(('external_ids', {'iface-id': 'bddb6509-7221-4ef0-bde7-be95b89ab6d8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:02:8c', 'vm-uuid': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:43 compute-1 NetworkManager[44960]: <info>  [1759409923.2745] manager: (tapbddb6509-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.281 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.282 2 INFO os_vif [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72')
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.342 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.343 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.343 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] No VIF found with MAC fa:16:3e:49:02:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.344 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Using config drive
Oct 02 12:58:43 compute-1 nova_compute[230518]: 2025-10-02 12:58:43.379 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:43 compute-1 ceph-mon[80926]: pgmap v2569: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct 02 12:58:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2366804538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1199214120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.508 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Creating config drive at /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.513 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9qsanb8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.664 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq9qsanb8" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.694 2 DEBUG nova.storage.rbd_utils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] rbd image c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 12:58:44 compute-1 nova_compute[230518]: 2025-10-02 12:58:44.698 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:58:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:44.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.047 2 DEBUG oslo_concurrency.processutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config c8cc2f8f-7f89-4304-b071-1849f76cfda8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.049 2 INFO nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deleting local config drive /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8/disk.config because it was imported into RBD.
Oct 02 12:58:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:45 compute-1 kernel: tapbddb6509-72: entered promiscuous mode
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.1323] manager: (tapbddb6509-72): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Oct 02 12:58:45 compute-1 ovn_controller[129257]: 2025-10-02T12:58:45Z|00669|binding|INFO|Claiming lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 for this chassis.
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_controller[129257]: 2025-10-02T12:58:45Z|00670|binding|INFO|bddb6509-7221-4ef0-bde7-be95b89ab6d8: Claiming fa:16:3e:49:02:8c 10.100.0.11
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.153 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.154 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b bound to our chassis
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.155 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:58:45 compute-1 systemd-machined[188247]: New machine qemu-78-instance-000000a2.
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.173 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e848de92-f665-4678-89ac-6a7462a2ccc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.175 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b820c79-71 in ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.178 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b820c79-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.178 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9532a9-6405-4a53-b184-bfa94e1bbb91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.179 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6905492-f281-47b2-b554-a232d512874a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 systemd[1]: Started Virtual Machine qemu-78-instance-000000a2.
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.189 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f3b3f8-095c-4c27-9d64-ce8a7091b9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 systemd-udevd[293294]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.2064] device (tapbddb6509-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.2078] device (tapbddb6509-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.227 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5db61-041f-4d30-a858-e47a1f8063b8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_controller[129257]: 2025-10-02T12:58:45Z|00671|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 ovn-installed in OVS
Oct 02 12:58:45 compute-1 ovn_controller[129257]: 2025-10-02T12:58:45Z|00672|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 up in Southbound
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.306 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1908c7b4-1a92-4b41-892c-47e5da70abf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 podman[293273]: 2025-10-02 12:58:45.312108222 +0000 UTC m=+0.150045333 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.3209] manager: (tap2b820c79-70): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89af4dda-3ebc-4c3b-bc16-83be397eed5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 podman[293272]: 2025-10-02 12:58:45.349415772 +0000 UTC m=+0.186290040 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.363 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cad3fcc9-3a4d-4f2e-b04e-dfc4905a38e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.367 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4e0d88-da2c-4c34-a558-29bd21dc479c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.3913] device (tap2b820c79-70): carrier: link connected
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.395 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[dded3948-42de-4049-b84f-80d8dc7e6199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.412 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e14bda-a28b-431f-90ee-9696ad26594d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778894, 'reachable_time': 24019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293348, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.427 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ce667-3425-41b6-b884-2771ba024686]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:6920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778894, 'tstamp': 778894}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293349, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.441 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10d59093-dbf8-465f-99c9-72a2c1ea077f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778894, 'reachable_time': 24019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293350, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.465 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[351cdb5f-9946-4b63-af6a-3e7f36ea9e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.486 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.486 2 DEBUG nova.network.neutron [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.505 2 DEBUG oslo_concurrency.lockutils [req-d4ef4afa-b180-4622-a453-a307919055d4 req-61ed140e-052e-4569-a6ac-fb89b6d99ac9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.510 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9e03dc3c-093b-4ae4-b0c9-7dc2cc697ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.511 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.511 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.512 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b820c79-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:45 compute-1 kernel: tap2b820c79-70: entered promiscuous mode
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 NetworkManager[44960]: <info>  [1759409925.5153] manager: (tap2b820c79-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.516 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b820c79-70, col_values=(('external_ids', {'iface-id': 'c363aa8d-5657-4504-a1d6-6861ffb1c6b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_controller[129257]: 2025-10-02T12:58:45Z|00673|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.536 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.537 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[07a6d72a-4764-444b-a6dd-7cc65c69ef1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.537 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:58:45 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:45.538 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'env', 'PROCESS_TAG=haproxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b820c79-77a7-4936-8c6e-9c38d383ad1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.657 2 DEBUG nova.compute.manager [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.658 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG oslo_concurrency.lockutils [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:45 compute-1 nova_compute[230518]: 2025-10-02 12:58:45.659 2 DEBUG nova.compute.manager [req-c446fcf1-8fa0-4abd-94f6-2faa1fa49fa0 req-f0698a5b-2109-44af-9a07-ed71ed746ebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Processing event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 12:58:45 compute-1 ceph-mon[80926]: pgmap v2570: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 12:58:45 compute-1 podman[293424]: 2025-10-02 12:58:45.90518841 +0000 UTC m=+0.042507492 container create 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 12:58:45 compute-1 systemd[1]: Started libpod-conmon-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope.
Oct 02 12:58:45 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:58:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5537c82eea9d782306a87a5e6ec1b0062bd7bca80ec083c7e105f886bdbc601b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:58:45 compute-1 podman[293424]: 2025-10-02 12:58:45.880807353 +0000 UTC m=+0.018126455 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:58:45 compute-1 podman[293424]: 2025-10-02 12:58:45.993377531 +0000 UTC m=+0.130696613 container init 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 12:58:45 compute-1 podman[293424]: 2025-10-02 12:58:45.998909692 +0000 UTC m=+0.136228774 container start 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 12:58:46 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : New worker (293446) forked
Oct 02 12:58:46 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : Loading success.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.190 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.192 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.1914358, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.192 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Started (Lifecycle Event)
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.196 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.200 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance spawned successfully.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.200 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.228 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.235 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.239 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.240 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.241 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.241 2 DEBUG nova.virt.libvirt.driver [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.268 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.268 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.19153, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.269 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Paused (Lifecycle Event)
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.298 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.302 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409926.1951358, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.303 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Resumed (Lifecycle Event)
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.311 2 INFO nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 8.64 seconds to spawn the instance on the hypervisor.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.311 2 DEBUG nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.322 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.325 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.355 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.386 2 INFO nova.compute.manager [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 9.63 seconds to build instance.
Oct 02 12:58:46 compute-1 nova_compute[230518]: 2025-10-02 12:58:46.406 2 DEBUG oslo_concurrency.lockutils [None req-3f6e09fa-d16f-4d31-8f75-dcfc4619bceb 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:46 compute-1 ceph-mon[80926]: pgmap v2571: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 12:58:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:46.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:47.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.756 2 DEBUG nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.756 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG oslo_concurrency.lockutils [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.757 2 DEBUG nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:58:47 compute-1 nova_compute[230518]: 2025-10-02 12:58:47.757 2 WARNING nova.compute.manager [req-0a0dc34b-97e5-41e8-bc3f-99288e93c5ba req-37332556-3845-420d-b92d-af63f27532e9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state None.
Oct 02 12:58:48 compute-1 nova_compute[230518]: 2025-10-02 12:58:48.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:48.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:49 compute-1 ceph-mon[80926]: pgmap v2572: 305 pgs: 305 active+clean; 396 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 128 op/s
Oct 02 12:58:49 compute-1 nova_compute[230518]: 2025-10-02 12:58:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:50.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:51 compute-1 ceph-mon[80926]: pgmap v2573: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 137 op/s
Oct 02 12:58:51 compute-1 nova_compute[230518]: 2025-10-02 12:58:51.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:51 compute-1 NetworkManager[44960]: <info>  [1759409931.5830] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Oct 02 12:58:51 compute-1 NetworkManager[44960]: <info>  [1759409931.5846] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Oct 02 12:58:51 compute-1 nova_compute[230518]: 2025-10-02 12:58:51.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:51 compute-1 ovn_controller[129257]: 2025-10-02T12:58:51Z|00674|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct 02 12:58:51 compute-1 nova_compute[230518]: 2025-10-02 12:58:51.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:52 compute-1 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG nova.compute.manager [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:58:52 compute-1 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG nova.compute.manager [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:58:52 compute-1 nova_compute[230518]: 2025-10-02 12:58:52.011 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:58:52 compute-1 nova_compute[230518]: 2025-10-02 12:58:52.012 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:58:52 compute-1 nova_compute[230518]: 2025-10-02 12:58:52.012 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:58:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:52.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:53 compute-1 ceph-mon[80926]: pgmap v2574: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 163 op/s
Oct 02 12:58:53 compute-1 nova_compute[230518]: 2025-10-02 12:58:53.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:54 compute-1 nova_compute[230518]: 2025-10-02 12:58:54.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:54 compute-1 nova_compute[230518]: 2025-10-02 12:58:54.692 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:58:54 compute-1 nova_compute[230518]: 2025-10-02 12:58:54.693 2 DEBUG nova.network.neutron [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:58:54 compute-1 nova_compute[230518]: 2025-10-02 12:58:54.739 2 DEBUG oslo_concurrency.lockutils [req-dd065442-74cf-4037-b6bd-5cd3b66192d7 req-408db3a3-e546-4aca-a909-1e28d0ba045e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:58:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:54.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:55.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:55 compute-1 ceph-mon[80926]: pgmap v2575: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 02 12:58:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:56.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:57.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:57 compute-1 ceph-mon[80926]: pgmap v2576: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Oct 02 12:58:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/519288926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:58:58 compute-1 nova_compute[230518]: 2025-10-02 12:58:58.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:58:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:58:58.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:58:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:58:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:58:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:58:59.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:58:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:59.311 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:58:59 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:58:59.312 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 12:58:59 compute-1 nova_compute[230518]: 2025-10-02 12:58:59.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-1 ovn_controller[129257]: 2025-10-02T12:58:59Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:02:8c 10.100.0.11
Oct 02 12:58:59 compute-1 ovn_controller[129257]: 2025-10-02T12:58:59Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:02:8c 10.100.0.11
Oct 02 12:58:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:58:59 compute-1 nova_compute[230518]: 2025-10-02 12:58:59.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:58:59 compute-1 ceph-mon[80926]: pgmap v2577: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Oct 02 12:59:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:00.315 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:01.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:01 compute-1 ceph-mon[80926]: pgmap v2578: 305 pgs: 305 active+clean; 433 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 120 op/s
Oct 02 12:59:02 compute-1 sudo[293456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:02 compute-1 sudo[293456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-1 sudo[293456]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-1 sudo[293481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:02 compute-1 sudo[293481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-1 sudo[293481]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-1 sudo[293506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:02 compute-1 sudo[293506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-1 sudo[293506]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-1 sudo[293531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 12:59:02 compute-1 sudo[293531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:02 compute-1 sudo[293531]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:02 compute-1 ceph-mon[80926]: pgmap v2579: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Oct 02 12:59:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:03 compute-1 sudo[293576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:03 compute-1 sudo[293576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-1 sudo[293576]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-1 nova_compute[230518]: 2025-10-02 12:59:03.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:03 compute-1 sudo[293601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 12:59:03 compute-1 sudo[293601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-1 sudo[293601]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-1 sudo[293626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:03 compute-1 sudo[293626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-1 sudo[293626]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:03 compute-1 sudo[293651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 12:59:03 compute-1 sudo[293651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:03 compute-1 sudo[293651]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 12:59:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 12:59:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:04 compute-1 nova_compute[230518]: 2025-10-02 12:59:04.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:05 compute-1 ceph-mon[80926]: pgmap v2580: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:05 compute-1 podman[293708]: 2025-10-02 12:59:05.807624471 +0000 UTC m=+0.057812877 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 12:59:05 compute-1 podman[293707]: 2025-10-02 12:59:05.873086016 +0000 UTC m=+0.124842111 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.312 2 INFO nova.compute.manager [None req-187533de-d867-4c1d-94a7-99d8ae82b7b5 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Get console output
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.317 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.637 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.638 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.638 2 INFO nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Rebooting instance
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.662 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.663 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:06 compute-1 nova_compute[230518]: 2025-10-02 12:59:06.664 2 DEBUG nova.network.neutron [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 12:59:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 12:59:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4288783238' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 12:59:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:59:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:59:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:07.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:08 compute-1 ceph-mon[80926]: pgmap v2581: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Oct 02 12:59:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/444165708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3862035785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:08 compute-1 nova_compute[230518]: 2025-10-02 12:59:08.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:09.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:09 compute-1 nova_compute[230518]: 2025-10-02 12:59:09.176 2 DEBUG nova.network.neutron [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:09 compute-1 nova_compute[230518]: 2025-10-02 12:59:09.211 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:09 compute-1 nova_compute[230518]: 2025-10-02 12:59:09.212 2 DEBUG nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:09 compute-1 ceph-mon[80926]: pgmap v2582: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Oct 02 12:59:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:09 compute-1 nova_compute[230518]: 2025-10-02 12:59:09.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:10 compute-1 ceph-mon[80926]: pgmap v2583: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Oct 02 12:59:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.115 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3992647069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.572 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.687 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.870 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4103MB free_disk=20.784870147705078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 12:59:12 compute-1 kernel: tapbddb6509-72 (unregistering): left promiscuous mode
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.872 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.872 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:12 compute-1 NetworkManager[44960]: <info>  [1759409952.8768] device (tapbddb6509-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:12 compute-1 ovn_controller[129257]: 2025-10-02T12:59:12Z|00675|binding|INFO|Releasing lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 from this chassis (sb_readonly=0)
Oct 02 12:59:12 compute-1 ovn_controller[129257]: 2025-10-02T12:59:12Z|00676|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 down in Southbound
Oct 02 12:59:12 compute-1 ovn_controller[129257]: 2025-10-02T12:59:12Z|00677|binding|INFO|Removing iface tapbddb6509-72 ovn-installed in OVS
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-1 nova_compute[230518]: 2025-10-02 12:59:12.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:12 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Oct 02 12:59:12 compute-1 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000a2.scope: Consumed 13.810s CPU time.
Oct 02 12:59:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:12 compute-1 systemd-machined[188247]: Machine qemu-78-instance-000000a2 terminated.
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.001 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.003 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b unbound from our chassis
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.005 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.005 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c9643469-45d9-4df8-bcc5-4df6db2a4808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.007 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace which is not needed anymore
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : haproxy version is 2.8.14-c23fe91
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [NOTICE]   (293444) : path to executable is /usr/sbin/haproxy
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : Exiting Master process...
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : Exiting Master process...
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [ALERT]    (293444) : Current worker (293446) exited with code 143 (Terminated)
Oct 02 12:59:13 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[293440]: [WARNING]  (293444) : All workers exited. Exiting... (0)
Oct 02 12:59:13 compute-1 systemd[1]: libpod-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope: Deactivated successfully.
Oct 02 12:59:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:13.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:13 compute-1 podman[293795]: 2025-10-02 12:59:13.124471834 +0000 UTC m=+0.040770998 container died 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.129 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance c8cc2f8f-7f89-4304-b071-1849f76cfda8 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.129 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.130 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29-userdata-shm.mount: Deactivated successfully.
Oct 02 12:59:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-5537c82eea9d782306a87a5e6ec1b0062bd7bca80ec083c7e105f886bdbc601b-merged.mount: Deactivated successfully.
Oct 02 12:59:13 compute-1 podman[293795]: 2025-10-02 12:59:13.169749161 +0000 UTC m=+0.086048325 container cleanup 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 12:59:13 compute-1 systemd[1]: libpod-conmon-371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29.scope: Deactivated successfully.
Oct 02 12:59:13 compute-1 ceph-mon[80926]: pgmap v2584: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 911 KiB/s rd, 1.9 MiB/s wr, 95 op/s
Oct 02 12:59:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3992647069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.235 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:13 compute-1 podman[293835]: 2025-10-02 12:59:13.237560097 +0000 UTC m=+0.045334469 container remove 371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.243 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b95c9f4-82f3-48cc-ac8d-9e9c125293ac]: (4, ('Thu Oct  2 12:59:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29)\n371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29\nThu Oct  2 12:59:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29)\n371146781e21794d3e8e37c5131733a0f8d2af16d947e4bc63eeb28a26417c29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.245 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[97224cb4-f147-45aa-8dfb-bddf83a59047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.246 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:13 compute-1 kernel: tap2b820c79-70: left promiscuous mode
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.267 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[59e6ba71-0143-440a-bc1d-db13e4e0a11a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.301 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[724677f0-7c17-4ce8-a612-35b026dca304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2cf049-e3bf-4e4b-9f31-be7ad7071d6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.318 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eca367-90d6-4044-b0bd-192918799ac0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778882, 'reachable_time': 30508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293852, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 systemd[1]: run-netns-ovnmeta\x2d2b820c79\x2d77a7\x2d4936\x2d8c6e\x2d9c38d383ad1b.mount: Deactivated successfully.
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.324 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.324 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[91857656-a42c-49b3-ac98-0f96bfdd07b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.375 2 INFO nova.virt.libvirt.driver [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance shutdown successfully.
Oct 02 12:59:13 compute-1 kernel: tapbddb6509-72: entered promiscuous mode
Oct 02 12:59:13 compute-1 ovn_controller[129257]: 2025-10-02T12:59:13Z|00678|binding|INFO|Claiming lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 for this chassis.
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.4475] manager: (tapbddb6509-72): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 ovn_controller[129257]: 2025-10-02T12:59:13Z|00679|binding|INFO|bddb6509-7221-4ef0-bde7-be95b89ab6d8: Claiming fa:16:3e:49:02:8c 10.100.0.11
Oct 02 12:59:13 compute-1 systemd-udevd[293774]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 12:59:13 compute-1 ovn_controller[129257]: 2025-10-02T12:59:13Z|00680|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 ovn-installed in OVS
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.4732] device (tapbddb6509-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.4758] device (tapbddb6509-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 12:59:13 compute-1 systemd-machined[188247]: New machine qemu-79-instance-000000a2.
Oct 02 12:59:13 compute-1 systemd[1]: Started Virtual Machine qemu-79-instance-000000a2.
Oct 02 12:59:13 compute-1 ovn_controller[129257]: 2025-10-02T12:59:13Z|00681|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 up in Southbound
Oct 02 12:59:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 12:59:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1424863242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.590 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.592 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b bound to our chassis
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.596 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.609 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba255965-eb54-42c2-a6af-4c3b0f4b2aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.611 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b820c79-71 in ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.614 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b820c79-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.614 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a2356538-7b6a-40ec-9039-981163b18c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.616 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8dfa02-d9cb-4ae9-ae6d-7b0c86c6ed09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.639 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[1effdb09-40fb-4756-b0d9-82ee7323e08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/779617749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.677 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37cb2966-f7d7-4d12-977c-8eaca638a677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.684 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.691 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.709 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3403e3-c3b1-43d6-9fa4-94d024b0290f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.715 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d574dd1d-b7c3-47d0-832e-677afaa79609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.7164] manager: (tap2b820c79-70): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.720 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.762 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4979d423-5c29-4123-a04d-c44c124ea7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.768 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6c19caea-9035-46d6-9123-6e22592215b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.7939] device (tap2b820c79-70): carrier: link connected
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.799 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d8daa9-e187-4b5f-ab03-38ec57257b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.820 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d387bda-e7f7-4d8f-8cdc-79259e914b95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781735, 'reachable_time': 33153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293920, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.837 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[626c8597-92f9-4e87-9a7c-fd68679bfef9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:6920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 781735, 'tstamp': 781735}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293921, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.857 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea659def-99a0-4c12-8446-5a73873a9908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b820c79-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:69:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781735, 'reachable_time': 33153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293922, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.864 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.864 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b0279072-96ef-4e6c-912b-5f945c930fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.956 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba565d7-522f-4d96-8a9f-71895e201435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.958 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.958 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.959 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b820c79-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:13 compute-1 kernel: tap2b820c79-70: entered promiscuous mode
Oct 02 12:59:13 compute-1 NetworkManager[44960]: <info>  [1759409953.9625] manager: (tap2b820c79-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.972 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b820c79-70, col_values=(('external_ids', {'iface-id': 'c363aa8d-5657-4504-a1d6-6861ffb1c6b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:13 compute-1 ovn_controller[129257]: 2025-10-02T12:59:13Z|00682|binding|INFO|Releasing lport c363aa8d-5657-4504-a1d6-6861ffb1c6b4 from this chassis (sb_readonly=0)
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.977 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.978 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0e22b84c-f55b-4e04-a2fd-ccc4677f745b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.979 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: global
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/2b820c79-77a7-4936-8c6e-9c38d383ad1b.pid.haproxy
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 2b820c79-77a7-4936-8c6e-9c38d383ad1b
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 12:59:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:13.981 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'env', 'PROCESS_TAG=haproxy-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b820c79-77a7-4936-8c6e-9c38d383ad1b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 12:59:13 compute-1 nova_compute[230518]: 2025-10-02 12:59:13.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:14 compute-1 sudo[293931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 12:59:14 compute-1 sudo[293931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:14 compute-1 sudo[293931]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:14 compute-1 sudo[293957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 12:59:14 compute-1 sudo[293957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 12:59:14 compute-1 sudo[293957]: pam_unix(sudo:session): session closed for user root
Oct 02 12:59:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1424863242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/779617749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:14 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 12:59:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3718952746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:14 compute-1 podman[294004]: 2025-10-02 12:59:14.390957774 +0000 UTC m=+0.061205802 container create 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:14 compute-1 systemd[1]: Started libpod-conmon-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope.
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.448 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.449 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.449 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.450 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.450 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.451 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:14 compute-1 podman[294004]: 2025-10-02 12:59:14.360207499 +0000 UTC m=+0.030455597 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.452 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:14 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.452 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.453 2 DEBUG oslo_concurrency.lockutils [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.453 2 DEBUG nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:14 compute-1 nova_compute[230518]: 2025-10-02 12:59:14.453 2 WARNING nova.compute.manager [req-679205fc-a850-458f-9ace-691e2ee0446a req-a92b58ab-b652-4e03-a011-4a32a35ef6cd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state reboot_started.
Oct 02 12:59:14 compute-1 systemd[1]: Started libcrun container.
Oct 02 12:59:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2b1c7c3db0c4c0309707ba6b2e6ca3059d313e0d06a5d2858650b6979b1b343/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 12:59:14 compute-1 podman[294004]: 2025-10-02 12:59:14.488210066 +0000 UTC m=+0.158458124 container init 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 02 12:59:14 compute-1 podman[294004]: 2025-10-02 12:59:14.496131203 +0000 UTC m=+0.166379221 container start 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:59:14 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : New worker (294026) forked
Oct 02 12:59:14 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : Loading success.
Oct 02 12:59:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:15.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.143 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for c8cc2f8f-7f89-4304-b071-1849f76cfda8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.144 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409955.1423707, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.144 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Resumed (Lifecycle Event)
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.152 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance running successfully.
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.153 2 INFO nova.virt.libvirt.driver [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance soft rebooted successfully.
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.153 2 DEBUG nova.compute.manager [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:15 compute-1 ceph-mon[80926]: pgmap v2585: 305 pgs: 305 active+clean; 484 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 752 KiB/s rd, 45 KiB/s wr, 37 op/s
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.524 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.528 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.577 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.577 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759409955.1425006, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.578 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Started (Lifecycle Event)
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.613 2 DEBUG oslo_concurrency.lockutils [None req-4058dff6-a721-42c8-9c03-96311751f5e4 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.624 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.629 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 12:59:15 compute-1 podman[294077]: 2025-10-02 12:59:15.809246643 +0000 UTC m=+0.057345734 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 12:59:15 compute-1 podman[294078]: 2025-10-02 12:59:15.81466739 +0000 UTC m=+0.062900244 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:15 compute-1 nova_compute[230518]: 2025-10-02 12:59:15.859 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 12:59:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1857352639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3169731568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/505408071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.612 2 DEBUG nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.613 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.613 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.614 2 DEBUG oslo_concurrency.lockutils [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.614 2 DEBUG nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:16 compute-1 nova_compute[230518]: 2025-10-02 12:59:16.614 2 WARNING nova.compute.manager [req-34e6aed4-f079-4132-9e19-16e9612c6d4f req-5704a8a4-4a17-4eb8-addd-f7ff8d2176dc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state None.
Oct 02 12:59:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:59:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:59:17 compute-1 nova_compute[230518]: 2025-10-02 12:59:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:17.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:17 compute-1 ceph-mon[80926]: pgmap v2586: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 50 KiB/s wr, 81 op/s
Oct 02 12:59:18 compute-1 nova_compute[230518]: 2025-10-02 12:59:18.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/232803464' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:59:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:18.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:59:19 compute-1 nova_compute[230518]: 2025-10-02 12:59:19.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:19 compute-1 nova_compute[230518]: 2025-10-02 12:59:19.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:19.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:19 compute-1 ceph-mon[80926]: pgmap v2587: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 50 KiB/s wr, 154 op/s
Oct 02 12:59:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:19 compute-1 nova_compute[230518]: 2025-10-02 12:59:19.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:20.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:21 compute-1 ceph-mon[80926]: pgmap v2588: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 54 KiB/s wr, 181 op/s
Oct 02 12:59:22 compute-1 nova_compute[230518]: 2025-10-02 12:59:22.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:23 compute-1 nova_compute[230518]: 2025-10-02 12:59:23.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:23 compute-1 ceph-mon[80926]: pgmap v2589: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 47 KiB/s wr, 175 op/s
Oct 02 12:59:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:24 compute-1 nova_compute[230518]: 2025-10-02 12:59:24.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1884512294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 12:59:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 12:59:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:25.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 12:59:25 compute-1 ceph-mon[80926]: pgmap v2590: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 23 KiB/s wr, 144 op/s
Oct 02 12:59:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2065133054' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.804 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.805 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.805 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 12:59:25 compute-1 nova_compute[230518]: 2025-10-02 12:59:25.806 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.955 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:25.958 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:27 compute-1 ovn_controller[129257]: 2025-10-02T12:59:27Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:02:8c 10.100.0.11
Oct 02 12:59:27 compute-1 ceph-mon[80926]: pgmap v2591: 305 pgs: 305 active+clean; 416 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.0 MiB/s wr, 172 op/s
Oct 02 12:59:28 compute-1 nova_compute[230518]: 2025-10-02 12:59:28.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:28 compute-1 nova_compute[230518]: 2025-10-02 12:59:28.891 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:29.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:29 compute-1 ceph-mon[80926]: pgmap v2592: 305 pgs: 305 active+clean; 436 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 189 op/s
Oct 02 12:59:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:29 compute-1 nova_compute[230518]: 2025-10-02 12:59:29.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:29 compute-1 nova_compute[230518]: 2025-10-02 12:59:29.825 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:29 compute-1 nova_compute[230518]: 2025-10-02 12:59:29.826 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 12:59:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:30.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:31 compute-1 ceph-mon[80926]: pgmap v2593: 305 pgs: 305 active+clean; 393 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 192 op/s
Oct 02 12:59:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/857272195' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3163633002' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:33 compute-1 nova_compute[230518]: 2025-10-02 12:59:33.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:33 compute-1 nova_compute[230518]: 2025-10-02 12:59:33.494 2 INFO nova.compute.manager [None req-323ade18-e55c-483d-bcf3-66c5226433e2 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Get console output
Oct 02 12:59:33 compute-1 nova_compute[230518]: 2025-10-02 12:59:33.502 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 12:59:33 compute-1 ceph-mon[80926]: pgmap v2594: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 232 op/s
Oct 02 12:59:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2000465293' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 12:59:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3591579695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:34 compute-1 nova_compute[230518]: 2025-10-02 12:59:34.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:35 compute-1 ceph-mon[80926]: pgmap v2595: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.4 MiB/s wr, 231 op/s
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.231 2 DEBUG nova.compute.manager [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.232 2 DEBUG nova.compute.manager [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing instance network info cache due to event network-changed-bddb6509-7221-4ef0-bde7-be95b89ab6d8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.232 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.233 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.233 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Refreshing network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.459 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.460 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.461 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.462 2 INFO nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Terminating instance
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.464 2 DEBUG nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 12:59:35 compute-1 kernel: tapbddb6509-72 (unregistering): left promiscuous mode
Oct 02 12:59:35 compute-1 NetworkManager[44960]: <info>  [1759409975.6539] device (tapbddb6509-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 ovn_controller[129257]: 2025-10-02T12:59:35Z|00683|binding|INFO|Releasing lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 from this chassis (sb_readonly=0)
Oct 02 12:59:35 compute-1 ovn_controller[129257]: 2025-10-02T12:59:35Z|00684|binding|INFO|Setting lport bddb6509-7221-4ef0-bde7-be95b89ab6d8 down in Southbound
Oct 02 12:59:35 compute-1 ovn_controller[129257]: 2025-10-02T12:59:35Z|00685|binding|INFO|Removing iface tapbddb6509-72 ovn-installed in OVS
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.676 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:02:8c 10.100.0.11'], port_security=['fa:16:3e:49:02:8c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'c8cc2f8f-7f89-4304-b071-1849f76cfda8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '072925a6aec84a77a9c09ae0c83efdb3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'baa35b76-b47d-4782-b3a5-4738baaa63f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=306c2335-d591-443e-b0c0-e2b4293c6e94, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bddb6509-7221-4ef0-bde7-be95b89ab6d8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 12:59:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.678 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bddb6509-7221-4ef0-bde7-be95b89ab6d8 in datapath 2b820c79-77a7-4936-8c6e-9c38d383ad1b unbound from our chassis
Oct 02 12:59:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.682 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b820c79-77a7-4936-8c6e-9c38d383ad1b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 12:59:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.683 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[88e440d0-29b2-4578-b8fe-40864e1b85d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:35.684 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b namespace which is not needed anymore
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a2.scope: Deactivated successfully.
Oct 02 12:59:35 compute-1 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000a2.scope: Consumed 13.552s CPU time.
Oct 02 12:59:35 compute-1 systemd-machined[188247]: Machine qemu-79-instance-000000a2 terminated.
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.908 2 INFO nova.virt.libvirt.driver [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Instance destroyed successfully.
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.909 2 DEBUG nova.objects.instance [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lazy-loading 'resources' on Instance uuid c8cc2f8f-7f89-4304-b071-1849f76cfda8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.931 2 DEBUG nova.virt.libvirt.vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T12:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-44339413',display_name='tempest-TestNetworkAdvancedServerOps-server-44339413',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-44339413',id=162,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEsh1zp1STzuMIXDnbbRAXZcbmmzIocYDU4MIRfUpLuSUtHJodm49lJQYIod0ZNL2zezyn78o0X/6+GzIk9NqxEaJ1JvcNDOKeRMzQvHVSgS3twK5fXwCqcCv0gGhQyYWw==',key_name='tempest-TestNetworkAdvancedServerOps-862557079',keypairs=<?>,launch_index=0,launched_at=2025-10-02T12:58:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='072925a6aec84a77a9c09ae0c83efdb3',ramdisk_id='',reservation_id='r-6p95bfrs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1770117619',owner_user_name='tempest-TestNetworkAdvancedServerOps-1770117619-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T12:59:15Z,user_data=None,user_id='47f465d8c8ac44c982f2a2e60ae9eb40',uuid=c8cc2f8f-7f89-4304-b071-1849f76cfda8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.931 2 DEBUG nova.network.os_vif_util [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converting VIF {"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.932 2 DEBUG nova.network.os_vif_util [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.932 2 DEBUG os_vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbddb6509-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:35 compute-1 nova_compute[230518]: 2025-10-02 12:59:35.942 2 INFO os_vif [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:02:8c,bridge_name='br-int',has_traffic_filtering=True,id=bddb6509-7221-4ef0-bde7-be95b89ab6d8,network=Network(2b820c79-77a7-4936-8c6e-9c38d383ad1b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbddb6509-72')
Oct 02 12:59:36 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : haproxy version is 2.8.14-c23fe91
Oct 02 12:59:36 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [NOTICE]   (294024) : path to executable is /usr/sbin/haproxy
Oct 02 12:59:36 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [WARNING]  (294024) : Exiting Master process...
Oct 02 12:59:36 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [ALERT]    (294024) : Current worker (294026) exited with code 143 (Terminated)
Oct 02 12:59:36 compute-1 neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b[294019]: [WARNING]  (294024) : All workers exited. Exiting... (0)
Oct 02 12:59:36 compute-1 systemd[1]: libpod-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope: Deactivated successfully.
Oct 02 12:59:36 compute-1 podman[294140]: 2025-10-02 12:59:36.217322666 +0000 UTC m=+0.412179338 container died 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 12:59:36 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0-userdata-shm.mount: Deactivated successfully.
Oct 02 12:59:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-e2b1c7c3db0c4c0309707ba6b2e6ca3059d313e0d06a5d2858650b6979b1b343-merged.mount: Deactivated successfully.
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.853 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-unplugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.854 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG oslo_concurrency.lockutils [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.855 2 DEBUG nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] No waiting events found dispatching network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 12:59:36 compute-1 nova_compute[230518]: 2025-10-02 12:59:36.855 2 WARNING nova.compute.manager [req-887d0ecf-a22f-4a54-905a-5a2377b5ea89 req-90bd905c-5f26-4d95-833c-3d221e936241 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received unexpected event network-vif-plugged-bddb6509-7221-4ef0-bde7-be95b89ab6d8 for instance with vm_state active and task_state deleting.
Oct 02 12:59:36 compute-1 podman[294171]: 2025-10-02 12:59:36.870898874 +0000 UTC m=+0.891271685 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 12:59:36 compute-1 podman[294163]: 2025-10-02 12:59:36.903122084 +0000 UTC m=+0.930751960 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 12:59:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:37 compute-1 nova_compute[230518]: 2025-10-02 12:59:37.047 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updated VIF entry in instance network info cache for port bddb6509-7221-4ef0-bde7-be95b89ab6d8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 12:59:37 compute-1 nova_compute[230518]: 2025-10-02 12:59:37.048 2 DEBUG nova.network.neutron [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [{"id": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "address": "fa:16:3e:49:02:8c", "network": {"id": "2b820c79-77a7-4936-8c6e-9c38d383ad1b", "bridge": "br-int", "label": "tempest-network-smoke--453535835", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "072925a6aec84a77a9c09ae0c83efdb3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbddb6509-72", "ovs_interfaceid": "bddb6509-7221-4ef0-bde7-be95b89ab6d8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:37 compute-1 nova_compute[230518]: 2025-10-02 12:59:37.091 2 DEBUG oslo_concurrency.lockutils [req-7a5a49b3-f04c-4cc2-899a-e69ca414bbfa req-60807b65-865a-44ec-a62f-1b0102f3e2c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-c8cc2f8f-7f89-4304-b071-1849f76cfda8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 12:59:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:37 compute-1 podman[294140]: 2025-10-02 12:59:37.468980358 +0000 UTC m=+1.663837010 container cleanup 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 12:59:37 compute-1 systemd[1]: libpod-conmon-6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0.scope: Deactivated successfully.
Oct 02 12:59:37 compute-1 ceph-mon[80926]: pgmap v2596: 305 pgs: 305 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 235 op/s
Oct 02 12:59:37 compute-1 podman[294243]: 2025-10-02 12:59:37.914928813 +0000 UTC m=+0.426880614 container remove 6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.921 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9a502a36-79a0-4f1a-a952-05d65f6f6548]: (4, ('Thu Oct  2 12:59:35 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0)\n6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0\nThu Oct  2 12:59:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b (6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0)\n6bacd5eafb283f13f1f8ec71fbe19bc2b14636c8313f8d1061912a908d832fb0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.922 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f43f0f9-4cca-461d-bca9-496bda1f9034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.923 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b820c79-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 12:59:37 compute-1 kernel: tap2b820c79-70: left promiscuous mode
Oct 02 12:59:37 compute-1 nova_compute[230518]: 2025-10-02 12:59:37.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:37 compute-1 nova_compute[230518]: 2025-10-02 12:59:37.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.942 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d32dca6-ce35-4dea-9d03-4d384000b512]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.973 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72a73ca8-dd36-44dd-81d4-dd0ab514a54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.974 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fa63e5df-e3c0-4c25-bcba-655407db4529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.987 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[746e33e8-d232-4b98-9c63-c95c4ac74c33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781726, 'reachable_time': 35140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294261, 'error': None, 'target': 'ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.990 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b820c79-77a7-4936-8c6e-9c38d383ad1b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 12:59:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 12:59:37.990 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[980ba492-04a9-48f4-be33-58039ce453bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 12:59:37 compute-1 systemd[1]: run-netns-ovnmeta\x2d2b820c79\x2d77a7\x2d4936\x2d8c6e\x2d9c38d383ad1b.mount: Deactivated successfully.
Oct 02 12:59:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:38.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:39 compute-1 nova_compute[230518]: 2025-10-02 12:59:39.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:39 compute-1 ceph-mon[80926]: pgmap v2597: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 228 op/s
Oct 02 12:59:39 compute-1 nova_compute[230518]: 2025-10-02 12:59:39.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:39 compute-1 nova_compute[230518]: 2025-10-02 12:59:39.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.106 2 INFO nova.virt.libvirt.driver [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deleting instance files /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8_del
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.107 2 INFO nova.virt.libvirt.driver [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deletion of /var/lib/nova/instances/c8cc2f8f-7f89-4304-b071-1849f76cfda8_del complete
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.206 2 INFO nova.compute.manager [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 4.74 seconds to destroy the instance on the hypervisor.
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG oslo.service.loopingcall [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.206 2 DEBUG nova.network.neutron [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 12:59:40 compute-1 nova_compute[230518]: 2025-10-02 12:59:40.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:41.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:41 compute-1 ceph-mon[80926]: pgmap v2598: 305 pgs: 305 active+clean; 370 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.329 2 DEBUG nova.network.neutron [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.369 2 INFO nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Took 2.16 seconds to deallocate network for instance.
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.435 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.435 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.488 2 DEBUG nova.compute.manager [req-fadd172d-00c5-4efd-a04c-34a02ce2acce req-31e59711-0789-49d9-a930-86ec4d4d5f1a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Received event network-vif-deleted-bddb6509-7221-4ef0-bde7-be95b89ab6d8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 12:59:42 compute-1 nova_compute[230518]: 2025-10-02 12:59:42.500 2 DEBUG oslo_concurrency.processutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 12:59:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 12:59:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3512397272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.032 2 DEBUG oslo_concurrency.processutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.041 2 DEBUG nova.compute.provider_tree [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.067 2 DEBUG nova.scheduler.client.report [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.129 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:43 compute-1 ceph-mon[80926]: pgmap v2599: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct 02 12:59:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.185 2 INFO nova.scheduler.client.report [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Deleted allocations for instance c8cc2f8f-7f89-4304-b071-1849f76cfda8
Oct 02 12:59:43 compute-1 nova_compute[230518]: 2025-10-02 12:59:43.366 2 DEBUG oslo_concurrency.lockutils [None req-410c3841-0b58-4953-b98b-e47472755729 47f465d8c8ac44c982f2a2e60ae9eb40 072925a6aec84a77a9c09ae0c83efdb3 - - default default] Lock "c8cc2f8f-7f89-4304-b071-1849f76cfda8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 12:59:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3512397272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 12:59:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:44 compute-1 nova_compute[230518]: 2025-10-02 12:59:44.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:45.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:45 compute-1 ceph-mon[80926]: pgmap v2600: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 552 KiB/s wr, 152 op/s
Oct 02 12:59:45 compute-1 nova_compute[230518]: 2025-10-02 12:59:45.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:46 compute-1 podman[294286]: 2025-10-02 12:59:46.836832066 +0000 UTC m=+0.086497279 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct 02 12:59:46 compute-1 podman[294287]: 2025-10-02 12:59:46.843592856 +0000 UTC m=+0.088071357 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 12:59:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:47 compute-1 ceph-mon[80926]: pgmap v2601: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 561 KiB/s wr, 152 op/s
Oct 02 12:59:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:49.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:49 compute-1 nova_compute[230518]: 2025-10-02 12:59:49.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:49 compute-1 ceph-mon[80926]: pgmap v2602: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 148 op/s
Oct 02 12:59:50 compute-1 nova_compute[230518]: 2025-10-02 12:59:50.907 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759409975.9059489, c8cc2f8f-7f89-4304-b071-1849f76cfda8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 12:59:50 compute-1 nova_compute[230518]: 2025-10-02 12:59:50.907 2 INFO nova.compute.manager [-] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] VM Stopped (Lifecycle Event)
Oct 02 12:59:50 compute-1 nova_compute[230518]: 2025-10-02 12:59:50.931 2 DEBUG nova.compute.manager [None req-734be1ab-a321-43c2-8556-1c34453f4bf0 - - - - - -] [instance: c8cc2f8f-7f89-4304-b071-1849f76cfda8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 12:59:50 compute-1 nova_compute[230518]: 2025-10-02 12:59:50.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:51.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:51 compute-1 ceph-mon[80926]: pgmap v2603: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 34 KiB/s wr, 172 op/s
Oct 02 12:59:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:53.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:53 compute-1 ceph-mon[80926]: pgmap v2604: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 32 KiB/s wr, 150 op/s
Oct 02 12:59:54 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Oct 02 12:59:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:54 compute-1 nova_compute[230518]: 2025-10-02 12:59:54.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:55.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 12:59:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:55.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 12:59:55 compute-1 nova_compute[230518]: 2025-10-02 12:59:55.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:56 compute-1 ceph-mon[80926]: pgmap v2605: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 22 KiB/s wr, 64 op/s
Oct 02 12:59:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:57.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:57.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:57 compute-1 ceph-mon[80926]: pgmap v2606: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 24 KiB/s wr, 64 op/s
Oct 02 12:59:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:12:59:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 12:59:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 12:59:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:12:59:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 12:59:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 12:59:59 compute-1 nova_compute[230518]: 2025-10-02 12:59:59.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 12:59:59 compute-1 ceph-mon[80926]: pgmap v2607: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 25 KiB/s wr, 64 op/s
Oct 02 13:00:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 13:00:00 compute-1 nova_compute[230518]: 2025-10-02 13:00:00.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:01.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:01.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:01 compute-1 nova_compute[230518]: 2025-10-02 13:00:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:01.109 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:00:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:01.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:01 compute-1 ceph-mon[80926]: pgmap v2608: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 611 KiB/s rd, 28 KiB/s wr, 64 op/s
Oct 02 13:00:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:00:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:03.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:00:03 compute-1 ceph-mon[80926]: pgmap v2609: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 209 KiB/s rd, 24 KiB/s wr, 22 op/s
Oct 02 13:00:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:04.112 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:04 compute-1 nova_compute[230518]: 2025-10-02 13:00:04.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:05 compute-1 ceph-mon[80926]: pgmap v2610: 305 pgs: 305 active+clean; 329 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 22 KiB/s wr, 2 op/s
Oct 02 13:00:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:05.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:05 compute-1 nova_compute[230518]: 2025-10-02 13:00:05.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:00:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4081704541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.316 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.317 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.347 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.425 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.426 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.433 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.433 2 INFO nova.compute.claims [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:00:06 compute-1 nova_compute[230518]: 2025-10-02 13:00:06.619 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:00:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:07.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:00:07 compute-1 ceph-mon[80926]: pgmap v2611: 305 pgs: 305 active+clean; 300 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 25 KiB/s wr, 21 op/s
Oct 02 13:00:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1931536052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1204448448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.154 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.162 2 DEBUG nova.compute.provider_tree [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.176 2 DEBUG nova.scheduler.client.report [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.201 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.202 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:00:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.249 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.249 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.267 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.284 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.370 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.371 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.372 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating image(s)
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.402 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.439 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.471 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.476 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.549 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.551 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.552 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.575 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.578 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:07 compute-1 podman[294444]: 2025-10-02 13:00:07.824334854 +0000 UTC m=+0.066880459 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct 02 13:00:07 compute-1 podman[294443]: 2025-10-02 13:00:07.850646282 +0000 UTC m=+0.096251952 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 13:00:07 compute-1 nova_compute[230518]: 2025-10-02 13:00:07.998 2 DEBUG nova.policy [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.110 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1204448448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.221 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.490 2 DEBUG nova.objects.instance [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.508 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.508 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Ensure instance console log exists: /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:08 compute-1 nova_compute[230518]: 2025-10-02 13:00:08.509 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:09 compute-1 ceph-mon[80926]: pgmap v2612: 305 pgs: 305 active+clean; 224 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 26 KiB/s wr, 57 op/s
Oct 02 13:00:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:09 compute-1 nova_compute[230518]: 2025-10-02 13:00:09.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.031 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Successfully updated port: 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.046 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.162 2 DEBUG nova.compute.manager [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.163 2 DEBUG nova.compute.manager [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing instance network info cache due to event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.163 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1424016671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.582 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:00:10 compute-1 nova_compute[230518]: 2025-10-02 13:00:10.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:11 compute-1 ceph-mon[80926]: pgmap v2613: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 681 KiB/s wr, 69 op/s
Oct 02 13:00:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/406634081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.634 2 DEBUG nova.network.neutron [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.670 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance network_info: |[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.671 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.674 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start _get_guest_xml network_info=[{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.679 2 WARNING nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.683 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.684 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.688 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.688 2 DEBUG nova.virt.libvirt.host [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.690 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.691 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.691 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.692 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.692 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.693 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.693 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.694 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.694 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.695 2 DEBUG nova.virt.hardware [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:11 compute-1 nova_compute[230518]: 2025-10-02 13:00:11.700 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.084 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3162448280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.130 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.166 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.170 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3162448280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3149400941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/376091757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604262924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.579 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.582 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.583 2 DEBUG nova.virt.libvirt.vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.584 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.584 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.585 2 DEBUG nova.objects.instance [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.647 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <uuid>11d532af-2778-4065-8cf4-f2f53d3dbb1c</uuid>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <name>instance-000000a4</name>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-109200331</nova:name>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:00:11</nova:creationTime>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <nova:port uuid="6b57c5d9-dd16-427a-85c2-c02dedb41e29">
Oct 02 13:00:12 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <system>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="serial">11d532af-2778-4065-8cf4-f2f53d3dbb1c</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="uuid">11d532af-2778-4065-8cf4-f2f53d3dbb1c</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </system>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <os>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </os>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <features>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </features>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk">
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config">
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:12 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:6b:44:86"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <target dev="tap6b57c5d9-dd"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/console.log" append="off"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <video>
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </video>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:00:12 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:00:12 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:00:12 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:00:12 compute-1 nova_compute[230518]: </domain>
Oct 02 13:00:12 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.648 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Preparing to wait for external event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.649 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.649 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.650 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.651 2 DEBUG nova.virt.libvirt.vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.652 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.653 2 DEBUG nova.network.os_vif_util [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.654 2 DEBUG os_vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.663 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b57c5d9-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.664 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b57c5d9-dd, col_values=(('external_ids', {'iface-id': '6b57c5d9-dd16-427a-85c2-c02dedb41e29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:44:86', 'vm-uuid': '11d532af-2778-4065-8cf4-f2f53d3dbb1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:12 compute-1 NetworkManager[44960]: <info>  [1759410012.6674] manager: (tap6b57c5d9-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.673 2 INFO os_vif [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.730 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.730 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.731 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:6b:44:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.732 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Using config drive
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.766 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.857 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.858 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4291MB free_disk=20.921802520751953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.858 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.859 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 11d532af-2778-4065-8cf4-f2f53d3dbb1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.942 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.942 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:00:12 compute-1 nova_compute[230518]: 2025-10-02 13:00:12.989 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:13.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2870876322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.439 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.448 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.487 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.529 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.530 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.535 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updated VIF entry in instance network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.536 2 DEBUG nova.network.neutron [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.569 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Creating config drive at /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.578 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauotot11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.628 2 DEBUG oslo_concurrency.lockutils [req-939663b8-6f70-43ce-bb73-99778feed713 req-ff620870-9fee-448f-a35c-3786a5bf5a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:13 compute-1 ceph-mon[80926]: pgmap v2614: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct 02 13:00:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/376091757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/604262924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2870876322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.741 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpauotot11" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.790 2 DEBUG nova.storage.rbd_utils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:13 compute-1 nova_compute[230518]: 2025-10-02 13:00:13.797 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.009 2 DEBUG oslo_concurrency.processutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config 11d532af-2778-4065-8cf4-f2f53d3dbb1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.010 2 INFO nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deleting local config drive /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c/disk.config because it was imported into RBD.
Oct 02 13:00:14 compute-1 kernel: tap6b57c5d9-dd: entered promiscuous mode
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.0799] manager: (tap6b57c5d9-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Oct 02 13:00:14 compute-1 ovn_controller[129257]: 2025-10-02T13:00:14Z|00686|binding|INFO|Claiming lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 for this chassis.
Oct 02 13:00:14 compute-1 ovn_controller[129257]: 2025-10-02T13:00:14Z|00687|binding|INFO|6b57c5d9-dd16-427a-85c2-c02dedb41e29: Claiming fa:16:3e:6b:44:86 10.100.0.10
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.093 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '11d532af-2778-4065-8cf4-f2f53d3dbb1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.095 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 bound to our chassis
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.097 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.110 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c347a301-81b5-4e7a-b74b-03f2c0d7ff55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.111 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2dacd3c2-a1 in ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.113 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2dacd3c2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.113 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2d7149-b253-4148-9904-c6d6a0aff839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.114 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f7742-d1d2-41a9-bdeb-f9e1824c2ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 systemd-udevd[294740]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:14 compute-1 systemd-machined[188247]: New machine qemu-80-instance-000000a4.
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.129 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0b619768-0c5a-4bbd-81ee-f52801d69c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.1339] device (tap6b57c5d9-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.1353] device (tap6b57c5d9-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.155 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9c5192-3040-41f9-9652-6389089ce003]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 systemd[1]: Started Virtual Machine qemu-80-instance-000000a4.
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 ovn_controller[129257]: 2025-10-02T13:00:14Z|00688|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 ovn-installed in OVS
Oct 02 13:00:14 compute-1 ovn_controller[129257]: 2025-10-02T13:00:14Z|00689|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 up in Southbound
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.189 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[b31bd5f2-5b03-4fe2-9c71-899345e6c45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.193 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[671f5cb4-0fa0-47d7-b8a2-36f9fe7d453c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.1946] manager: (tap2dacd3c2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/322)
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.229 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[972df981-21de-4f3f-943a-23428519ac44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.233 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[24ec1675-4e2c-42e8-b2a7-2ce84f33451c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.2561] device (tap2dacd3c2-a0): carrier: link connected
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.263 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[79101ddc-ab4a-4124-a22e-a6bea87a305c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.286 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae213d0f-0a69-4f59-8071-6e391dd8dfa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787781, 'reachable_time': 34970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294773, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.306 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ec790238-84de-4969-8796-42a9a751b525]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:7e5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787781, 'tstamp': 787781}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294774, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[faec8ba3-afd1-41be-ae15-afefda879ad5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2dacd3c2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:7e:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 212], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787781, 'reachable_time': 34970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294775, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56b88836-cb03-41b4-9cb0-63702c3ea5e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 sudo[294778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:14 compute-1 sudo[294778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-1 sudo[294778]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.422 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed406c8-011c-4527-9eab-8d979c295695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.424 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.425 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2dacd3c2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:14 compute-1 kernel: tap2dacd3c2-a0: entered promiscuous mode
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 NetworkManager[44960]: <info>  [1759410014.4275] manager: (tap2dacd3c2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.429 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2dacd3c2-a0, col_values=(('external_ids', {'iface-id': '563b4b62-2487-404e-81e1-f7d5b24fae89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:14 compute-1 ovn_controller[129257]: 2025-10-02T13:00:14Z|00690|binding|INFO|Releasing lport 563b4b62-2487-404e-81e1-f7d5b24fae89 from this chassis (sb_readonly=0)
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.434 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[192e09a6-3ecf-4204-807e-d75e8bf23015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.435 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/2dacd3c2-a76f-4896-a922-fdbbab78ce12.pid.haproxy
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 2dacd3c2-a76f-4896-a922-fdbbab78ce12
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:00:14 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:14.436 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'env', 'PROCESS_TAG=haproxy-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2dacd3c2-a76f-4896-a922-fdbbab78ce12.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 sudo[294807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:00:14 compute-1 sudo[294807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-1 sudo[294807]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:14 compute-1 sudo[294835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:14 compute-1 sudo[294835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-1 sudo[294835]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:14 compute-1 sudo[294860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:00:14 compute-1 sudo[294860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:14 compute-1 podman[294959]: 2025-10-02 13:00:14.807756988 +0000 UTC m=+0.044542176 container create fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct 02 13:00:14 compute-1 systemd[1]: Started libpod-conmon-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope.
Oct 02 13:00:14 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:00:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cd525238fd6b0eee5e5f3a7ce21043fc7136395e02b913b6624af579e8cb88e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:14 compute-1 podman[294959]: 2025-10-02 13:00:14.782891645 +0000 UTC m=+0.019676853 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:00:14 compute-1 podman[294959]: 2025-10-02 13:00:14.891383455 +0000 UTC m=+0.128168673 container init fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.894 2 DEBUG nova.compute.manager [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.894 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.895 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.895 2 DEBUG oslo_concurrency.lockutils [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:14 compute-1 nova_compute[230518]: 2025-10-02 13:00:14.896 2 DEBUG nova.compute.manager [req-93df1f47-7a78-4424-aebc-6378ac510cf0 req-83d9e8e8-09f1-497a-bba6-af7844cfa262 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Processing event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:00:14 compute-1 podman[294959]: 2025-10-02 13:00:14.897612559 +0000 UTC m=+0.134397747 container start fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:00:14 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : New worker (294986) forked
Oct 02 13:00:14 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : Loading success.
Oct 02 13:00:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:15 compute-1 sudo[294860]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.233 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.2335742, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.234 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Started (Lifecycle Event)
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.236 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.240 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.243 2 INFO nova.virt.libvirt.driver [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance spawned successfully.
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.243 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.258 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.263 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.266 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.266 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.267 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.267 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.268 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.268 2 DEBUG nova.virt.libvirt.driver [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.290 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.290 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.2336886, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.290 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Paused (Lifecycle Event)
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.314 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.317 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410015.238941, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Resumed (Lifecycle Event)
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.327 2 INFO nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 7.96 seconds to spawn the instance on the hypervisor.
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.328 2 DEBUG nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.336 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.340 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.370 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.391 2 INFO nova.compute.manager [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 8.99 seconds to build instance.
Oct 02 13:00:15 compute-1 nova_compute[230518]: 2025-10-02 13:00:15.411 2 DEBUG oslo_concurrency.lockutils [None req-d9a1ab5c-01fc-4d89-a4f2-e6d71700e346 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:15 compute-1 ceph-mon[80926]: pgmap v2615: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct 02 13:00:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:00:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3382480637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.028 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.029 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.029 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.030 2 DEBUG oslo_concurrency.lockutils [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.030 2 DEBUG nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.030 2 WARNING nova.compute.manager [req-2b07c479-1952-45e7-bfb2-068ce64ed2f7 req-80666858-6030-4ddb-8f6d-4f5133d1284c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state None.
Oct 02 13:00:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:17.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.526 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.527 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:17 compute-1 nova_compute[230518]: 2025-10-02 13:00:17.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:17 compute-1 podman[295010]: 2025-10-02 13:00:17.818980499 +0000 UTC m=+0.063508354 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:00:17 compute-1 podman[295009]: 2025-10-02 13:00:17.819155674 +0000 UTC m=+0.063795142 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, container_name=iscsid)
Oct 02 13:00:17 compute-1 ceph-mon[80926]: pgmap v2616: 305 pgs: 305 active+clean; 266 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 2.7 MiB/s wr, 118 op/s
Oct 02 13:00:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1652227038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2700135540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1727126200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-1 nova_compute[230518]: 2025-10-02 13:00:18.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:18 compute-1 nova_compute[230518]: 2025-10-02 13:00:18.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:00:18 compute-1 nova_compute[230518]: 2025-10-02 13:00:18.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:18 compute-1 NetworkManager[44960]: <info>  [1759410018.5541] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Oct 02 13:00:18 compute-1 NetworkManager[44960]: <info>  [1759410018.5554] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Oct 02 13:00:18 compute-1 nova_compute[230518]: 2025-10-02 13:00:18.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:18 compute-1 ovn_controller[129257]: 2025-10-02T13:00:18Z|00691|binding|INFO|Releasing lport 563b4b62-2487-404e-81e1-f7d5b24fae89 from this chassis (sb_readonly=0)
Oct 02 13:00:18 compute-1 nova_compute[230518]: 2025-10-02 13:00:18.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/424886843' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/898661570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.025 2 DEBUG nova.compute.manager [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.025 2 DEBUG nova.compute.manager [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing instance network info cache due to event network-changed-6b57c5d9-dd16-427a-85c2-c02dedb41e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.026 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Refreshing network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.196 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.197 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.197 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.199 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.200 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.201 2 INFO nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Terminating instance
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.202 2 DEBUG nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:00:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:19 compute-1 kernel: tap6b57c5d9-dd (unregistering): left promiscuous mode
Oct 02 13:00:19 compute-1 NetworkManager[44960]: <info>  [1759410019.3983] device (tap6b57c5d9-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:00:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 ovn_controller[129257]: 2025-10-02T13:00:19Z|00692|binding|INFO|Releasing lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 from this chassis (sb_readonly=0)
Oct 02 13:00:19 compute-1 ovn_controller[129257]: 2025-10-02T13:00:19Z|00693|binding|INFO|Setting lport 6b57c5d9-dd16-427a-85c2-c02dedb41e29 down in Southbound
Oct 02 13:00:19 compute-1 ovn_controller[129257]: 2025-10-02T13:00:19Z|00694|binding|INFO|Removing iface tap6b57c5d9-dd ovn-installed in OVS
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.415 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:44:86 10.100.0.10'], port_security=['fa:16:3e:6b:44:86 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '11d532af-2778-4065-8cf4-f2f53d3dbb1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1074515355', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f9e51548-d675-4462-aaf0-72519e827667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a001cef-b85b-4c88-a329-8db2a6ee024d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6b57c5d9-dd16-427a-85c2-c02dedb41e29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.416 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6b57c5d9-dd16-427a-85c2-c02dedb41e29 in datapath 2dacd3c2-a76f-4896-a922-fdbbab78ce12 unbound from our chassis
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.417 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2dacd3c2-a76f-4896-a922-fdbbab78ce12, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.419 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99218535-59bf-48bf-a99d-beff3500d7d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 namespace which is not needed anymore
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Oct 02 13:00:19 compute-1 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000a4.scope: Consumed 5.082s CPU time.
Oct 02 13:00:19 compute-1 systemd-machined[188247]: Machine qemu-80-instance-000000a4 terminated.
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : haproxy version is 2.8.14-c23fe91
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [NOTICE]   (294982) : path to executable is /usr/sbin/haproxy
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : Exiting Master process...
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : Exiting Master process...
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [ALERT]    (294982) : Current worker (294986) exited with code 143 (Terminated)
Oct 02 13:00:19 compute-1 neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12[294976]: [WARNING]  (294982) : All workers exited. Exiting... (0)
Oct 02 13:00:19 compute-1 systemd[1]: libpod-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope: Deactivated successfully.
Oct 02 13:00:19 compute-1 podman[295071]: 2025-10-02 13:00:19.56822333 +0000 UTC m=+0.045756973 container died fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:00:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a-userdata-shm.mount: Deactivated successfully.
Oct 02 13:00:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-7cd525238fd6b0eee5e5f3a7ce21043fc7136395e02b913b6624af579e8cb88e-merged.mount: Deactivated successfully.
Oct 02 13:00:19 compute-1 podman[295071]: 2025-10-02 13:00:19.615380305 +0000 UTC m=+0.092913948 container cleanup fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:19 compute-1 systemd[1]: libpod-conmon-fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a.scope: Deactivated successfully.
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.639 2 INFO nova.virt.libvirt.driver [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Instance destroyed successfully.
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.639 2 DEBUG nova.objects.instance [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 11d532af-2778-4065-8cf4-f2f53d3dbb1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.671 2 DEBUG nova.virt.libvirt.vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-109200331',display_name='tempest-TestNetworkBasicOps-server-109200331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-109200331',id=164,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOyVsgu/K1Be3T6lU80AA5key24FWBwYCD9vm40G5BNpftGNMWHfF5cy51qUgLzMoZT/j2dR3TucUmm1S5UEzlRAZFpDfO//FNaDZljlZXVXY30xYKCpB4GbuFcySY9mIg==',key_name='tempest-TestNetworkBasicOps-433983614',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-6r3anh17',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:15Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=11d532af-2778-4065-8cf4-f2f53d3dbb1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.672 2 DEBUG nova.network.os_vif_util [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.673 2 DEBUG nova.network.os_vif_util [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.673 2 DEBUG os_vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b57c5d9-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.681 2 INFO os_vif [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:44:86,bridge_name='br-int',has_traffic_filtering=True,id=6b57c5d9-dd16-427a-85c2-c02dedb41e29,network=Network(2dacd3c2-a76f-4896-a922-fdbbab78ce12),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6b57c5d9-dd')
Oct 02 13:00:19 compute-1 podman[295105]: 2025-10-02 13:00:19.694258596 +0000 UTC m=+0.050603933 container remove fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.700 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1e6646-ad89-4351-93c0-f2c76cc42a3b]: (4, ('Thu Oct  2 01:00:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a)\nfe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a\nThu Oct  2 01:00:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 (fe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a)\nfe7d558be975b9b8dc7e47170fa426b7e9ceba484d6f37380fb5d89212c7b03a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.703 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33a5d916-a8c9-4014-af75-4d6219da5692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.704 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2dacd3c2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:19 compute-1 kernel: tap2dacd3c2-a0: left promiscuous mode
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.710 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc1dc6e-2f80-4a1a-b9ad-5a6a6c20b9b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.732 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.733 2 DEBUG oslo_concurrency.lockutils [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.734 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.734 2 DEBUG nova.compute.manager [req-82c19cae-e986-4a0e-98ce-719a2f4f22c7 req-20a7b595-4c1f-44b7-b523-c6b1e58a3aad 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-unplugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.740 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac79de9c-a937-48d7-b730-cfccdba7a517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.742 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6696b250-fc5b-4743-9fbd-faf6094be684]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.756 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f179cc-b06d-44f2-84b5-81c3c27272cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787774, 'reachable_time': 31381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295143, 'error': None, 'target': 'ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 systemd[1]: run-netns-ovnmeta\x2d2dacd3c2\x2da76f\x2d4896\x2da922\x2dfdbbab78ce12.mount: Deactivated successfully.
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.760 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2dacd3c2-a76f-4896-a922-fdbbab78ce12 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:00:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:19.760 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[02553841-8632-4db0-9055-bbff2a94d41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:19 compute-1 nova_compute[230518]: 2025-10-02 13:00:19.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:19 compute-1 ceph-mon[80926]: pgmap v2617: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Oct 02 13:00:20 compute-1 nova_compute[230518]: 2025-10-02 13:00:20.401 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updated VIF entry in instance network info cache for port 6b57c5d9-dd16-427a-85c2-c02dedb41e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:20 compute-1 nova_compute[230518]: 2025-10-02 13:00:20.402 2 DEBUG nova.network.neutron [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [{"id": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "address": "fa:16:3e:6b:44:86", "network": {"id": "2dacd3c2-a76f-4896-a922-fdbbab78ce12", "bridge": "br-int", "label": "tempest-network-smoke--543222050", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b57c5d9-dd", "ovs_interfaceid": "6b57c5d9-dd16-427a-85c2-c02dedb41e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:20 compute-1 nova_compute[230518]: 2025-10-02 13:00:20.425 2 DEBUG oslo_concurrency.lockutils [req-85259286-a608-44ca-be7c-eb5dec9e8469 req-20d97250-7f46-4d2b-b5ae-363ed1135ade 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-11d532af-2778-4065-8cf4-f2f53d3dbb1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:20 compute-1 ceph-mon[80926]: pgmap v2618: 305 pgs: 305 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 203 op/s
Oct 02 13:00:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.097 2 INFO nova.virt.libvirt.driver [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deleting instance files /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c_del
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.098 2 INFO nova.virt.libvirt.driver [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deletion of /var/lib/nova/instances/11d532af-2778-4065-8cf4-f2f53d3dbb1c_del complete
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.148 2 INFO nova.compute.manager [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 1.95 seconds to destroy the instance on the hypervisor.
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.148 2 DEBUG oslo.service.loopingcall [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.149 2 DEBUG nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.149 2 DEBUG nova.network.neutron [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:00:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:00:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.860 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.861 2 DEBUG oslo_concurrency.lockutils [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.861 2 DEBUG nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] No waiting events found dispatching network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:21 compute-1 nova_compute[230518]: 2025-10-02 13:00:21.861 2 WARNING nova.compute.manager [req-e5edb6ed-b8bd-46cc-a815-99c3f7eec63e req-80ca8cef-a2c1-4ea3-86cd-d831412837d5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Received unexpected event network-vif-plugged-6b57c5d9-dd16-427a-85c2-c02dedb41e29 for instance with vm_state active and task_state deleting.
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.270 2 DEBUG nova.network.neutron [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.312 2 INFO nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Took 1.16 seconds to deallocate network for instance.
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.369 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.370 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.421 2 DEBUG oslo_concurrency.processutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281194222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.828 2 DEBUG oslo_concurrency.processutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.834 2 DEBUG nova.compute.provider_tree [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.861 2 DEBUG nova.scheduler.client.report [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.879 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.901 2 INFO nova.scheduler.client.report [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 11d532af-2778-4065-8cf4-f2f53d3dbb1c
Oct 02 13:00:22 compute-1 nova_compute[230518]: 2025-10-02 13:00:22.954 2 DEBUG oslo_concurrency.lockutils [None req-cb13863d-5109-4557-a522-5b49b7a2c62e 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "11d532af-2778-4065-8cf4-f2f53d3dbb1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:23.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:23.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.295347) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023295393, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2401, "num_deletes": 253, "total_data_size": 5663441, "memory_usage": 5738160, "flush_reason": "Manual Compaction"}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023326089, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 3704781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60440, "largest_seqno": 62836, "table_properties": {"data_size": 3695028, "index_size": 6119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20943, "raw_average_key_size": 20, "raw_value_size": 3675280, "raw_average_value_size": 3638, "num_data_blocks": 265, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759409820, "oldest_key_time": 1759409820, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 30813 microseconds, and 13208 cpu microseconds.
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.326157) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 3704781 bytes OK
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.326186) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328669) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328701) EVENT_LOG_v1 {"time_micros": 1759410023328690, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.328729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 5652693, prev total WAL file size 5652693, number of live WAL files 2.
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331376) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(3617KB)], [120(11MB)]
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023331434, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 15997508, "oldest_snapshot_seqno": -1}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 8881 keys, 14031804 bytes, temperature: kUnknown
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023445976, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 14031804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13970908, "index_size": 37616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 229685, "raw_average_key_size": 25, "raw_value_size": 13811657, "raw_average_value_size": 1555, "num_data_blocks": 1474, "num_entries": 8881, "num_filter_entries": 8881, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.446426) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 14031804 bytes
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.449930) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 122.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 9407, records dropped: 526 output_compression: NoCompression
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.449968) EVENT_LOG_v1 {"time_micros": 1759410023449952, "job": 76, "event": "compaction_finished", "compaction_time_micros": 114685, "compaction_time_cpu_micros": 57473, "output_level": 6, "num_output_files": 1, "total_output_size": 14031804, "num_input_records": 9407, "num_output_records": 8881, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023451594, "job": 76, "event": "table_file_deletion", "file_number": 122}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410023457372, "job": 76, "event": "table_file_deletion", "file_number": 120}
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.331184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:23.457516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:23 compute-1 sudo[295167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:00:23 compute-1 sudo[295167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:23 compute-1 sudo[295167]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:23 compute-1 sudo[295192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:00:23 compute-1 sudo[295192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:00:23 compute-1 sudo[295192]: pam_unix(sudo:session): session closed for user root
Oct 02 13:00:23 compute-1 nova_compute[230518]: 2025-10-02 13:00:23.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:23 compute-1 ceph-mon[80926]: pgmap v2619: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct 02 13:00:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3281194222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:00:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:24 compute-1 nova_compute[230518]: 2025-10-02 13:00:24.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:24 compute-1 nova_compute[230518]: 2025-10-02 13:00:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:24 compute-1 ceph-mon[80926]: pgmap v2620: 305 pgs: 305 active+clean; 264 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Oct 02 13:00:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:25.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.956 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:26 compute-1 nova_compute[230518]: 2025-10-02 13:00:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:26 compute-1 nova_compute[230518]: 2025-10-02 13:00:26.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:00:26 compute-1 nova_compute[230518]: 2025-10-02 13:00:26.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:00:26 compute-1 nova_compute[230518]: 2025-10-02 13:00:26.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:00:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:27.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:27 compute-1 ceph-mon[80926]: pgmap v2621: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.672041) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028672112, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 251, "total_data_size": 172085, "memory_usage": 178936, "flush_reason": "Manual Compaction"}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028675426, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 112598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62841, "largest_seqno": 63144, "table_properties": {"data_size": 110623, "index_size": 203, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5610, "raw_average_key_size": 20, "raw_value_size": 106707, "raw_average_value_size": 385, "num_data_blocks": 9, "num_entries": 277, "num_filter_entries": 277, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410023, "oldest_key_time": 1759410023, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 3459 microseconds, and 1479 cpu microseconds.
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675507) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 112598 bytes OK
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.675551) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677394) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677422) EVENT_LOG_v1 {"time_micros": 1759410028677414, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.677442) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 169846, prev total WAL file size 169846, number of live WAL files 2.
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.678208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(109KB)], [123(13MB)]
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028678247, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 14144402, "oldest_snapshot_seqno": -1}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 8649 keys, 10298843 bytes, temperature: kUnknown
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028769869, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 10298843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10244291, "index_size": 31848, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 225070, "raw_average_key_size": 26, "raw_value_size": 10093856, "raw_average_value_size": 1167, "num_data_blocks": 1232, "num_entries": 8649, "num_filter_entries": 8649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.770207) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10298843 bytes
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.771650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.2 rd, 112.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(217.1) write-amplify(91.5) OK, records in: 9158, records dropped: 509 output_compression: NoCompression
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.771680) EVENT_LOG_v1 {"time_micros": 1759410028771667, "job": 78, "event": "compaction_finished", "compaction_time_micros": 91737, "compaction_time_cpu_micros": 25722, "output_level": 6, "num_output_files": 1, "total_output_size": 10298843, "num_input_records": 9158, "num_output_records": 8649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028771871, "job": 78, "event": "table_file_deletion", "file_number": 125}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410028776471, "job": 78, "event": "table_file_deletion", "file_number": 123}
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.678111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:28 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:00:28.776610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:00:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:29 compute-1 nova_compute[230518]: 2025-10-02 13:00:29.061 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:00:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:29.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:29 compute-1 nova_compute[230518]: 2025-10-02 13:00:29.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:29 compute-1 nova_compute[230518]: 2025-10-02 13:00:29.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:29 compute-1 ceph-mon[80926]: pgmap v2622: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 906 KiB/s wr, 279 op/s
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.352 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.353 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.390 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.484 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.484 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.491 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.492 2 INFO nova.compute.claims [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:00:30 compute-1 nova_compute[230518]: 2025-10-02 13:00:30.664 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:31.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:31 compute-1 ceph-mon[80926]: pgmap v2623: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 33 KiB/s wr, 170 op/s
Oct 02 13:00:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223990827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.197 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.205 2 DEBUG nova.compute.provider_tree [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.233 2 DEBUG nova.scheduler.client.report [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.255 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.256 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.306 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.306 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.325 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.348 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.447 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.450 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.451 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating image(s)
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.486 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.516 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.544 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.550 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.618 2 DEBUG nova.policy [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.621 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.622 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.622 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.623 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.648 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:31 compute-1 nova_compute[230518]: 2025-10-02 13:00:31.651 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4223990827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.152 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.231 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.418 2 DEBUG nova.objects.instance [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.435 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.436 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Ensure instance console log exists: /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.437 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.437 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.438 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:32 compute-1 nova_compute[230518]: 2025-10-02 13:00:32.656 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Successfully created port: b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:00:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:33 compute-1 ceph-mon[80926]: pgmap v2624: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 160 op/s
Oct 02 13:00:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.154 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Successfully updated port: b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.198 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.199 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.199 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:00:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/855805887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.286 2 DEBUG nova.compute.manager [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-changed-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.287 2 DEBUG nova.compute.manager [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Refreshing instance network info cache due to event network-changed-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.287 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.387 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:00:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.636 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410019.636069, 11d532af-2778-4065-8cf4-f2f53d3dbb1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.637 2 INFO nova.compute.manager [-] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] VM Stopped (Lifecycle Event)
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:34 compute-1 nova_compute[230518]: 2025-10-02 13:00:34.689 2 DEBUG nova.compute.manager [None req-6007e259-0fb7-4417-91fe-435e8c6f0ef2 - - - - - -] [instance: 11d532af-2778-4065-8cf4-f2f53d3dbb1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.148 2 DEBUG nova.network.neutron [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance network_info: |[{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.185 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Refreshing network info cache for port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.188 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start _get_guest_xml network_info=[{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.196 2 WARNING nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.202 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.203 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.206 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.207 2 DEBUG nova.virt.libvirt.host [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.208 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.209 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.209 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.210 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.210 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.211 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.212 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.213 2 DEBUG nova.virt.hardware [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.217 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:35.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:35 compute-1 ceph-mon[80926]: pgmap v2625: 305 pgs: 305 active+clean; 204 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 132 op/s
Oct 02 13:00:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4257415503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.654 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.689 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:35 compute-1 nova_compute[230518]: 2025-10-02 13:00:35.694 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036831251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.175 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.178 2 DEBUG nova.virt.libvirt.vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,up
dated_at=2025-10-02T13:00:31Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.179 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.181 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.183 2 DEBUG nova.objects.instance [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.208 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <uuid>7c31bb0f-22b5-42a4-9b38-8ad3daac689f</uuid>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <name>instance-000000a6</name>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:name>tempest-₡-202037004</nova:name>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:00:35</nova:creationTime>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <nova:port uuid="b0acc3a3-80b3-4ec7-97e7-2e5813eb8790">
Oct 02 13:00:36 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <system>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="serial">7c31bb0f-22b5-42a4-9b38-8ad3daac689f</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="uuid">7c31bb0f-22b5-42a4-9b38-8ad3daac689f</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </system>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <os>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </os>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <features>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </features>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk">
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config">
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:36 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:d4:a1:f4"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <target dev="tapb0acc3a3-80"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/console.log" append="off"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <video>
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </video>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:00:36 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:00:36 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:00:36 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:00:36 compute-1 nova_compute[230518]: </domain>
Oct 02 13:00:36 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Preparing to wait for external event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.210 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.211 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.212 2 DEBUG nova.virt.libvirt.vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_cer
ts=None,updated_at=2025-10-02T13:00:31Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.212 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.213 2 DEBUG nova.network.os_vif_util [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.213 2 DEBUG os_vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.215 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb0acc3a3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb0acc3a3-80, col_values=(('external_ids', {'iface-id': 'b0acc3a3-80b3-4ec7-97e7-2e5813eb8790', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:a1:f4', 'vm-uuid': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:36 compute-1 NetworkManager[44960]: <info>  [1759410036.2232] manager: (tapb0acc3a3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.231 2 INFO os_vif [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80')
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.332 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.332 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.333 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:d4:a1:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.333 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Using config drive
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.462 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/707540778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4257415503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4036831251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.748 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Creating config drive at /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.758 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofuyrm2o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.915 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpofuyrm2o" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.944 2 DEBUG nova.storage.rbd_utils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:36 compute-1 nova_compute[230518]: 2025-10-02 13:00:36.947 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:37.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.308 2 DEBUG oslo_concurrency.processutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config 7c31bb0f-22b5-42a4-9b38-8ad3daac689f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.310 2 INFO nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deleting local config drive /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f/disk.config because it was imported into RBD.
Oct 02 13:00:37 compute-1 kernel: tapb0acc3a3-80: entered promiscuous mode
Oct 02 13:00:37 compute-1 ovn_controller[129257]: 2025-10-02T13:00:37Z|00695|binding|INFO|Claiming lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for this chassis.
Oct 02 13:00:37 compute-1 ovn_controller[129257]: 2025-10-02T13:00:37Z|00696|binding|INFO|b0acc3a3-80b3-4ec7-97e7-2e5813eb8790: Claiming fa:16:3e:d4:a1:f4 10.100.0.4
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.3777] manager: (tapb0acc3a3-80): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:37 compute-1 ovn_controller[129257]: 2025-10-02T13:00:37Z|00697|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 ovn-installed in OVS
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:37 compute-1 ovn_controller[129257]: 2025-10-02T13:00:37Z|00698|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 up in Southbound
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.394 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:a1:f4 10.100.0.4'], port_security=['fa:16:3e:d4:a1:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.395 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.396 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.410 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63eb38-71ec-4d5c-9652-b76bad61e588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.412 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap052f341a-01 in ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.415 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap052f341a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.415 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[61bc6c14-6115-4dcf-9478-f0f1f5c31d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e95402fb-104d-470b-a62b-7b0484898a2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 systemd-machined[188247]: New machine qemu-81-instance-000000a6.
Oct 02 13:00:37 compute-1 systemd[1]: Started Virtual Machine qemu-81-instance-000000a6.
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.436 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[67fd4c74-555a-4e2f-bc33-8899d9958bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 systemd-udevd[295543]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.4603] device (tapb0acc3a3-80): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.4616] device (tapb0acc3a3-80): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.463 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b64e454e-1906-433e-bfeb-640c8ad9d5dc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.504 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9289c3-b576-442a-8381-decff9ad62b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.508 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated VIF entry in instance network info cache for port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.509 2 DEBUG nova.network.neutron [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:37 compute-1 systemd-udevd[295546]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.512 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0830cf38-81f0-4315-a6b2-17f0bf7d6384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.5139] manager: (tap052f341a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.539 2 DEBUG oslo_concurrency.lockutils [req-91226429-57a6-4fb1-8e3d-c56219d07925 req-6b7535ac-22fe-4791-9955-494b295d592c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.552 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a7eb2eb5-8100-4dc8-97e3-948170feffca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.557 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e2892c3a-eaae-430e-8000-71a06c3ca0fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.5799] device (tap052f341a-00): carrier: link connected
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.587 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[df8390fc-1cb8-4be9-9aa1-f0963f2f70ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.624 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8d4984-776a-4793-9824-5b5977bacbc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295574, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.645 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b427cef8-622b-482a-93b9-4cc04bfca074]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:a55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790113, 'tstamp': 790113}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295575, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.678 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b8b5fd-babc-433d-b31f-110128d1e2ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295576, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ceph-mon[80926]: pgmap v2626: 305 pgs: 305 active+clean; 193 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.2 MiB/s wr, 169 op/s
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG nova.compute.manager [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.684 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.685 2 DEBUG oslo_concurrency.lockutils [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.685 2 DEBUG nova.compute.manager [req-b6a32515-6464-4721-a05c-4a3f5867d7c3 req-ccb58e5c-fa0d-4b64-a0d7-7456b1118c22 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Processing event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4e453310-93cc-4c58-8d23-1b949dec2532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.791 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0359ec-d8fe-4817-9b2d-b2f6e80b20fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.793 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.794 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:37 compute-1 kernel: tap052f341a-00: entered promiscuous mode
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:37 compute-1 NetworkManager[44960]: <info>  [1759410037.7965] manager: (tap052f341a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.799 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:37 compute-1 ovn_controller[129257]: 2025-10-02T13:00:37Z|00699|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.801 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.802 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[32a0f8ac-b8b2-455a-86a7-200243637917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.803 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-052f341a-0628-4183-a5e0-76312bc986c6
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/052f341a-0628-4183-a5e0-76312bc986c6.pid.haproxy
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 052f341a-0628-4183-a5e0-76312bc986c6
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:00:37 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:37.804 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'env', 'PROCESS_TAG=haproxy-052f341a-0628-4183-a5e0-76312bc986c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/052f341a-0628-4183-a5e0-76312bc986c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:00:37 compute-1 nova_compute[230518]: 2025-10-02 13:00:37.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:38 compute-1 podman[295650]: 2025-10-02 13:00:38.194499159 +0000 UTC m=+0.076598531 container create 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:00:38 compute-1 podman[295650]: 2025-10-02 13:00:38.140538242 +0000 UTC m=+0.022637624 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:00:38 compute-1 systemd[1]: Started libpod-conmon-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope.
Oct 02 13:00:38 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:00:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e09e09a9c189154a28ffa2be38e9a5e659937380e00a8af389eda1472e8aeea1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:00:38 compute-1 podman[295650]: 2025-10-02 13:00:38.332073733 +0000 UTC m=+0.214173105 container init 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 02 13:00:38 compute-1 podman[295667]: 2025-10-02 13:00:38.332199777 +0000 UTC m=+0.081503613 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 13:00:38 compute-1 podman[295650]: 2025-10-02 13:00:38.338467482 +0000 UTC m=+0.220566824 container start 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:00:38 compute-1 podman[295664]: 2025-10-02 13:00:38.351106994 +0000 UTC m=+0.101456572 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 13:00:38 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : New worker (295714) forked
Oct 02 13:00:38 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : Loading success.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.461 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.463 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4612422, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.463 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Started (Lifecycle Event)
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.468 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.472 2 INFO nova.virt.libvirt.driver [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance spawned successfully.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.473 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.498 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.512 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.517 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.518 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.519 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.519 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.520 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.520 2 DEBUG nova.virt.libvirt.driver [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.546 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.546 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4624336, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.547 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Paused (Lifecycle Event)
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.576 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.580 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410038.4696395, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.580 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Resumed (Lifecycle Event)
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.591 2 INFO nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 7.14 seconds to spawn the instance on the hypervisor.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.592 2 DEBUG nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.602 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.605 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.630 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.656 2 INFO nova.compute.manager [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 8.21 seconds to build instance.
Oct 02 13:00:38 compute-1 nova_compute[230518]: 2025-10-02 13:00:38.673 2 DEBUG oslo_concurrency.lockutils [None req-3d8c928f-f7ec-47f0-ad92-d64a9c5216ab b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:39.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.767 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.768 2 DEBUG oslo_concurrency.lockutils [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.768 2 DEBUG nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:00:39 compute-1 nova_compute[230518]: 2025-10-02 13:00:39.768 2 WARNING nova.compute.manager [req-f1fc28b3-9993-4af2-a2e3-84279cc7d239 req-62a1f163-f404-43b2-92a3-fa20983278b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received unexpected event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with vm_state active and task_state None.
Oct 02 13:00:39 compute-1 ceph-mon[80926]: pgmap v2627: 305 pgs: 305 active+clean; 277 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 170 op/s
Oct 02 13:00:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1953748865' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3874696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:41.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:41 compute-1 nova_compute[230518]: 2025-10-02 13:00:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:41 compute-1 ceph-mon[80926]: pgmap v2628: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 MiB/s wr, 167 op/s
Oct 02 13:00:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:43.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:43.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:43 compute-1 ceph-mon[80926]: pgmap v2629: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 222 op/s
Oct 02 13:00:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1892820477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:44 compute-1 nova_compute[230518]: 2025-10-02 13:00:44.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:45.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:45.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:45 compute-1 ceph-mon[80926]: pgmap v2630: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 203 op/s
Oct 02 13:00:45 compute-1 nova_compute[230518]: 2025-10-02 13:00:45.893 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:45 compute-1 nova_compute[230518]: 2025-10-02 13:00:45.894 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:45 compute-1 nova_compute[230518]: 2025-10-02 13:00:45.920 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.034 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.034 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.044 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.045 2 INFO nova.compute.claims [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.180 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:00:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4136340579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.683 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.689 2 DEBUG nova.compute.provider_tree [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.713 2 DEBUG nova.scheduler.client.report [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.758 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.759 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.825 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.826 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.857 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.880 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:00:46 compute-1 ceph-mon[80926]: pgmap v2631: 305 pgs: 305 active+clean; 324 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.7 MiB/s wr, 277 op/s
Oct 02 13:00:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4136340579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.983 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.984 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:00:46 compute-1 nova_compute[230518]: 2025-10-02 13:00:46.985 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating image(s)
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.013 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.045 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.086 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.093 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:47.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.159 2 DEBUG nova.policy [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b04159d5bffe4259876ce57aec09716e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.198 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.198 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.199 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.199 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.228 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.232 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 85538bf5-69f3-4c92-baf5-a998835df357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:47.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.516 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 85538bf5-69f3-4c92-baf5-a998835df357_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.592 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] resizing rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.936 2 DEBUG nova.objects.instance [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'migration_context' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.960 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Ensure instance console log exists: /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.961 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:47 compute-1 nova_compute[230518]: 2025-10-02 13:00:47.962 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:48 compute-1 nova_compute[230518]: 2025-10-02 13:00:48.185 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Successfully created port: 1f164634-caf5-4a9f-b0af-a47e5681a252 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:00:48 compute-1 podman[295913]: 2025-10-02 13:00:48.821392738 +0000 UTC m=+0.069807320 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:00:48 compute-1 podman[295912]: 2025-10-02 13:00:48.85042393 +0000 UTC m=+0.095043364 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:00:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.172 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Successfully updated port: 1f164634-caf5-4a9f-b0af-a47e5681a252 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.188 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.188 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquired lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.189 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:00:49 compute-1 ceph-mon[80926]: pgmap v2632: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.3 MiB/s wr, 276 op/s
Oct 02 13:00:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/741611088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/42663333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2777311355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:49.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.542 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:00:49 compute-1 nova_compute[230518]: 2025-10-02 13:00:49.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4021418989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1634489610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.312 2 DEBUG nova.network.neutron [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.339 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Releasing lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.339 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance network_info: |[{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.342 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start _get_guest_xml network_info=[{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.347 2 WARNING nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.351 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.352 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.355 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.356 2 DEBUG nova.virt.libvirt.host [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.357 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.358 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.358 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.359 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.360 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.361 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.361 2 DEBUG nova.virt.hardware [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.364 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3463278456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.865 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.890 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.894 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.923 2 DEBUG nova.compute.manager [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-changed-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG nova.compute.manager [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Refreshing instance network info cache due to event network-changed-1f164634-caf5-4a9f-b0af-a47e5681a252. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:00:50 compute-1 nova_compute[230518]: 2025-10-02 13:00:50.924 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Refreshing network info cache for port 1f164634-caf5-4a9f-b0af-a47e5681a252 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:00:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:00:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:00:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:00:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1981855118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.326 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.virt.libvirt.vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:46Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.328 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.329 2 DEBUG nova.objects.instance [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.358 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <uuid>85538bf5-69f3-4c92-baf5-a998835df357</uuid>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <name>instance-000000a9</name>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersTestJSON-server-350886286</nova:name>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:00:50</nova:creationTime>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:user uuid="b04159d5bffe4259876ce57aec09716e">tempest-ServersTestJSON-146860306-project-member</nova:user>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:project uuid="a6be0e77fb5b4355b4f2276c9e57d2bd">tempest-ServersTestJSON-146860306</nova:project>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <nova:port uuid="1f164634-caf5-4a9f-b0af-a47e5681a252">
Oct 02 13:00:51 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <system>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="serial">85538bf5-69f3-4c92-baf5-a998835df357</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="uuid">85538bf5-69f3-4c92-baf5-a998835df357</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </system>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <os>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </os>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <features>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </features>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/85538bf5-69f3-4c92-baf5-a998835df357_disk">
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/85538bf5-69f3-4c92-baf5-a998835df357_disk.config">
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </source>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:00:51 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:53:b7:17"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <target dev="tap1f164634-ca"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/console.log" append="off"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <video>
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </video>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:00:51 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:00:51 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:00:51 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:00:51 compute-1 nova_compute[230518]: </domain>
Oct 02 13:00:51 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.359 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Preparing to wait for external event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.360 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.virt.libvirt.vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-me
mber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:00:46Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.361 2 DEBUG nova.network.os_vif_util [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.362 2 DEBUG os_vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.365 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f164634-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.369 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1f164634-ca, col_values=(('external_ids', {'iface-id': '1f164634-caf5-4a9f-b0af-a47e5681a252', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:b7:17', 'vm-uuid': '85538bf5-69f3-4c92-baf5-a998835df357'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-1 NetworkManager[44960]: <info>  [1759410051.3719] manager: (tap1f164634-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.377 2 INFO os_vif [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca')
Oct 02 13:00:51 compute-1 ceph-mon[80926]: pgmap v2633: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.3 MiB/s wr, 241 op/s
Oct 02 13:00:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3463278456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1981855118' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.490 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.491 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.492 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] No VIF found with MAC fa:16:3e:53:b7:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.493 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Using config drive
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.528 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.946 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Creating config drive at /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config
Oct 02 13:00:51 compute-1 nova_compute[230518]: 2025-10-02 13:00:51.950 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7blrrk_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.082 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7blrrk_v" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.118 2 DEBUG nova.storage.rbd_utils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] rbd image 85538bf5-69f3-4c92-baf5-a998835df357_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.122 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config 85538bf5-69f3-4c92-baf5-a998835df357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:a1:f4 10.100.0.4
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:a1:f4 10.100.0.4
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.456 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updated VIF entry in instance network info cache for port 1f164634-caf5-4a9f-b0af-a47e5681a252. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.458 2 DEBUG nova.network.neutron [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [{"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.529 2 DEBUG oslo_concurrency.lockutils [req-f8d132c9-693d-495b-a02c-75a95329253a req-4d4dc02a-9cd4-4dbe-9a86-8af1bc8b7600 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-85538bf5-69f3-4c92-baf5-a998835df357" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.581 2 DEBUG oslo_concurrency.processutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config 85538bf5-69f3-4c92-baf5-a998835df357_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.582 2 INFO nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deleting local config drive /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357/disk.config because it was imported into RBD.
Oct 02 13:00:52 compute-1 kernel: tap1f164634-ca: entered promiscuous mode
Oct 02 13:00:52 compute-1 NetworkManager[44960]: <info>  [1759410052.6477] manager: (tap1f164634-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/331)
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00700|binding|INFO|Claiming lport 1f164634-caf5-4a9f-b0af-a47e5681a252 for this chassis.
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00701|binding|INFO|1f164634-caf5-4a9f-b0af-a47e5681a252: Claiming fa:16:3e:53:b7:17 10.100.0.5
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.670 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:17 10.100.0.5'], port_security=['fa:16:3e:53:b7:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85538bf5-69f3-4c92-baf5-a998835df357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1f164634-caf5-4a9f-b0af-a47e5681a252) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.673 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1f164634-caf5-4a9f-b0af-a47e5681a252 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 bound to our chassis
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00702|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 ovn-installed in OVS
Oct 02 13:00:52 compute-1 ovn_controller[129257]: 2025-10-02T13:00:52Z|00703|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 up in Southbound
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.677 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-1 systemd-udevd[296086]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.700 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[72e1401d-9e4d-4313-b769-46f3c5c02f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 systemd-machined[188247]: New machine qemu-82-instance-000000a9.
Oct 02 13:00:52 compute-1 NetworkManager[44960]: <info>  [1759410052.7117] device (tap1f164634-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:00:52 compute-1 NetworkManager[44960]: <info>  [1759410052.7145] device (tap1f164634-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:00:52 compute-1 systemd[1]: Started Virtual Machine qemu-82-instance-000000a9.
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.759 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6afe6b-df95-4952-a11c-e9c35c61ad89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.767 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[db456c06-b926-43c4-b8bf-71e838e19f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.803 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b5d6dd-26f9-4782-9344-56c13100533f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.832 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8862787b-a6e7-482e-8158-1ef1c5d6e60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296099, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.859 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6573aac8-ffb1-4dbb-8769-5da595758d93]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790131, 'tstamp': 790131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296101, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790134, 'tstamp': 790134}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296101, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.862 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-1 nova_compute[230518]: 2025-10-02 13:00:52.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.865 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.865 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.866 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:00:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:00:52.867 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:00:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:53 compute-1 ceph-mon[80926]: pgmap v2634: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 7.0 MiB/s wr, 309 op/s
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.800 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410053.800076, 85538bf5-69f3-4c92-baf5-a998835df357 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.801 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Started (Lifecycle Event)
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.826 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.829 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410053.8001754, 85538bf5-69f3-4c92-baf5-a998835df357 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.830 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Paused (Lifecycle Event)
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.847 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.850 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:53 compute-1 nova_compute[230518]: 2025-10-02 13:00:53.878 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:54 compute-1 nova_compute[230518]: 2025-10-02 13:00:54.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:55.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:55 compute-1 ceph-mon[80926]: pgmap v2635: 305 pgs: 305 active+clean; 323 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.0 MiB/s wr, 253 op/s
Oct 02 13:00:56 compute-1 nova_compute[230518]: 2025-10-02 13:00:56.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:00:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:57.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:00:57 compute-1 ceph-mon[80926]: pgmap v2636: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.5 MiB/s wr, 382 op/s
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.775 2 DEBUG nova.compute.manager [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.775 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.776 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.776 2 DEBUG oslo_concurrency.lockutils [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.777 2 DEBUG nova.compute.manager [req-1471ed84-181c-431d-a81e-c26c7b9ee9bb req-9d817f4e-f7f7-47c4-ad72-3d7e7662023d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Processing event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.778 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.784 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.785 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410057.7845087, 85538bf5-69f3-4c92-baf5-a998835df357 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.785 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Resumed (Lifecycle Event)
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.788 2 INFO nova.virt.libvirt.driver [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance spawned successfully.
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.789 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.816 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.822 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.827 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.827 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.828 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.829 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.829 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.830 2 DEBUG nova.virt.libvirt.driver [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.864 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.905 2 INFO nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 10.92 seconds to spawn the instance on the hypervisor.
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.906 2 DEBUG nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.973 2 INFO nova.compute.manager [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 12.01 seconds to build instance.
Oct 02 13:00:57 compute-1 nova_compute[230518]: 2025-10-02 13:00:57.992 2 DEBUG oslo_concurrency.lockutils [None req-80ee30bb-f36d-4ee7-990e-57f0029e5256 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:00:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:00:59.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:00:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:00:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:00:59.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:00:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:00:59 compute-1 nova_compute[230518]: 2025-10-02 13:00:59.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:00:59 compute-1 ceph-mon[80926]: pgmap v2637: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.5 MiB/s wr, 366 op/s
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.216 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.217 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.218 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.219 2 DEBUG oslo_concurrency.lockutils [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.219 2 DEBUG nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.220 2 WARNING nova.compute.manager [req-3d06d561-35d2-44c9-9b0d-40eefde8e8b4 req-66b5cc14-44b3-400b-8d77-e5a0f292344e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received unexpected event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with vm_state active and task_state None.
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.705 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.705 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.706 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.707 2 INFO nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Terminating instance
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.708 2 DEBUG nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:01:00 compute-1 kernel: tap1f164634-ca (unregistering): left promiscuous mode
Oct 02 13:01:00 compute-1 NetworkManager[44960]: <info>  [1759410060.7499] device (tap1f164634-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 ovn_controller[129257]: 2025-10-02T13:01:00Z|00704|binding|INFO|Releasing lport 1f164634-caf5-4a9f-b0af-a47e5681a252 from this chassis (sb_readonly=0)
Oct 02 13:01:00 compute-1 ovn_controller[129257]: 2025-10-02T13:01:00Z|00705|binding|INFO|Setting lport 1f164634-caf5-4a9f-b0af-a47e5681a252 down in Southbound
Oct 02 13:01:00 compute-1 ovn_controller[129257]: 2025-10-02T13:01:00Z|00706|binding|INFO|Removing iface tap1f164634-ca ovn-installed in OVS
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.770 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b7:17 10.100.0.5'], port_security=['fa:16:3e:53:b7:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '85538bf5-69f3-4c92-baf5-a998835df357', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=1f164634-caf5-4a9f-b0af-a47e5681a252) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.771 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 1f164634-caf5-4a9f-b0af-a47e5681a252 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 052f341a-0628-4183-a5e0-76312bc986c6
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.793 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01e2f8e2-3966-4429-b4de-68dc975a47a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.823 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[145150bd-56b1-4ec0-9381-849cc7f64678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.825 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f68dc402-2143-4802-a203-ced730c03032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a9.scope: Consumed 3.962s CPU time.
Oct 02 13:01:00 compute-1 systemd-machined[188247]: Machine qemu-82-instance-000000a9 terminated.
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.855 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[10e667e3-bae2-4e5d-abf3-f4317c4312e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[972fb6ac-ab7b-4efb-87f1-a9d875934eb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap052f341a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:0a:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790113, 'reachable_time': 28247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296155, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3ccfe9-0e43-4ea9-9be6-8ef3b69443d6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790131, 'tstamp': 790131}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296156, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap052f341a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790134, 'tstamp': 790134}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296156, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.889 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap052f341a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap052f341a-00, col_values=(('external_ids', {'iface-id': '61e15bc4-7cff-4f2c-a6c4-d987859313b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:00.896 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.952 2 INFO nova.virt.libvirt.driver [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Instance destroyed successfully.
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.953 2 DEBUG nova.objects.instance [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 85538bf5-69f3-4c92-baf5-a998835df357 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.968 2 DEBUG nova.virt.libvirt.vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-350886286',display_name='tempest-ServersTestJSON-server-350886286',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverstestjson-server-350886286',id=169,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-deu6rtq4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:57Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=85538bf5-69f3-4c92-baf5-a998835df357,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.968 2 DEBUG nova.network.os_vif_util [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "1f164634-caf5-4a9f-b0af-a47e5681a252", "address": "fa:16:3e:53:b7:17", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1f164634-ca", "ovs_interfaceid": "1f164634-caf5-4a9f-b0af-a47e5681a252", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.969 2 DEBUG nova.network.os_vif_util [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.969 2 DEBUG os_vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f164634-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:01:00 compute-1 nova_compute[230518]: 2025-10-02 13:01:00.977 2 INFO os_vif [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b7:17,bridge_name='br-int',has_traffic_filtering=True,id=1f164634-caf5-4a9f-b0af-a47e5681a252,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1f164634-ca')
Oct 02 13:01:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:01 compute-1 CROND[296187]: (root) CMD (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-1 run-parts[296190]: (/etc/cron.hourly) starting 0anacron
Oct 02 13:01:01 compute-1 run-parts[296196]: (/etc/cron.hourly) finished 0anacron
Oct 02 13:01:01 compute-1 CROND[296186]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 02 13:01:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.580 2 INFO nova.virt.libvirt.driver [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deleting instance files /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357_del
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.583 2 INFO nova.virt.libvirt.driver [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deletion of /var/lib/nova/instances/85538bf5-69f3-4c92-baf5-a998835df357_del complete
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.649 2 INFO nova.compute.manager [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 0.94 seconds to destroy the instance on the hypervisor.
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.650 2 DEBUG oslo.service.loopingcall [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.651 2 DEBUG nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:01:01 compute-1 nova_compute[230518]: 2025-10-02 13:01:01.652 2 DEBUG nova.network.neutron [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:01:01 compute-1 ceph-mon[80926]: pgmap v2638: 305 pgs: 305 active+clean; 339 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.7 MiB/s wr, 346 op/s
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.369 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.371 2 DEBUG oslo_concurrency.lockutils [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.372 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.372 2 DEBUG nova.compute.manager [req-92ca2cc3-59cf-4763-a6b4-3ac2ab2643da req-f23f1b61-8414-4334-960d-72723c6e0086 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-unplugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:01:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:02.721 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:02.723 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.764 2 DEBUG nova.network.neutron [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.791 2 INFO nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Took 1.14 seconds to deallocate network for instance.
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.853 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.853 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.887 2 DEBUG nova.compute.manager [req-8b9deb92-5493-41f6-bdb7-ebe8b99630b2 req-d4c2d0b2-7b09-4428-94eb-e8c34bbf7386 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-deleted-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:02 compute-1 nova_compute[230518]: 2025-10-02 13:01:02.949 2 DEBUG oslo_concurrency.processutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:03 compute-1 ceph-mon[80926]: pgmap v2639: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 3.8 MiB/s wr, 372 op/s
Oct 02 13:01:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:03 compute-1 ovn_controller[129257]: 2025-10-02T13:01:03Z|00707|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:03.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:03 compute-1 ovn_controller[129257]: 2025-10-02T13:01:03Z|00708|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2464258050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.363 2 DEBUG oslo_concurrency.processutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.369 2 DEBUG nova.compute.provider_tree [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.386 2 DEBUG nova.scheduler.client.report [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.417 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.449 2 INFO nova.scheduler.client.report [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 85538bf5-69f3-4c92-baf5-a998835df357
Oct 02 13:01:03 compute-1 nova_compute[230518]: 2025-10-02 13:01:03.512 2 DEBUG oslo_concurrency.lockutils [None req-11a3a461-3d91-4490-9a4b-e783baac9048 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2464258050' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.493 2 DEBUG nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "85538bf5-69f3-4c92-baf5-a998835df357-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG oslo_concurrency.lockutils [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "85538bf5-69f3-4c92-baf5-a998835df357-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.494 2 DEBUG nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] No waiting events found dispatching network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.495 2 WARNING nova.compute.manager [req-0b22dc4e-83f5-4edd-a0e2-386124f52005 req-a4fea588-37df-4c71-8fc2-f9bd11e0c922 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Received unexpected event network-vif-plugged-1f164634-caf5-4a9f-b0af-a47e5681a252 for instance with vm_state deleted and task_state None.
Oct 02 13:01:04 compute-1 nova_compute[230518]: 2025-10-02 13:01:04.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:05.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:01:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:01:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:05 compute-1 ceph-mon[80926]: pgmap v2640: 305 pgs: 305 active+clean; 319 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 506 KiB/s wr, 256 op/s
Oct 02 13:01:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2695771455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:05 compute-1 nova_compute[230518]: 2025-10-02 13:01:05.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:07 compute-1 ceph-mon[80926]: pgmap v2641: 305 pgs: 305 active+clean; 308 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 303 op/s
Oct 02 13:01:08 compute-1 podman[296222]: 2025-10-02 13:01:08.866685538 +0000 UTC m=+0.111581049 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:01:08 compute-1 podman[296221]: 2025-10-02 13:01:08.898220837 +0000 UTC m=+0.136592674 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:01:09 compute-1 ceph-mon[80926]: pgmap v2642: 305 pgs: 305 active+clean; 324 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 187 op/s
Oct 02 13:01:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1154594658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:01:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:09.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:01:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:09.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:09 compute-1 nova_compute[230518]: 2025-10-02 13:01:09.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:10 compute-1 nova_compute[230518]: 2025-10-02 13:01:10.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:11.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:11 compute-1 ceph-mon[80926]: pgmap v2643: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 187 op/s
Oct 02 13:01:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:11.726 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:12 compute-1 nova_compute[230518]: 2025-10-02 13:01:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1735928497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.107 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.108 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.109 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.109 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.110 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969665086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.599 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:13 compute-1 ceph-mon[80926]: pgmap v2644: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 214 op/s
Oct 02 13:01:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1881902432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2969665086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.684 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.684 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.944 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4088MB free_disk=20.841327667236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:13 compute-1 nova_compute[230518]: 2025-10-02 13:01:13.946 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.169 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.224 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.225 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.254 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.287 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.321 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/507289924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.739 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.744 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.767 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.798 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:01:14 compute-1 nova_compute[230518]: 2025-10-02 13:01:14.798 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:15 compute-1 ceph-mon[80926]: pgmap v2645: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 4.8 MiB/s wr, 160 op/s
Oct 02 13:01:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/507289924' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:15 compute-1 nova_compute[230518]: 2025-10-02 13:01:15.952 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410060.9510663, 85538bf5-69f3-4c92-baf5-a998835df357 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:15 compute-1 nova_compute[230518]: 2025-10-02 13:01:15.952 2 INFO nova.compute.manager [-] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] VM Stopped (Lifecycle Event)
Oct 02 13:01:15 compute-1 nova_compute[230518]: 2025-10-02 13:01:15.973 2 DEBUG nova.compute.manager [None req-ff0c47df-082b-420b-b08b-26a39df055f6 - - - - - -] [instance: 85538bf5-69f3-4c92-baf5-a998835df357] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:15 compute-1 nova_compute[230518]: 2025-10-02 13:01:15.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2598328140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:17.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:17 compute-1 ceph-mon[80926]: pgmap v2646: 305 pgs: 305 active+clean; 341 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 666 KiB/s rd, 6.1 MiB/s wr, 199 op/s
Oct 02 13:01:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3221705253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1283251929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:18 compute-1 nova_compute[230518]: 2025-10-02 13:01:18.785 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:18 compute-1 nova_compute[230518]: 2025-10-02 13:01:18.786 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:18 compute-1 nova_compute[230518]: 2025-10-02 13:01:18.786 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:18 compute-1 nova_compute[230518]: 2025-10-02 13:01:18.787 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:18 compute-1 nova_compute[230518]: 2025-10-02 13:01:18.787 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:01:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:19.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:19 compute-1 ceph-mon[80926]: pgmap v2647: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.5 MiB/s wr, 177 op/s
Oct 02 13:01:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:19 compute-1 nova_compute[230518]: 2025-10-02 13:01:19.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:19 compute-1 podman[296313]: 2025-10-02 13:01:19.803534869 +0000 UTC m=+0.053111962 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:01:19 compute-1 podman[296314]: 2025-10-02 13:01:19.825656765 +0000 UTC m=+0.064338059 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 13:01:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4245233163' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3634125951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:20 compute-1 nova_compute[230518]: 2025-10-02 13:01:20.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:01:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:21.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.672 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.672 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.730 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:01:21 compute-1 ceph-mon[80926]: pgmap v2648: 305 pgs: 305 active+clean; 296 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 212 op/s
Oct 02 13:01:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1258272080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.977 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.978 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.989 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:01:21 compute-1 nova_compute[230518]: 2025-10-02 13:01:21.990 2 INFO nova.compute.claims [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.141 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.309 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:01:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2188608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.784 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.789 2 DEBUG nova.compute.provider_tree [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.849 2 DEBUG nova.scheduler.client.report [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.890 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.891 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.965 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.965 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:01:22 compute-1 nova_compute[230518]: 2025-10-02 13:01:22.986 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:01:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2188608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.023 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:23.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:23.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.351 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.353 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.354 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating image(s)
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.396 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.440 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.485 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.492 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.596 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.598 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.599 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.600 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.639 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.643 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:23 compute-1 nova_compute[230518]: 2025-10-02 13:01:23.672 2 DEBUG nova.policy [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:01:23 compute-1 sudo[296468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:23 compute-1 sudo[296468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-1 sudo[296468]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-1 sudo[296496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:01:23 compute-1 sudo[296496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-1 sudo[296496]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-1 sudo[296521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:23 compute-1 sudo[296521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:23 compute-1 sudo[296521]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:23 compute-1 sudo[296546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:01:23 compute-1 sudo[296546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:24 compute-1 ceph-mon[80926]: pgmap v2649: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 177 op/s
Oct 02 13:01:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4026947805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.098 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.172 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.278 2 DEBUG nova.objects.instance [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.318 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Ensure instance console log exists: /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.319 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.320 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:24 compute-1 sudo[296546]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:24 compute-1 nova_compute[230518]: 2025-10-02 13:01:24.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:25 compute-1 ceph-mon[80926]: pgmap v2650: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 133 op/s
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:01:25 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:01:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:01:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.957 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.958 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:25 compute-1 nova_compute[230518]: 2025-10-02 13:01:25.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:26 compute-1 nova_compute[230518]: 2025-10-02 13:01:26.058 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Successfully created port: 11701be6-55ae-458a-a3a0-55a0c467ef46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.118 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:01:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:01:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:27.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.638 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:01:27 compute-1 nova_compute[230518]: 2025-10-02 13:01:27.639 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:27 compute-1 ceph-mon[80926]: pgmap v2651: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.5 MiB/s wr, 160 op/s
Oct 02 13:01:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2992057832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.111 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Successfully updated port: 11701be6-55ae-458a-a3a0-55a0c467ef46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.135 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.592 2 DEBUG nova.compute.manager [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.593 2 DEBUG nova.compute.manager [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.594 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:28 compute-1 nova_compute[230518]: 2025-10-02 13:01:28.766 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:01:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:29.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:29.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:29 compute-1 nova_compute[230518]: 2025-10-02 13:01:29.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:29 compute-1 ceph-mon[80926]: pgmap v2652: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Oct 02 13:01:30 compute-1 nova_compute[230518]: 2025-10-02 13:01:30.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:31.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:31 compute-1 ceph-mon[80926]: pgmap v2653: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 02 13:01:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:31.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.423 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.474 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.475 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.538 2 DEBUG nova.network.neutron [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.574 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.575 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance network_info: |[{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.576 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.576 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.582 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start _get_guest_xml network_info=[{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.589 2 WARNING nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.595 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.597 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.609 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.610 2 DEBUG nova.virt.libvirt.host [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.612 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.613 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.614 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.614 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.615 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.615 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.616 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.616 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.617 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.617 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.618 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.618 2 DEBUG nova.virt.hardware [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:01:31 compute-1 nova_compute[230518]: 2025-10-02 13:01:31.623 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2312900658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:32 compute-1 nova_compute[230518]: 2025-10-02 13:01:32.127 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:32 compute-1 nova_compute[230518]: 2025-10-02 13:01:32.156 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:32 compute-1 nova_compute[230518]: 2025-10-02 13:01:32.160 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:01:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/204003872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2312900658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.173 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.176 2 DEBUG nova.virt.libvirt.vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:23Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.176 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.177 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.179 2 DEBUG nova.objects.instance [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.196 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <uuid>0386b301-1cd5-430d-8fc1-691b6bc3ad47</uuid>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <name>instance-000000ab</name>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-597885113</nova:name>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:01:31</nova:creationTime>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <nova:port uuid="11701be6-55ae-458a-a3a0-55a0c467ef46">
Oct 02 13:01:33 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <system>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="serial">0386b301-1cd5-430d-8fc1-691b6bc3ad47</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="uuid">0386b301-1cd5-430d-8fc1-691b6bc3ad47</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </system>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <os>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </os>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <features>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </features>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk">
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </source>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config">
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </source>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:01:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:55:f2:3d"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <target dev="tap11701be6-55"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/console.log" append="off"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <video>
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </video>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:01:33 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:01:33 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:01:33 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:01:33 compute-1 nova_compute[230518]: </domain>
Oct 02 13:01:33 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.198 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Preparing to wait for external event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.198 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.199 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.199 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.200 2 DEBUG nova.virt.libvirt.vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:01:23Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.200 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.201 2 DEBUG nova.network.os_vif_util [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.201 2 DEBUG os_vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.203 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.208 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap11701be6-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.209 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap11701be6-55, col_values=(('external_ids', {'iface-id': '11701be6-55ae-458a-a3a0-55a0c467ef46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:f2:3d', 'vm-uuid': '0386b301-1cd5-430d-8fc1-691b6bc3ad47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:33 compute-1 NetworkManager[44960]: <info>  [1759410093.2125] manager: (tap11701be6-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.219 2 INFO os_vif [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55')
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.320 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.320 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.321 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:55:f2:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.322 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Using config drive
Oct 02 13:01:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.356 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.809 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Creating config drive at /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.820 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wckt7u5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:33 compute-1 nova_compute[230518]: 2025-10-02 13:01:33.987 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wckt7u5" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:34 compute-1 ceph-mon[80926]: pgmap v2654: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 483 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct 02 13:01:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/204003872' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/524146024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.042 2 DEBUG nova.storage.rbd_utils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.050 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.286 2 DEBUG oslo_concurrency.processutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config 0386b301-1cd5-430d-8fc1-691b6bc3ad47_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.287 2 INFO nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deleting local config drive /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47/disk.config because it was imported into RBD.
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.3307] manager: (tap11701be6-55): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Oct 02 13:01:34 compute-1 kernel: tap11701be6-55: entered promiscuous mode
Oct 02 13:01:34 compute-1 ovn_controller[129257]: 2025-10-02T13:01:34Z|00709|binding|INFO|Claiming lport 11701be6-55ae-458a-a3a0-55a0c467ef46 for this chassis.
Oct 02 13:01:34 compute-1 ovn_controller[129257]: 2025-10-02T13:01:34Z|00710|binding|INFO|11701be6-55ae-458a-a3a0-55a0c467ef46: Claiming fa:16:3e:55:f2:3d 10.100.0.13
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.343 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:f2:3d 10.100.0.13'], port_security=['fa:16:3e:55:f2:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0386b301-1cd5-430d-8fc1-691b6bc3ad47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb73c46-dc90-4a9e-bf71-ad13a1f027f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19ea678-25e4-4fc8-8150-d09743da6d71, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=11701be6-55ae-458a-a3a0-55a0c467ef46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.344 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 11701be6-55ae-458a-a3a0-55a0c467ef46 in datapath 1c43df3b-870e-48e7-aa17-655e3c34fe90 bound to our chassis
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.346 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c43df3b-870e-48e7-aa17-655e3c34fe90
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.357 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[357b668d-dc01-48a3-bae3-329e2f99ff25]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.358 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c43df3b-81 in ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.359 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c43df3b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7121b895-5685-42af-bda6-6da8b4f3ab4f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.360 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ca27b-9f0b-4c56-b669-a5ddcc845994]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 systemd-machined[188247]: New machine qemu-83-instance-000000ab.
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.372 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2b06c3-7541-4c49-a902-b29ab09ad8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.396 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60624069-a6ed-4250-a349-a3e976764eb1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 systemd[1]: Started Virtual Machine qemu-83-instance-000000ab.
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 ovn_controller[129257]: 2025-10-02T13:01:34Z|00711|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 ovn-installed in OVS
Oct 02 13:01:34 compute-1 ovn_controller[129257]: 2025-10-02T13:01:34Z|00712|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 up in Southbound
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 systemd-udevd[296812]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:01:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.4240] device (tap11701be6-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.4249] device (tap11701be6-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.429 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[6432d229-c028-455e-9c69-899b4f695847]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.434 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[733ddcf7-48f6-42f6-8d14-7870df2d5354]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.4357] manager: (tap1c43df3b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.470 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[71ccc93d-0aef-4c51-820d-88b056cce331]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.473 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[64f34eed-2af5-4210-89bb-7289b8a18a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.4977] device (tap1c43df3b-80): carrier: link connected
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.502 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f65daf90-722c-472c-b5db-ccbf577d7a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.518 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2353ed9f-b23d-4464-a53d-a118618faf6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c43df3b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:10:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795805, 'reachable_time': 44445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296841, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f4152c25-c4c3-4426-ab5c-472309b96cd4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:1092'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795805, 'tstamp': 795805}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296842, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.546 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.547 2 DEBUG nova.network.neutron [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.545 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e829abc5-4e97-449e-b95a-c43f8f059d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c43df3b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:10:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795805, 'reachable_time': 44445, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296843, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.571 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[619c8fbf-3831-43a9-aee3-da8e02ced3bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.572 2 DEBUG oslo_concurrency.lockutils [req-9bcba9df-e0e1-4e34-9950-e65702ba2259 req-b08f0e07-f875-4b96-9197-fa7dc323a575 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.622 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[85b7d34e-d068-4c87-8210-b6889e2daca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.623 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c43df3b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c43df3b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 kernel: tap1c43df3b-80: entered promiscuous mode
Oct 02 13:01:34 compute-1 NetworkManager[44960]: <info>  [1759410094.6282] manager: (tap1c43df3b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.629 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c43df3b-80, col_values=(('external_ids', {'iface-id': '07f078ea-d50b-4ace-9ee5-5241ea5b3915'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:01:34 compute-1 ovn_controller[129257]: 2025-10-02T13:01:34Z|00713|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.650 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.652 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d819f3b4-ac2f-4516-a531-18426602ca4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.654 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-1c43df3b-870e-48e7-aa17-655e3c34fe90
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/1c43df3b-870e-48e7-aa17-655e3c34fe90.pid.haproxy
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 1c43df3b-870e-48e7-aa17-655e3c34fe90
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:01:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:01:34.656 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'env', 'PROCESS_TAG=haproxy-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c43df3b-870e-48e7-aa17-655e3c34fe90.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.858 2 DEBUG nova.compute.manager [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.859 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.859 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.860 2 DEBUG oslo_concurrency.lockutils [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:34 compute-1 nova_compute[230518]: 2025-10-02 13:01:34.860 2 DEBUG nova.compute.manager [req-905b1ae5-32b7-4458-becb-0e17644a960b req-5410e781-daba-4d69-aa02-3b35df72ebca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Processing event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:01:35 compute-1 podman[296891]: 2025-10-02 13:01:35.05344999 +0000 UTC m=+0.067756576 container create fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:01:35 compute-1 systemd[1]: Started libpod-conmon-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope.
Oct 02 13:01:35 compute-1 podman[296891]: 2025-10-02 13:01:35.016568524 +0000 UTC m=+0.030875140 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:01:35 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:01:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad5abcd8d66535572ebd9243f79c67a937f00fbde504f937bccc9e255f563f50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:01:35 compute-1 ceph-mon[80926]: pgmap v2655: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Oct 02 13:01:35 compute-1 podman[296891]: 2025-10-02 13:01:35.144932452 +0000 UTC m=+0.159239058 container init fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:01:35 compute-1 podman[296891]: 2025-10-02 13:01:35.150488984 +0000 UTC m=+0.164795580 container start fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 13:01:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:35 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : New worker (296937) forked
Oct 02 13:01:35 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : Loading success.
Oct 02 13:01:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:35.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.548 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.548259, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.549 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Started (Lifecycle Event)
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.550 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.554 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.562 2 INFO nova.virt.libvirt.driver [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance spawned successfully.
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.562 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.623 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.624 2 DEBUG nova.virt.libvirt.driver [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.628 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.630 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.695 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.696 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.5505567, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.696 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Paused (Lifecycle Event)
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.752 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.755 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410095.552723, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.756 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Resumed (Lifecycle Event)
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.764 2 INFO nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 12.41 seconds to spawn the instance on the hypervisor.
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.764 2 DEBUG nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.802 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.807 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.866 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.883 2 INFO nova.compute.manager [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 14.01 seconds to build instance.
Oct 02 13:01:35 compute-1 nova_compute[230518]: 2025-10-02 13:01:35.921 2 DEBUG oslo_concurrency.lockutils [None req-26c00544-6d00-4a6d-8a1d-f8f1e1527cfd 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.238 2 DEBUG nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG oslo_concurrency.lockutils [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.239 2 DEBUG nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:01:37 compute-1 nova_compute[230518]: 2025-10-02 13:01:37.240 2 WARNING nova.compute.manager [req-5029fbff-5468-4527-9500-0d830eb9703f req-ca51d530-5270-48e1-9cef-53d20020a1a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received unexpected event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with vm_state active and task_state None.
Oct 02 13:01:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:37 compute-1 sudo[296946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:01:37 compute-1 sudo[296946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:37 compute-1 sudo[296946]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:37 compute-1 sudo[296971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:01:37 compute-1 sudo[296971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:01:37 compute-1 sudo[296971]: pam_unix(sudo:session): session closed for user root
Oct 02 13:01:37 compute-1 ceph-mon[80926]: pgmap v2656: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 58 op/s
Oct 02 13:01:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:01:38 compute-1 nova_compute[230518]: 2025-10-02 13:01:38.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:39 compute-1 ceph-mon[80926]: pgmap v2657: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.4 MiB/s wr, 75 op/s
Oct 02 13:01:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:39.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:39 compute-1 nova_compute[230518]: 2025-10-02 13:01:39.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:39 compute-1 podman[296998]: 2025-10-02 13:01:39.807668668 +0000 UTC m=+0.055690121 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 02 13:01:39 compute-1 podman[296997]: 2025-10-02 13:01:39.842009475 +0000 UTC m=+0.081913066 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:01:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2497424559' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3600982715' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:41 compute-1 NetworkManager[44960]: <info>  [1759410101.0824] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Oct 02 13:01:41 compute-1 NetworkManager[44960]: <info>  [1759410101.0834] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Oct 02 13:01:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:41 compute-1 ovn_controller[129257]: 2025-10-02T13:01:41Z|00714|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct 02 13:01:41 compute-1 ovn_controller[129257]: 2025-10-02T13:01:41Z|00715|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:41.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:41 compute-1 ceph-mon[80926]: pgmap v2658: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.713 2 DEBUG nova.compute.manager [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.713 2 DEBUG nova.compute.manager [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.714 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.714 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.715 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:01:41 compute-1 nova_compute[230518]: 2025-10-02 13:01:41.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:43 compute-1 nova_compute[230518]: 2025-10-02 13:01:43.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:43 compute-1 ceph-mon[80926]: pgmap v2659: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct 02 13:01:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1722812126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:44 compute-1 nova_compute[230518]: 2025-10-02 13:01:44.123 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:01:44 compute-1 nova_compute[230518]: 2025-10-02 13:01:44.123 2 DEBUG nova.network.neutron [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:01:44 compute-1 nova_compute[230518]: 2025-10-02 13:01:44.159 2 DEBUG oslo_concurrency.lockutils [req-04ba0d69-bb5c-4332-a5db-831f7f351547 req-8921b300-5170-46c1-bc84-ff4c4f377e6f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:01:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:44 compute-1 nova_compute[230518]: 2025-10-02 13:01:44.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 13:01:45 compute-1 ceph-mon[80926]: pgmap v2660: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Oct 02 13:01:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:45.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:47.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:47 compute-1 ceph-mon[80926]: pgmap v2661: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Oct 02 13:01:48 compute-1 nova_compute[230518]: 2025-10-02 13:01:48.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2515102607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:01:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:49.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:01:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:49.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:49 compute-1 nova_compute[230518]: 2025-10-02 13:01:49.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:49 compute-1 ceph-mon[80926]: pgmap v2662: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 684 KiB/s wr, 126 op/s
Oct 02 13:01:50 compute-1 podman[297043]: 2025-10-02 13:01:50.79957771 +0000 UTC m=+0.057685194 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 02 13:01:50 compute-1 podman[297044]: 2025-10-02 13:01:50.809666343 +0000 UTC m=+0.063489674 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:01:51 compute-1 ceph-mon[80926]: pgmap v2663: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 129 op/s
Oct 02 13:01:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:51.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:01:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:01:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:51.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:51 compute-1 ovn_controller[129257]: 2025-10-02T13:01:51Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:f2:3d 10.100.0.13
Oct 02 13:01:51 compute-1 ovn_controller[129257]: 2025-10-02T13:01:51Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:f2:3d 10.100.0.13
Oct 02 13:01:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2618018890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:53.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:53 compute-1 nova_compute[230518]: 2025-10-02 13:01:53.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:53 compute-1 ceph-mon[80926]: pgmap v2664: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 190 op/s
Oct 02 13:01:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2902355469' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:54 compute-1 nova_compute[230518]: 2025-10-02 13:01:54.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1270528742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3628328142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:01:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:55.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:55 compute-1 ceph-mon[80926]: pgmap v2665: 305 pgs: 305 active+clean; 363 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct 02 13:01:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1595355531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:01:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:01:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3214295918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:01:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:57.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:57.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:58 compute-1 ceph-mon[80926]: pgmap v2666: 305 pgs: 305 active+clean; 389 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct 02 13:01:58 compute-1 nova_compute[230518]: 2025-10-02 13:01:58.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:01:58 compute-1 nova_compute[230518]: 2025-10-02 13:01:58.521 2 INFO nova.compute.manager [None req-5fbcd088-f2ff-4481-8577-112934977633 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Get console output
Oct 02 13:01:58 compute-1 nova_compute[230518]: 2025-10-02 13:01:58.526 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:01:59 compute-1 ceph-mon[80926]: pgmap v2667: 305 pgs: 305 active+clean; 400 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 211 op/s
Oct 02 13:01:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:01:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:01:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:01:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:01:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:01:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:01:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:01:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:01:59 compute-1 nova_compute[230518]: 2025-10-02 13:01:59.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:00 compute-1 ovn_controller[129257]: 2025-10-02T13:02:00Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:f2:3d 10.100.0.13
Oct 02 13:02:00 compute-1 ovn_controller[129257]: 2025-10-02T13:02:00Z|00716|binding|INFO|Releasing lport 07f078ea-d50b-4ace-9ee5-5241ea5b3915 from this chassis (sb_readonly=0)
Oct 02 13:02:00 compute-1 ovn_controller[129257]: 2025-10-02T13:02:00Z|00717|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:02:00 compute-1 nova_compute[230518]: 2025-10-02 13:02:00.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:02:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:02:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:01.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:01 compute-1 ceph-mon[80926]: pgmap v2668: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 260 op/s
Oct 02 13:02:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4099713465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3896671779' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:03 compute-1 nova_compute[230518]: 2025-10-02 13:02:03.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:03 compute-1 ceph-mon[80926]: pgmap v2669: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.2 MiB/s wr, 283 op/s
Oct 02 13:02:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:03.945 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:03 compute-1 nova_compute[230518]: 2025-10-02 13:02:03.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:03 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:03.947 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:02:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG nova.compute.manager [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG nova.compute.manager [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing instance network info cache due to event network-changed-11701be6-55ae-458a-a3a0-55a0c467ef46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.706 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.707 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Refreshing network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.811 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.812 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.813 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.814 2 INFO nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Terminating instance
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.814 2 DEBUG nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:02:04 compute-1 kernel: tap11701be6-55 (unregistering): left promiscuous mode
Oct 02 13:02:04 compute-1 NetworkManager[44960]: <info>  [1759410124.9532] device (tap11701be6-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-1 ovn_controller[129257]: 2025-10-02T13:02:04Z|00718|binding|INFO|Releasing lport 11701be6-55ae-458a-a3a0-55a0c467ef46 from this chassis (sb_readonly=0)
Oct 02 13:02:04 compute-1 ovn_controller[129257]: 2025-10-02T13:02:04Z|00719|binding|INFO|Setting lport 11701be6-55ae-458a-a3a0-55a0c467ef46 down in Southbound
Oct 02 13:02:04 compute-1 ovn_controller[129257]: 2025-10-02T13:02:04Z|00720|binding|INFO|Removing iface tap11701be6-55 ovn-installed in OVS
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.978 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:f2:3d 10.100.0.13'], port_security=['fa:16:3e:55:f2:3d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0386b301-1cd5-430d-8fc1-691b6bc3ad47', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb73c46-dc90-4a9e-bf71-ad13a1f027f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f19ea678-25e4-4fc8-8150-d09743da6d71, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=11701be6-55ae-458a-a3a0-55a0c467ef46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.980 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 11701be6-55ae-458a-a3a0-55a0c467ef46 in datapath 1c43df3b-870e-48e7-aa17-655e3c34fe90 unbound from our chassis
Oct 02 13:02:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.982 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c43df3b-870e-48e7-aa17-655e3c34fe90, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:02:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.983 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd74082-6c9a-43cb-a56f-557a690067c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:04.984 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 namespace which is not needed anymore
Oct 02 13:02:04 compute-1 nova_compute[230518]: 2025-10-02 13:02:04.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Oct 02 13:02:05 compute-1 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000ab.scope: Consumed 14.390s CPU time.
Oct 02 13:02:05 compute-1 systemd-machined[188247]: Machine qemu-83-instance-000000ab terminated.
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : haproxy version is 2.8.14-c23fe91
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [NOTICE]   (296935) : path to executable is /usr/sbin/haproxy
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : Exiting Master process...
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : Exiting Master process...
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [ALERT]    (296935) : Current worker (296937) exited with code 143 (Terminated)
Oct 02 13:02:05 compute-1 neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90[296931]: [WARNING]  (296935) : All workers exited. Exiting... (0)
Oct 02 13:02:05 compute-1 systemd[1]: libpod-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope: Deactivated successfully.
Oct 02 13:02:05 compute-1 podman[297108]: 2025-10-02 13:02:05.117397094 +0000 UTC m=+0.047802446 container died fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:02:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398-userdata-shm.mount: Deactivated successfully.
Oct 02 13:02:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-ad5abcd8d66535572ebd9243f79c67a937f00fbde504f937bccc9e255f563f50-merged.mount: Deactivated successfully.
Oct 02 13:02:05 compute-1 podman[297108]: 2025-10-02 13:02:05.163201666 +0000 UTC m=+0.093606988 container cleanup fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:02:05 compute-1 systemd[1]: libpod-conmon-fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398.scope: Deactivated successfully.
Oct 02 13:02:05 compute-1 ceph-mon[80926]: pgmap v2670: 305 pgs: 305 active+clean; 440 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Oct 02 13:02:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:05 compute-1 podman[297139]: 2025-10-02 13:02:05.222817599 +0000 UTC m=+0.038908230 container remove fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c1cdc68c-091b-41da-b1a5-2afb81320976]: (4, ('Thu Oct  2 01:02:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 (fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398)\nfac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398\nThu Oct  2 01:02:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 (fac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398)\nfac6bc95d0cfa546473de98e355220d13c32daa11022d5b6abed6541e216d398\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b932d29-596a-424d-b57c-8cc33a6c1c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.231 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c43df3b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 kernel: tap1c43df3b-80: left promiscuous mode
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.254 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd06113-eb8d-45bb-a76d-91e86ff04fe9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.254 2 INFO nova.virt.libvirt.driver [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Instance destroyed successfully.
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.254 2 DEBUG nova.objects.instance [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 0386b301-1cd5-430d-8fc1-691b6bc3ad47 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.272 2 DEBUG nova.virt.libvirt.vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-597885113',display_name='tempest-TestNetworkBasicOps-server-597885113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-597885113',id=171,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3FuXpBgVbHgGBq4egCNtz2a9jD/YtFdw9deNrsDMTopNMA3CO5MDdqo+hovI4wpSsKj6W1YZIokPp0dIAbUjy55TePr3Cxog4gFZ9e9nz0xlxf36KAvzLNH0NRnqtk4g==',key_name='tempest-TestNetworkBasicOps-216720718',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:01:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-c0lf99pp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:01:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=0386b301-1cd5-430d-8fc1-691b6bc3ad47,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.272 2 DEBUG nova.network.os_vif_util [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.273 2 DEBUG nova.network.os_vif_util [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.273 2 DEBUG os_vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap11701be6-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.280 2 INFO os_vif [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:f2:3d,bridge_name='br-int',has_traffic_filtering=True,id=11701be6-55ae-458a-a3a0-55a0c467ef46,network=Network(1c43df3b-870e-48e7-aa17-655e3c34fe90),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap11701be6-55')
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.288 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f0a5ded-4e36-42b4-81fa-84763206b499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.290 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[da7d028d-c3ac-463b-87a2-43c96d6ce84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.307 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c183813-f275-478b-b28b-3cfa3ee7e659]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795797, 'reachable_time': 38195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297179, 'error': None, 'target': 'ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 systemd[1]: run-netns-ovnmeta\x2d1c43df3b\x2d870e\x2d48e7\x2daa17\x2d655e3c34fe90.mount: Deactivated successfully.
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.312 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c43df3b-870e-48e7-aa17-655e3c34fe90 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:02:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:05.312 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3edd0b1d-e4e9-4b86-9956-74d5fc65d0de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:02:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:02:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:02:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.517 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.518 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.519 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.519 2 DEBUG oslo_concurrency.lockutils [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.520 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:05 compute-1 nova_compute[230518]: 2025-10-02 13:02:05.520 2 DEBUG nova.compute.manager [req-badf65b7-076c-439a-b554-fe0861c65f27 req-7a7c5a26-447b-443a-8b63-4ba7de3ea4a1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-unplugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:02:06 compute-1 nova_compute[230518]: 2025-10-02 13:02:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:06 compute-1 nova_compute[230518]: 2025-10-02 13:02:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:02:06 compute-1 nova_compute[230518]: 2025-10-02 13:02:06.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:02:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:02:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2196494992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:02:06 compute-1 nova_compute[230518]: 2025-10-02 13:02:06.888 2 INFO nova.virt.libvirt.driver [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deleting instance files /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47_del
Oct 02 13:02:06 compute-1 nova_compute[230518]: 2025-10-02 13:02:06.889 2 INFO nova.virt.libvirt.driver [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deletion of /var/lib/nova/instances/0386b301-1cd5-430d-8fc1-691b6bc3ad47_del complete
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.091 2 INFO nova.compute.manager [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 2.28 seconds to destroy the instance on the hypervisor.
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG oslo.service.loopingcall [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.092 2 DEBUG nova.network.neutron [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.188 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updated VIF entry in instance network info cache for port 11701be6-55ae-458a-a3a0-55a0c467ef46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.188 2 DEBUG nova.network.neutron [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [{"id": "11701be6-55ae-458a-a3a0-55a0c467ef46", "address": "fa:16:3e:55:f2:3d", "network": {"id": "1c43df3b-870e-48e7-aa17-655e3c34fe90", "bridge": "br-int", "label": "tempest-network-smoke--1383089846", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap11701be6-55", "ovs_interfaceid": "11701be6-55ae-458a-a3a0-55a0c467ef46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:07 compute-1 nova_compute[230518]: 2025-10-02 13:02:07.255 2 DEBUG oslo_concurrency.lockutils [req-a35d701a-2c4f-4862-93d2-3def98d0e85d req-3ffc2f8d-c5be-4a32-831d-9e4cab7ed1a8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-0386b301-1cd5-430d-8fc1-691b6bc3ad47" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:07.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:07 compute-1 ceph-mon[80926]: pgmap v2671: 305 pgs: 305 active+clean; 448 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.0 MiB/s wr, 265 op/s
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.206 2 DEBUG nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.207 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.207 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.208 2 DEBUG oslo_concurrency.lockutils [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.208 2 DEBUG nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] No waiting events found dispatching network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.209 2 WARNING nova.compute.manager [req-97b12b65-c1d4-4fd5-8210-6124adb21f96 req-7c4f6111-de5b-4b06-b663-ef8bde053932 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received unexpected event network-vif-plugged-11701be6-55ae-458a-a3a0-55a0c467ef46 for instance with vm_state active and task_state deleting.
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.393 2 DEBUG nova.network.neutron [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.460 2 INFO nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Took 1.37 seconds to deallocate network for instance.
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.520 2 DEBUG nova.compute.manager [req-986a77e2-bfd5-4843-be87-665f42fb864f req-dc42d81b-5684-40a5-9772-dc2b8af22bb4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Received event network-vif-deleted-11701be6-55ae-458a-a3a0-55a0c467ef46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.551 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.551 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:08 compute-1 nova_compute[230518]: 2025-10-02 13:02:08.641 2 DEBUG oslo_concurrency.processutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:08 compute-1 ceph-mon[80926]: pgmap v2672: 305 pgs: 305 active+clean; 428 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Oct 02 13:02:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1561182828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.061 2 DEBUG oslo_concurrency.processutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.067 2 DEBUG nova.compute.provider_tree [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.087 2 DEBUG nova.scheduler.client.report [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.120 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.148 2 INFO nova.scheduler.client.report [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 0386b301-1cd5-430d-8fc1-691b6bc3ad47
Oct 02 13:02:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.243 2 DEBUG oslo_concurrency.lockutils [None req-362c8c71-19d7-43ff-9948-12bad0a6c880 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "0386b301-1cd5-430d-8fc1-691b6bc3ad47" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:09.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:09 compute-1 nova_compute[230518]: 2025-10-02 13:02:09.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1561182828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:10 compute-1 nova_compute[230518]: 2025-10-02 13:02:10.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:10 compute-1 podman[297207]: 2025-10-02 13:02:10.834676217 +0000 UTC m=+0.076237510 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:10 compute-1 podman[297206]: 2025-10-02 13:02:10.868347653 +0000 UTC m=+0.115089626 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:02:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:11.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:11 compute-1 ceph-mon[80926]: pgmap v2673: 305 pgs: 305 active+clean; 376 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 261 op/s
Oct 02 13:02:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3695471513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:11 compute-1 nova_compute[230518]: 2025-10-02 13:02:11.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:11.949 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:12 compute-1 ovn_controller[129257]: 2025-10-02T13:02:12Z|00721|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:02:12 compute-1 nova_compute[230518]: 2025-10-02 13:02:12.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:12 compute-1 ovn_controller[129257]: 2025-10-02T13:02:12Z|00722|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:02:12 compute-1 nova_compute[230518]: 2025-10-02 13:02:12.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:13 compute-1 ceph-mon[80926]: pgmap v2674: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 248 op/s
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.074 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.097 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.098 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/513885224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.586 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.679 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.680 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:02:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/513885224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.895 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4105MB free_disk=20.861236572265625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:14 compute-1 nova_compute[230518]: 2025-10-02 13:02:14.897 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.067 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.068 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.069 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.138 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:15.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3848085517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.614 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.621 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.646 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.693 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:02:15 compute-1 nova_compute[230518]: 2025-10-02 13:02:15.694 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:15 compute-1 ceph-mon[80926]: pgmap v2675: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 180 op/s
Oct 02 13:02:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3848085517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:17 compute-1 ceph-mon[80926]: pgmap v2676: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Oct 02 13:02:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1321131216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:17.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3731738114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:02:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:19.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:02:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:19 compute-1 ceph-mon[80926]: pgmap v2677: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 791 KiB/s rd, 2.2 MiB/s wr, 173 op/s
Oct 02 13:02:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/601393838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:19 compute-1 nova_compute[230518]: 2025-10-02 13:02:19.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:19 compute-1 nova_compute[230518]: 2025-10-02 13:02:19.673 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:19 compute-1 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:19 compute-1 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:19 compute-1 nova_compute[230518]: 2025-10-02 13:02:19.674 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:02:20 compute-1 nova_compute[230518]: 2025-10-02 13:02:20.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:20 compute-1 nova_compute[230518]: 2025-10-02 13:02:20.253 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410125.2516162, 0386b301-1cd5-430d-8fc1-691b6bc3ad47 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:20 compute-1 nova_compute[230518]: 2025-10-02 13:02:20.253 2 INFO nova.compute.manager [-] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] VM Stopped (Lifecycle Event)
Oct 02 13:02:20 compute-1 nova_compute[230518]: 2025-10-02 13:02:20.286 2 DEBUG nova.compute.manager [None req-5cd2c949-46d7-4212-ae38-168aa0759253 - - - - - -] [instance: 0386b301-1cd5-430d-8fc1-691b6bc3ad47] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:20 compute-1 nova_compute[230518]: 2025-10-02 13:02:20.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999972s ======
Oct 02 13:02:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:21.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999972s
Oct 02 13:02:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:02:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:02:21 compute-1 ceph-mon[80926]: pgmap v2678: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.2 MiB/s wr, 129 op/s
Oct 02 13:02:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/537887399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:21 compute-1 podman[297295]: 2025-10-02 13:02:21.799634521 +0000 UTC m=+0.053952278 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 13:02:21 compute-1 podman[297296]: 2025-10-02 13:02:21.801973834 +0000 UTC m=+0.048706784 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:02:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2053345908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2643813130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:23 compute-1 nova_compute[230518]: 2025-10-02 13:02:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:23 compute-1 nova_compute[230518]: 2025-10-02 13:02:23.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:23.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:23 compute-1 ceph-mon[80926]: pgmap v2679: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 1.9 MiB/s wr, 111 op/s
Oct 02 13:02:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2704903679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:24 compute-1 nova_compute[230518]: 2025-10-02 13:02:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:24 compute-1 nova_compute[230518]: 2025-10-02 13:02:24.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:25 compute-1 ceph-mon[80926]: pgmap v2680: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.2 MiB/s wr, 71 op/s
Oct 02 13:02:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:25 compute-1 nova_compute[230518]: 2025-10-02 13:02:25.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:25.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.959 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:27.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:27 compute-1 ceph-mon[80926]: pgmap v2681: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 1.9 MiB/s wr, 86 op/s
Oct 02 13:02:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:27.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.680 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:02:28 compute-1 nova_compute[230518]: 2025-10-02 13:02:28.681 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:29 compute-1 ceph-mon[80926]: pgmap v2682: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 02 13:02:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:29.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:29 compute-1 nova_compute[230518]: 2025-10-02 13:02:29.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:30 compute-1 nova_compute[230518]: 2025-10-02 13:02:30.053 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [{"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:30 compute-1 nova_compute[230518]: 2025-10-02 13:02:30.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-7c31bb0f-22b5-42a4-9b38-8ad3daac689f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:30 compute-1 nova_compute[230518]: 2025-10-02 13:02:30.080 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:02:30 compute-1 nova_compute[230518]: 2025-10-02 13:02:30.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:31 compute-1 nova_compute[230518]: 2025-10-02 13:02:31.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:02:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:31.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:31.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:31 compute-1 ceph-mon[80926]: pgmap v2683: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1114902513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/237972929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:33.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:33.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:33 compute-1 ceph-mon[80926]: pgmap v2684: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.239 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.240 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.263 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.376 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.376 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.388 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.389 2 INFO nova.compute.claims [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:02:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.548 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:34 compute-1 nova_compute[230518]: 2025-10-02 13:02:34.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3707887180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.011 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.019 2 DEBUG nova.compute.provider_tree [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.056 2 DEBUG nova.scheduler.client.report [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.092 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.093 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.141 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.142 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.169 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.191 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:02:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:35.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.278 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.280 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.280 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating image(s)
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.308 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.332 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.362 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.366 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.433 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.434 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.435 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.436 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:35.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.465 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.469 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:35 compute-1 nova_compute[230518]: 2025-10-02 13:02:35.670 2 DEBUG nova.policy [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:02:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 e351: 3 total, 3 up, 3 in
Oct 02 13:02:36 compute-1 ceph-mon[80926]: pgmap v2685: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct 02 13:02:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3707887180' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:36 compute-1 nova_compute[230518]: 2025-10-02 13:02:36.814 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:36 compute-1 nova_compute[230518]: 2025-10-02 13:02:36.886 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.137 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Successfully created port: 075c87dd-2b98-4364-9955-b21fcbcd5b47 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:02:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.304 2 DEBUG nova.objects.instance [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.324 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.324 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Ensure instance console log exists: /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:37 compute-1 nova_compute[230518]: 2025-10-02 13:02:37.325 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:37.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:37 compute-1 ceph-mon[80926]: osdmap e351: 3 total, 3 up, 3 in
Oct 02 13:02:37 compute-1 ceph-mon[80926]: pgmap v2687: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 26 op/s
Oct 02 13:02:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3209806150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:37 compute-1 sudo[297520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:37 compute-1 sudo[297520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-1 sudo[297520]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-1 sudo[297545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:37 compute-1 sudo[297545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-1 sudo[297545]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-1 sudo[297570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:37 compute-1 sudo[297570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:37 compute-1 sudo[297570]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:37 compute-1 sudo[297595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:02:37 compute-1 sudo[297595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:38 compute-1 podman[297691]: 2025-10-02 13:02:38.480938727 +0000 UTC m=+0.080822983 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:02:38 compute-1 podman[297691]: 2025-10-02 13:02:38.569850889 +0000 UTC m=+0.169735165 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 02 13:02:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1729713660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:39 compute-1 sudo[297595]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.080 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Successfully updated port: 075c87dd-2b98-4364-9955-b21fcbcd5b47 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.103 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.233 2 DEBUG nova.compute.manager [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.234 2 DEBUG nova.compute.manager [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.234 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:39 compute-1 sudo[297816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:39 compute-1 sudo[297816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-1 sudo[297816]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:39.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:39 compute-1 sudo[297841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:02:39 compute-1 sudo[297841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-1 sudo[297841]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-1 sudo[297866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:39 compute-1 sudo[297866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-1 sudo[297866]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:39 compute-1 sudo[297891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:02:39 compute-1 sudo[297891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.669 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:02:39 compute-1 nova_compute[230518]: 2025-10-02 13:02:39.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:39 compute-1 ceph-mon[80926]: pgmap v2688: 305 pgs: 305 active+clean; 340 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 469 KiB/s wr, 52 op/s
Oct 02 13:02:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:40 compute-1 sudo[297891]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:40 compute-1 nova_compute[230518]: 2025-10-02 13:02:40.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:02:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:02:40 compute-1 nova_compute[230518]: 2025-10-02 13:02:40.955 2 DEBUG nova.network.neutron [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.001 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.001 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance network_info: |[{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.002 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.002 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.008 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start _get_guest_xml network_info=[{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.015 2 WARNING nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.022 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.023 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.034 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.035 2 DEBUG nova.virt.libvirt.host [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.037 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.037 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.038 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.038 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.039 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.040 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.041 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.041 2 DEBUG nova.virt.hardware [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.046 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:41.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:02:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2609484403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.513 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.537 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:41 compute-1 nova_compute[230518]: 2025-10-02 13:02:41.541 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:41 compute-1 podman[298008]: 2025-10-02 13:02:41.828573541 +0000 UTC m=+0.062967337 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 13:02:41 compute-1 podman[298007]: 2025-10-02 13:02:41.842169264 +0000 UTC m=+0.094318631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:02:41 compute-1 ceph-mon[80926]: pgmap v2689: 305 pgs: 305 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 760 KiB/s wr, 83 op/s
Oct 02 13:02:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2609484403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:02:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130314900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.037 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.040 2 DEBUG nova.virt.libvirt.vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.041 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.043 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.046 2 DEBUG nova.objects.instance [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.065 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <uuid>198c2dd4-f103-4bba-9fc3-9e41f44e465e</uuid>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <name>instance-000000b0</name>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-511078647</nova:name>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:02:41</nova:creationTime>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <nova:port uuid="075c87dd-2b98-4364-9955-b21fcbcd5b47">
Oct 02 13:02:42 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <system>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="serial">198c2dd4-f103-4bba-9fc3-9e41f44e465e</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="uuid">198c2dd4-f103-4bba-9fc3-9e41f44e465e</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </system>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <os>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </os>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <features>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </features>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk">
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </source>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config">
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </source>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:02:42 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:d9:79:9d"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <target dev="tap075c87dd-2b"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/console.log" append="off"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <video>
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </video>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:02:42 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:02:42 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:02:42 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:02:42 compute-1 nova_compute[230518]: </domain>
Oct 02 13:02:42 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.068 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Preparing to wait for external event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.069 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.070 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.070 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.072 2 DEBUG nova.virt.libvirt.vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:35Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.072 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.073 2 DEBUG nova.network.os_vif_util [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.074 2 DEBUG os_vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap075c87dd-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap075c87dd-2b, col_values=(('external_ids', {'iface-id': '075c87dd-2b98-4364-9955-b21fcbcd5b47', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:79:9d', 'vm-uuid': '198c2dd4-f103-4bba-9fc3-9e41f44e465e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:42 compute-1 NetworkManager[44960]: <info>  [1759410162.0866] manager: (tap075c87dd-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.111 2 INFO os_vif [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b')
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.169 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.170 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.170 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:d9:79:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.171 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Using config drive
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.207 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.865 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.866 2 DEBUG nova.network.neutron [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.885 2 DEBUG oslo_concurrency.lockutils [req-09e327da-f792-4e50-85f5-8af50271f8a8 req-86cb0572-c5a1-4568-a0de-8cdabcf52d7b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.891 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Creating config drive at /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config
Oct 02 13:02:42 compute-1 nova_compute[230518]: 2025-10-02 13:02:42.900 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4q5neb2n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1130314900' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:43 compute-1 ceph-mon[80926]: pgmap v2690: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:43 compute-1 nova_compute[230518]: 2025-10-02 13:02:43.049 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4q5neb2n" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:43 compute-1 nova_compute[230518]: 2025-10-02 13:02:43.101 2 DEBUG nova.storage.rbd_utils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:43 compute-1 nova_compute[230518]: 2025-10-02 13:02:43.106 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:43.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.006 2 DEBUG oslo_concurrency.processutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config 198c2dd4-f103-4bba-9fc3-9e41f44e465e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.899s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.007 2 INFO nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deleting local config drive /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e/disk.config because it was imported into RBD.
Oct 02 13:02:44 compute-1 kernel: tap075c87dd-2b: entered promiscuous mode
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.0731] manager: (tap075c87dd-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 ovn_controller[129257]: 2025-10-02T13:02:44Z|00723|binding|INFO|Claiming lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 for this chassis.
Oct 02 13:02:44 compute-1 ovn_controller[129257]: 2025-10-02T13:02:44Z|00724|binding|INFO|075c87dd-2b98-4364-9955-b21fcbcd5b47: Claiming fa:16:3e:d9:79:9d 10.100.0.10
Oct 02 13:02:44 compute-1 systemd-udevd[298122]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.128 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:79:9d 10.100.0.10'], port_security=['fa:16:3e:d9:79:9d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '198c2dd4-f103-4bba-9fc3-9e41f44e465e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9eb88c13-ce55-413a-bc29-2cb1397ffc60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=075c87dd-2b98-4364-9955-b21fcbcd5b47) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.130 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 075c87dd-2b98-4364-9955-b21fcbcd5b47 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c bound to our chassis
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.132 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.145 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d9237b27-f22d-4b74-99cf-3968ec8dd816]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.147 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0d2f6793-31 in ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.149 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0d2f6793-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.149 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfb9de0-3922-4626-959c-fc29d8c51df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 systemd-machined[188247]: New machine qemu-84-instance-000000b0.
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.1516] device (tap075c87dd-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.151 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de44aae5-7530-4ea1-8e33-359ff2570208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.1527] device (tap075c87dd-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.170 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2438a633-fe72-4827-badd-ad259ba0e567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 systemd[1]: Started Virtual Machine qemu-84-instance-000000b0.
Oct 02 13:02:44 compute-1 ovn_controller[129257]: 2025-10-02T13:02:44Z|00725|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 ovn-installed in OVS
Oct 02 13:02:44 compute-1 ovn_controller[129257]: 2025-10-02T13:02:44Z|00726|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 up in Southbound
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.199390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164199442, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 1680, "num_deletes": 252, "total_data_size": 3663000, "memory_usage": 3727520, "flush_reason": "Manual Compaction"}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.204 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdce32e-eb4e-4d48-8f66-9d5a11bae468]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164216492, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 2407025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63149, "largest_seqno": 64824, "table_properties": {"data_size": 2400105, "index_size": 3926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14657, "raw_average_key_size": 19, "raw_value_size": 2385867, "raw_average_value_size": 3114, "num_data_blocks": 172, "num_entries": 766, "num_filter_entries": 766, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410029, "oldest_key_time": 1759410029, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 17161 microseconds, and 10080 cpu microseconds.
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.216550) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 2407025 bytes OK
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.216576) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218227) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218239) EVENT_LOG_v1 {"time_micros": 1759410164218236, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.218253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 3655200, prev total WAL file size 3655200, number of live WAL files 2.
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223424) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323532' seq:72057594037927935, type:22 .. '6B7600353035' seq:0, type:0; will stop at (end)
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(2350KB)], [126(10057KB)]
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164223480, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12705868, "oldest_snapshot_seqno": -1}
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.248 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f88e2e-05c1-41ca-b68c-c7b5888d7b05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.255 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dca2d0b9-f22b-4c65-be5a-42158c974971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.2577] manager: (tap0d2f6793-30): new Veth device (/org/freedesktop/NetworkManager/Devices/340)
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.291 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e24866-7cb5-4be2-8bd1-84fbb99151a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.294 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc14b9e-c137-48d8-a620-2284aab6db14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 8892 keys, 11599262 bytes, temperature: kUnknown
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164305133, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 11599262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11542102, "index_size": 33815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 232032, "raw_average_key_size": 26, "raw_value_size": 11386343, "raw_average_value_size": 1280, "num_data_blocks": 1299, "num_entries": 8892, "num_filter_entries": 8892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410164, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.305366) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 11599262 bytes
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.306533) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 142.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 9.8 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(10.1) write-amplify(4.8) OK, records in: 9415, records dropped: 523 output_compression: NoCompression
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.306548) EVENT_LOG_v1 {"time_micros": 1759410164306541, "job": 80, "event": "compaction_finished", "compaction_time_micros": 81703, "compaction_time_cpu_micros": 36362, "output_level": 6, "num_output_files": 1, "total_output_size": 11599262, "num_input_records": 9415, "num_output_records": 8892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164306962, "job": 80, "event": "table_file_deletion", "file_number": 128}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410164308363, "job": 80, "event": "table_file_deletion", "file_number": 126}
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.223343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:44.310303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.3186] device (tap0d2f6793-30): carrier: link connected
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.324 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[af9c2a8c-2a29-4671-8d15-9a39e31784b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d9c92-4b6c-4720-9aea-44d723ffc1a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298158, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.353 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4601c058-52c4-4f7a-86d6-c3e67baa95f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:aab7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802787, 'tstamp': 802787}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298159, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.366 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad0c0e9-b2c5-4cca-b966-d9e6704eab0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298160, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.397 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee31da40-2694-4940-9198-36328beabd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.459 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5abf83d-5b21-4b5c-ba54-bd7df41fdfd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.460 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.461 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.461 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 kernel: tap0d2f6793-30: entered promiscuous mode
Oct 02 13:02:44 compute-1 NetworkManager[44960]: <info>  [1759410164.4643] manager: (tap0d2f6793-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.469 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:02:44 compute-1 ovn_controller[129257]: 2025-10-02T13:02:44Z|00727|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.502 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1be15e84-c078-41d0-9e89-0769af063c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.505 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/0d2f6793-3f74-40a0-b15c-09282dcbf27c.pid.haproxy
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.506 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:02:44.507 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'env', 'PROCESS_TAG=haproxy-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0d2f6793-3f74-40a0-b15c-09282dcbf27c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.507 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.508 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.509 2 DEBUG oslo_concurrency.lockutils [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.509 2 DEBUG nova.compute.manager [req-50b227d3-c7b6-4200-8a11-0193a3031e84 req-10bd17e2-e7e9-487c-8da6-505b86270277 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Processing event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:02:44 compute-1 nova_compute[230518]: 2025-10-02 13:02:44.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:44 compute-1 podman[298235]: 2025-10-02 13:02:44.880209229 +0000 UTC m=+0.024180843 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:02:45 compute-1 podman[298235]: 2025-10-02 13:02:45.067598211 +0000 UTC m=+0.211569735 container create dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.128 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1279624, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.129 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Started (Lifecycle Event)
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.131 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.135 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.139 2 INFO nova.virt.libvirt.driver [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance spawned successfully.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.139 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.175 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:45 compute-1 systemd[1]: Started libpod-conmon-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.181 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.191 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.192 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.192 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.193 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.193 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.194 2 DEBUG nova.virt.libvirt.driver [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:02:45 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:02:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0ed691c9efbcda2feeff6f5bd8ee2f1b7fe6192de23eebb1b94af40b0e8291e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.226 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.226 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1280732, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.227 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Paused (Lifecycle Event)
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.273 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.277 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410165.1343846, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.277 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Resumed (Lifecycle Event)
Oct 02 13:02:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:45 compute-1 podman[298235]: 2025-10-02 13:02:45.291250861 +0000 UTC m=+0.435222405 container init dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:02:45 compute-1 podman[298235]: 2025-10-02 13:02:45.299245288 +0000 UTC m=+0.443216792 container start dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.316 2 INFO nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 10.04 seconds to spawn the instance on the hypervisor.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.317 2 DEBUG nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.320 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:02:45 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : New worker (298255) forked
Oct 02 13:02:45 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : Loading success.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.329 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.362 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.405 2 INFO nova.compute.manager [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 11.06 seconds to build instance.
Oct 02 13:02:45 compute-1 nova_compute[230518]: 2025-10-02 13:02:45.435 2 DEBUG oslo_concurrency.lockutils [None req-7baf6b7b-32b9-4b87-ad2d-d37c20f596f3 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:45 compute-1 ceph-mon[80926]: pgmap v2691: 305 pgs: 305 active+clean; 372 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Oct 02 13:02:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3140300285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/732233713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e352 e352: 3 total, 3 up, 3 in
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.586 2 DEBUG nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.588 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.588 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.589 2 DEBUG oslo_concurrency.lockutils [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.589 2 DEBUG nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:02:46 compute-1 nova_compute[230518]: 2025-10-02 13:02:46.590 2 WARNING nova.compute.manager [req-55724042-a919-48e6-aac1-1805045aaf86 req-26a9856b-8279-401e-bcf1-5ee2e799fdac 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct 02 13:02:46 compute-1 sudo[298264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:02:46 compute-1 sudo[298264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:46 compute-1 sudo[298264]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:46 compute-1 sudo[298289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:02:46 compute-1 sudo[298289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:02:46 compute-1 sudo[298289]: pam_unix(sudo:session): session closed for user root
Oct 02 13:02:47 compute-1 nova_compute[230518]: 2025-10-02 13:02:47.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:47 compute-1 ceph-mon[80926]: osdmap e352: 3 total, 3 up, 3 in
Oct 02 13:02:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:02:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:48 compute-1 ceph-mon[80926]: pgmap v2693: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 246 op/s
Oct 02 13:02:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2858843834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-1 NetworkManager[44960]: <info>  [1759410169.0869] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Oct 02 13:02:49 compute-1 NetworkManager[44960]: <info>  [1759410169.0882] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-1 ovn_controller[129257]: 2025-10-02T13:02:49Z|00728|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct 02 13:02:49 compute-1 ovn_controller[129257]: 2025-10-02T13:02:49Z|00729|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:02:49 compute-1 ceph-mon[80926]: pgmap v2694: 305 pgs: 305 active+clean; 331 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Oct 02 13:02:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1635756819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG nova.compute.manager [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG nova.compute.manager [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.373 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.374 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.374 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:02:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:49.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:49 compute-1 nova_compute[230518]: 2025-10-02 13:02:49.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1717772748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3217411682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:50 compute-1 nova_compute[230518]: 2025-10-02 13:02:50.764 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:02:50 compute-1 nova_compute[230518]: 2025-10-02 13:02:50.766 2 DEBUG nova.network.neutron [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:02:50 compute-1 nova_compute[230518]: 2025-10-02 13:02:50.793 2 DEBUG oslo_concurrency.lockutils [req-ed627347-4fe7-430e-88ac-a13ae6a56106 req-731d0586-30e7-438f-9a50-bab74bc616c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:02:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:51.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:51.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:51 compute-1 ceph-mon[80926]: pgmap v2695: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 250 op/s
Oct 02 13:02:52 compute-1 nova_compute[230518]: 2025-10-02 13:02:52.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:52 compute-1 podman[298316]: 2025-10-02 13:02:52.81721631 +0000 UTC m=+0.069332365 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 13:02:52 compute-1 podman[298315]: 2025-10-02 13:02:52.837025835 +0000 UTC m=+0.089532622 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 13:02:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3837070089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:53.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:53.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:54 compute-1 ceph-mon[80926]: pgmap v2696: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 228 op/s
Oct 02 13:02:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/771358143' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:02:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 e353: 3 total, 3 up, 3 in
Oct 02 13:02:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:54 compute-1 nova_compute[230518]: 2025-10-02 13:02:54.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:55.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:55 compute-1 ceph-mon[80926]: osdmap e353: 3 total, 3 up, 3 in
Oct 02 13:02:55 compute-1 ceph-mon[80926]: pgmap v2698: 305 pgs: 305 active+clean; 385 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.7 MiB/s wr, 196 op/s
Oct 02 13:02:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:55.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:57 compute-1 nova_compute[230518]: 2025-10-02 13:02:57.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:02:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:57.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:02:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:57 compute-1 ceph-mon[80926]: pgmap v2699: 305 pgs: 305 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.057 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.058 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.086 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.205 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.206 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.214 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.215 2 INFO nova.compute.claims [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.370 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:02:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1783475136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.893 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.902 2 DEBUG nova.compute.provider_tree [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.924 2 DEBUG nova.scheduler.client.report [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:02:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1783475136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.955 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:58 compute-1 nova_compute[230518]: 2025-10-02 13:02:58.956 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.036 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.037 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.047574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179047763, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 468, "num_deletes": 257, "total_data_size": 550306, "memory_usage": 560448, "flush_reason": "Manual Compaction"}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179053813, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 363267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64829, "largest_seqno": 65292, "table_properties": {"data_size": 360627, "index_size": 675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6460, "raw_average_key_size": 18, "raw_value_size": 355171, "raw_average_value_size": 1029, "num_data_blocks": 29, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410164, "oldest_key_time": 1759410164, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 6409 microseconds, and 3552 cpu microseconds.
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.053982) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 363267 bytes OK
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.054149) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055400) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055417) EVENT_LOG_v1 {"time_micros": 1759410179055411, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.055439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 547385, prev total WAL file size 547385, number of live WAL files 2.
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.056390) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353137' seq:0, type:0; will stop at (end)
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(354KB)], [129(11MB)]
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179056503, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 11962529, "oldest_snapshot_seqno": -1}
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.065 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.089 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 8707 keys, 11813076 bytes, temperature: kUnknown
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179144047, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 11813076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11756387, "index_size": 33792, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 229162, "raw_average_key_size": 26, "raw_value_size": 11603126, "raw_average_value_size": 1332, "num_data_blocks": 1295, "num_entries": 8707, "num_filter_entries": 8707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.144377) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11813076 bytes
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.145980) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.5 rd, 134.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(65.4) write-amplify(32.5) OK, records in: 9237, records dropped: 530 output_compression: NoCompression
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.145995) EVENT_LOG_v1 {"time_micros": 1759410179145988, "job": 82, "event": "compaction_finished", "compaction_time_micros": 87617, "compaction_time_cpu_micros": 31203, "output_level": 6, "num_output_files": 1, "total_output_size": 11813076, "num_input_records": 9237, "num_output_records": 8707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179146156, "job": 82, "event": "table_file_deletion", "file_number": 131}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410179147722, "job": 82, "event": "table_file_deletion", "file_number": 129}
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.056018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147819) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:02:59.147826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.215 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.218 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.219 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating image(s)
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.271 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:02:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:02:59.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.325 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.360 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.364 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.409 2 DEBUG nova.policy [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96fd589a75cb4fcfac0072edabb9b3a1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '64f187c60881475e9e1f062bb198d205', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:02:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.443 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.444 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.445 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.445 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:02:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:02:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:02:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:02:59.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.485 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.491 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:02:59 compute-1 nova_compute[230518]: 2025-10-02 13:02:59.938 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:02:59 compute-1 ceph-mon[80926]: pgmap v2700: 305 pgs: 305 active+clean; 422 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.6 MiB/s wr, 208 op/s
Oct 02 13:03:00 compute-1 ovn_controller[129257]: 2025-10-02T13:03:00Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:79:9d 10.100.0.10
Oct 02 13:03:00 compute-1 ovn_controller[129257]: 2025-10-02T13:03:00Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:79:9d 10.100.0.10
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.041 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] resizing rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.167 2 DEBUG nova.objects.instance [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'migration_context' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.183 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.184 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Ensure instance console log exists: /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.184 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.185 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.185 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:00 compute-1 nova_compute[230518]: 2025-10-02 13:03:00.198 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Successfully created port: 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:03:00 compute-1 ceph-mon[80926]: pgmap v2701: 305 pgs: 305 active+clean; 435 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 269 op/s
Oct 02 13:03:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:01.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:01.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.033 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Successfully updated port: 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.072 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.168 2 DEBUG nova.compute.manager [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.169 2 DEBUG nova.compute.manager [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.169 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:02 compute-1 nova_compute[230518]: 2025-10-02 13:03:02.240 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:03:03 compute-1 ceph-mon[80926]: pgmap v2702: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 364 op/s
Oct 02 13:03:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:03.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.328 2 DEBUG nova.network.neutron [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.348 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.348 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance network_info: |[{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.349 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.349 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.351 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start _get_guest_xml network_info=[{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.357 2 WARNING nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.361 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.361 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.364 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.364 2 DEBUG nova.virt.libvirt.host [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.365 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.365 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.366 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.367 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.368 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.368 2 DEBUG nova.virt.hardware [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.370 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:03.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:03:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3110993335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.840 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.883 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:03 compute-1 nova_compute[230518]: 2025-10-02 13:03:03.890 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3110993335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:03:04 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/425658778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.344 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.347 2 DEBUG nova.virt.libvirt.vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.348 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.349 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.350 2 DEBUG nova.objects.instance [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.369 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <uuid>fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</uuid>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <name>instance-000000b3</name>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:name>tempest-TestNetworkBasicOps-server-1086725747</nova:name>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:03:03</nova:creationTime>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:user uuid="96fd589a75cb4fcfac0072edabb9b3a1">tempest-TestNetworkBasicOps-1228914348-project-member</nova:user>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:project uuid="64f187c60881475e9e1f062bb198d205">tempest-TestNetworkBasicOps-1228914348</nova:project>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <nova:port uuid="4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8">
Oct 02 13:03:04 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <system>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="serial">fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="uuid">fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </system>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <os>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </os>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <features>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </features>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk">
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </source>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config">
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </source>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:03:04 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:30:ac:86"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <target dev="tap4fcd0b0b-1e"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/console.log" append="off"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <video>
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </video>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:03:04 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:03:04 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:03:04 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:03:04 compute-1 nova_compute[230518]: </domain>
Oct 02 13:03:04 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.371 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Preparing to wait for external event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.371 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.372 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.372 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.373 2 DEBUG nova.virt.libvirt.vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:02:59Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.373 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.374 2 DEBUG nova.network.os_vif_util [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.375 2 DEBUG os_vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.381 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fcd0b0b-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.382 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4fcd0b0b-1e, col_values=(('external_ids', {'iface-id': '4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ac:86', 'vm-uuid': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:04 compute-1 NetworkManager[44960]: <info>  [1759410184.3851] manager: (tap4fcd0b0b-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.391 2 INFO os_vif [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e')
Oct 02 13:03:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.457 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.457 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.458 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] No VIF found with MAC fa:16:3e:30:ac:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.459 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Using config drive
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.490 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:04 compute-1 nova_compute[230518]: 2025-10-02 13:03:04.997 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Creating config drive at /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.003 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1kn194i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.106 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.107 2 DEBUG nova.network.neutron [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.132 2 DEBUG oslo_concurrency.lockutils [req-a0932b96-6a6f-4595-b401-2433d9624058 req-9f44a657-60f0-43e5-9752-e887072fcef9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.144 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr1kn194i" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.173 2 DEBUG nova.storage.rbd_utils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] rbd image fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.177 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:05 compute-1 ceph-mon[80926]: pgmap v2703: 305 pgs: 305 active+clean; 446 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.5 MiB/s wr, 362 op/s
Oct 02 13:03:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/425658778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.432 2 DEBUG oslo_concurrency.processutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.433 2 INFO nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deleting local config drive /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13/disk.config because it was imported into RBD.
Oct 02 13:03:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:05 compute-1 kernel: tap4fcd0b0b-1e: entered promiscuous mode
Oct 02 13:03:05 compute-1 NetworkManager[44960]: <info>  [1759410185.4991] manager: (tap4fcd0b0b-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Oct 02 13:03:05 compute-1 ovn_controller[129257]: 2025-10-02T13:03:05Z|00730|binding|INFO|Claiming lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for this chassis.
Oct 02 13:03:05 compute-1 ovn_controller[129257]: 2025-10-02T13:03:05Z|00731|binding|INFO|4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8: Claiming fa:16:3e:30:ac:86 10.100.0.4
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.508 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ac:86 10.100.0.4'], port_security=['fa:16:3e:30:ac:86 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15970012-f057-462f-9dfb-1daddc0bd092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.509 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c bound to our chassis
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.511 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.529 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ddce0e-8d4d-4ad5-bbdd-14041466beb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 systemd-udevd[298681]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:03:05 compute-1 systemd-machined[188247]: New machine qemu-85-instance-000000b3.
Oct 02 13:03:05 compute-1 ovn_controller[129257]: 2025-10-02T13:03:05Z|00732|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 ovn-installed in OVS
Oct 02 13:03:05 compute-1 ovn_controller[129257]: 2025-10-02T13:03:05Z|00733|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 up in Southbound
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-1 systemd[1]: Started Virtual Machine qemu-85-instance-000000b3.
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.557 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c19809f4-d57b-4a40-96bc-8ed5092f1e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 NetworkManager[44960]: <info>  [1759410185.5649] device (tap4fcd0b0b-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.563 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d5bfdf1c-ef04-4b4e-95ff-f024ba357a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 NetworkManager[44960]: <info>  [1759410185.5658] device (tap4fcd0b0b-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.600 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[bceaee2e-5ab2-4c37-9c6d-118432a8f97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e6334703-d13b-4de9-9289-b872cb470dc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298692, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.645 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1d44f40c-9eee-4c0e-927e-e67203dd2c0f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802797, 'tstamp': 802797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298694, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802801, 'tstamp': 802801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298694, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.647 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.650 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.650 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.651 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:05.651 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.991 2 DEBUG nova.compute.manager [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.991 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.992 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.993 2 DEBUG oslo_concurrency.lockutils [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:05 compute-1 nova_compute[230518]: 2025-10-02 13:03:05.993 2 DEBUG nova.compute.manager [req-e8d3717f-3714-4a44-a2dc-371882b239b6 req-a620b23a-c022-4139-a0c4-486136213630 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Processing event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:03:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/636709142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:07 compute-1 ceph-mon[80926]: pgmap v2704: 305 pgs: 305 active+clean; 419 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.0 MiB/s wr, 319 op/s
Oct 02 13:03:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:07.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.316 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.3157213, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.317 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Started (Lifecycle Event)
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.319 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.323 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.327 2 INFO nova.virt.libvirt.driver [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance spawned successfully.
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.328 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.346 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.352 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.356 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.357 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.358 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.358 2 DEBUG nova.virt.libvirt.driver [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.390 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.390 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.3159416, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.390 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Paused (Lifecycle Event)
Oct 02 13:03:07 compute-1 ovn_controller[129257]: 2025-10-02T13:03:07Z|00734|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct 02 13:03:07 compute-1 ovn_controller[129257]: 2025-10-02T13:03:07Z|00735|binding|INFO|Releasing lport 61e15bc4-7cff-4f2c-a6c4-d987859313b6 from this chassis (sb_readonly=0)
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.423 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.426 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410187.323382, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.426 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Resumed (Lifecycle Event)
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.450 2 INFO nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 8.23 seconds to spawn the instance on the hypervisor.
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.451 2 DEBUG nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.460 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.466 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:03:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:07.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.506 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.578 2 INFO nova.compute.manager [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 9.40 seconds to build instance.
Oct 02 13:03:07 compute-1 nova_compute[230518]: 2025-10-02 13:03:07.600 2 DEBUG oslo_concurrency.lockutils [None req-ffcbc870-4b19-4a27-975f-4990a434a906 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.071 2 DEBUG nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG oslo_concurrency.lockutils [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.072 2 DEBUG nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:08 compute-1 nova_compute[230518]: 2025-10-02 13:03:08.073 2 WARNING nova.compute.manager [req-357790e2-203c-47f9-9c3f-2b8bbaf63fbb req-0f69f7cb-83c8-4aed-bda3-d7b9d80e7f88 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received unexpected event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with vm_state active and task_state None.
Oct 02 13:03:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:09.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:09 compute-1 ceph-mon[80926]: pgmap v2705: 305 pgs: 305 active+clean; 421 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 279 op/s
Oct 02 13:03:09 compute-1 nova_compute[230518]: 2025-10-02 13:03:09.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:09.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:09 compute-1 nova_compute[230518]: 2025-10-02 13:03:09.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:11 compute-1 nova_compute[230518]: 2025-10-02 13:03:11.003 2 DEBUG nova.compute.manager [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:11 compute-1 nova_compute[230518]: 2025-10-02 13:03:11.003 2 DEBUG nova.compute.manager [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:11 compute-1 nova_compute[230518]: 2025-10-02 13:03:11.005 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:11 compute-1 nova_compute[230518]: 2025-10-02 13:03:11.006 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:11 compute-1 nova_compute[230518]: 2025-10-02 13:03:11.006 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:03:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:11.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:03:11 compute-1 ceph-mon[80926]: pgmap v2706: 305 pgs: 305 active+clean; 451 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 309 op/s
Oct 02 13:03:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:12 compute-1 nova_compute[230518]: 2025-10-02 13:03:12.176 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:12 compute-1 nova_compute[230518]: 2025-10-02 13:03:12.177 2 DEBUG nova.network.neutron [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:12 compute-1 nova_compute[230518]: 2025-10-02 13:03:12.201 2 DEBUG oslo_concurrency.lockutils [req-bb258a24-0411-40a0-a9f2-befd6af1039d req-75688438-119f-4d95-87d5-5e30cce7142e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:12 compute-1 podman[298738]: 2025-10-02 13:03:12.822373573 +0000 UTC m=+0.068801629 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 13:03:12 compute-1 podman[298737]: 2025-10-02 13:03:12.849470674 +0000 UTC m=+0.093688382 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:03:12 compute-1 nova_compute[230518]: 2025-10-02 13:03:12.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:13 compute-1 ceph-mon[80926]: pgmap v2707: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.4 MiB/s wr, 350 op/s
Oct 02 13:03:14 compute-1 nova_compute[230518]: 2025-10-02 13:03:14.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:14 compute-1 nova_compute[230518]: 2025-10-02 13:03:14.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.078 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.078 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:15.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2532067342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:15.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.608 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.608 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.611 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.612 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.614 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.615 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:03:15 compute-1 ceph-mon[80926]: pgmap v2708: 305 pgs: 305 active+clean; 475 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 199 op/s
Oct 02 13:03:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2532067342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.766 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3759MB free_disk=20.786266326904297GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.767 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.871 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 198c2dd4-f103-4bba-9fc3-9e41f44e465e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:03:15 compute-1 nova_compute[230518]: 2025-10-02 13:03:15.872 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.057 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1071118977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.511 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.520 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.541 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.575 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:03:16 compute-1 nova_compute[230518]: 2025-10-02 13:03:16.575 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1071118977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:17.190 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:17 compute-1 nova_compute[230518]: 2025-10-02 13:03:17.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:17.194 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:03:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:17 compute-1 nova_compute[230518]: 2025-10-02 13:03:17.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:17.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:17 compute-1 ceph-mon[80926]: pgmap v2709: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 219 op/s
Oct 02 13:03:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2226714011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:19 compute-1 nova_compute[230518]: 2025-10-02 13:03:19.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:19.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:19 compute-1 nova_compute[230518]: 2025-10-02 13:03:19.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:19 compute-1 ceph-mon[80926]: pgmap v2710: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Oct 02 13:03:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2650273310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:20 compute-1 ovn_controller[129257]: 2025-10-02T13:03:20Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ac:86 10.100.0.4
Oct 02 13:03:20 compute-1 ovn_controller[129257]: 2025-10-02T13:03:20Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ac:86 10.100.0.4
Oct 02 13:03:20 compute-1 nova_compute[230518]: 2025-10-02 13:03:20.577 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:20 compute-1 nova_compute[230518]: 2025-10-02 13:03:20.577 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:20 compute-1 nova_compute[230518]: 2025-10-02 13:03:20.578 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:20 compute-1 nova_compute[230518]: 2025-10-02 13:03:20.578 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:03:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4166443559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:21.196 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:03:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:03:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:21.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:22 compute-1 nova_compute[230518]: 2025-10-02 13:03:22.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:22 compute-1 ceph-mon[80926]: pgmap v2711: 305 pgs: 305 active+clean; 459 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 215 op/s
Oct 02 13:03:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1206424062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/159773655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:23 compute-1 ceph-mon[80926]: pgmap v2712: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 218 op/s
Oct 02 13:03:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2732935419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2882987751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:23.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.803 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.805 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.807 2 INFO nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Terminating instance
Oct 02 13:03:23 compute-1 nova_compute[230518]: 2025-10-02 13:03:23.808 2 DEBUG nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:03:23 compute-1 podman[298825]: 2025-10-02 13:03:23.818119172 +0000 UTC m=+0.067626233 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 13:03:23 compute-1 podman[298826]: 2025-10-02 13:03:23.829121354 +0000 UTC m=+0.071504283 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:24 compute-1 kernel: tapb0acc3a3-80 (unregistering): left promiscuous mode
Oct 02 13:03:24 compute-1 NetworkManager[44960]: <info>  [1759410204.1621] device (tapb0acc3a3-80): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:03:24 compute-1 ovn_controller[129257]: 2025-10-02T13:03:24Z|00736|binding|INFO|Releasing lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 from this chassis (sb_readonly=0)
Oct 02 13:03:24 compute-1 ovn_controller[129257]: 2025-10-02T13:03:24Z|00737|binding|INFO|Setting lport b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 down in Southbound
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 ovn_controller[129257]: 2025-10-02T13:03:24Z|00738|binding|INFO|Removing iface tapb0acc3a3-80 ovn-installed in OVS
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.191 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:a1:f4 10.100.0.4'], port_security=['fa:16:3e:d4:a1:f4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7c31bb0f-22b5-42a4-9b38-8ad3daac689f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-052f341a-0628-4183-a5e0-76312bc986c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6be0e77fb5b4355b4f2276c9e57d2bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f50d96b5-3505-4637-897f-64b0dcf7d106', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d584c9bd-ee36-4364-be0c-b350c44644a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.192 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 in datapath 052f341a-0628-4183-a5e0-76312bc986c6 unbound from our chassis
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.194 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 052f341a-0628-4183-a5e0-76312bc986c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.195 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc67ddf-a4a6-4314-85a4-0648f0690986]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.196 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 namespace which is not needed anymore
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Oct 02 13:03:24 compute-1 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a6.scope: Consumed 19.710s CPU time.
Oct 02 13:03:24 compute-1 systemd-machined[188247]: Machine qemu-81-instance-000000a6 terminated.
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : haproxy version is 2.8.14-c23fe91
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [NOTICE]   (295710) : path to executable is /usr/sbin/haproxy
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : Exiting Master process...
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : Exiting Master process...
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [ALERT]    (295710) : Current worker (295714) exited with code 143 (Terminated)
Oct 02 13:03:24 compute-1 neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6[295686]: [WARNING]  (295710) : All workers exited. Exiting... (0)
Oct 02 13:03:24 compute-1 systemd[1]: libpod-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope: Deactivated successfully.
Oct 02 13:03:24 compute-1 conmon[295686]: conmon 559ac6788ba0e6dd3f8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope/container/memory.events
Oct 02 13:03:24 compute-1 podman[298886]: 2025-10-02 13:03:24.390657602 +0000 UTC m=+0.076233900 container died 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.396 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.396 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG oslo_concurrency.lockutils [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.397 2 DEBUG nova.compute.manager [req-fd47da0f-e02b-45a9-ad49-892b70b45318 req-9eee28de-b579-4042-898e-b599b6100a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-unplugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.449 2 INFO nova.virt.libvirt.driver [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Instance destroyed successfully.
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.449 2 DEBUG nova.objects.instance [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lazy-loading 'resources' on Instance uuid 7c31bb0f-22b5-42a4-9b38-8ad3daac689f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:24 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613-userdata-shm.mount: Deactivated successfully.
Oct 02 13:03:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-e09e09a9c189154a28ffa2be38e9a5e659937380e00a8af389eda1472e8aeea1-merged.mount: Deactivated successfully.
Oct 02 13:03:24 compute-1 podman[298886]: 2025-10-02 13:03:24.466447886 +0000 UTC m=+0.152024164 container cleanup 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:03:24 compute-1 systemd[1]: libpod-conmon-559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613.scope: Deactivated successfully.
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.482 2 DEBUG nova.virt.libvirt.vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:00:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-202037004',display_name='tempest-₡-202037004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest--202037004',id=166,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:00:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6be0e77fb5b4355b4f2276c9e57d2bd',ramdisk_id='',reservation_id='r-cvelnv0h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-146860306',owner_user_name='tempest-ServersTestJSON-146860306-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:00:38Z,user_data=None,user_id='b04159d5bffe4259876ce57aec09716e',uuid=7c31bb0f-22b5-42a4-9b38-8ad3daac689f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.484 2 DEBUG nova.network.os_vif_util [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converting VIF {"id": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "address": "fa:16:3e:d4:a1:f4", "network": {"id": "052f341a-0628-4183-a5e0-76312bc986c6", "bridge": "br-int", "label": "tempest-ServersTestJSON-918209516-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6be0e77fb5b4355b4f2276c9e57d2bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb0acc3a3-80", "ovs_interfaceid": "b0acc3a3-80b3-4ec7-97e7-2e5813eb8790", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.485 2 DEBUG nova.network.os_vif_util [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.486 2 DEBUG os_vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb0acc3a3-80, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.497 2 INFO os_vif [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d4:a1:f4,bridge_name='br-int',has_traffic_filtering=True,id=b0acc3a3-80b3-4ec7-97e7-2e5813eb8790,network=Network(052f341a-0628-4183-a5e0-76312bc986c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb0acc3a3-80')
Oct 02 13:03:24 compute-1 podman[298927]: 2025-10-02 13:03:24.553091158 +0000 UTC m=+0.063488653 container remove 559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b02c0d10-b57d-46ba-a506-8480b673b5aa]: (4, ('Thu Oct  2 01:03:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613)\n559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613\nThu Oct  2 01:03:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 (559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613)\n559ac6788ba0e6dd3f8a6eb8d97f13e9ee929f9f4ab5937e0e6b11f79f277613\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.562 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d282ce24-7a84-443b-ac2f-81541d9f041a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.562 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap052f341a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:24 compute-1 kernel: tap052f341a-00: left promiscuous mode
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.584 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31ebbbd3-e2e3-4a3b-abe8-59f02063d79d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.619 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43719249-31a8-4720-ade9-fab565a5f72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1ad831-c604-4441-aa99-8482097dfdc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.635 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c38bb953-cc78-44dd-800a-70ff1088c7f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790105, 'reachable_time': 25582, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298958, 'error': None, 'target': 'ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.637 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-052f341a-0628-4183-a5e0-76312bc986c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:03:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:24.637 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[22da72ca-4eeb-4a7e-8849-3c0c596b202a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:24 compute-1 systemd[1]: run-netns-ovnmeta\x2d052f341a\x2d0628\x2d4183\x2da5e0\x2d76312bc986c6.mount: Deactivated successfully.
Oct 02 13:03:24 compute-1 nova_compute[230518]: 2025-10-02 13:03:24.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:25 compute-1 nova_compute[230518]: 2025-10-02 13:03:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:25 compute-1 ceph-mon[80926]: pgmap v2713: 305 pgs: 305 active+clean; 434 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 424 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Oct 02 13:03:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.351 2 INFO nova.virt.libvirt.driver [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deleting instance files /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_del
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.351 2 INFO nova.virt.libvirt.driver [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deletion of /var/lib/nova/instances/7c31bb0f-22b5-42a4-9b38-8ad3daac689f_del complete
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.400 2 INFO nova.compute.manager [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 2.59 seconds to destroy the instance on the hypervisor.
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.400 2 DEBUG oslo.service.loopingcall [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.401 2 DEBUG nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.401 2 DEBUG nova.network.neutron [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.511 2 DEBUG nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG oslo_concurrency.lockutils [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.512 2 DEBUG nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] No waiting events found dispatching network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.512 2 WARNING nova.compute.manager [req-8f6fd5b1-24eb-41ec-9b29-0d21b16ccb52 req-0822f777-552c-4f08-9386-6326cf6a40dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received unexpected event network-vif-plugged-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 for instance with vm_state active and task_state deleting.
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.678 2 INFO nova.compute.manager [None req-95dcdc4b-205b-47af-bdcf-e5c2de5460c2 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output
Oct 02 13:03:26 compute-1 nova_compute[230518]: 2025-10-02 13:03:26.684 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:03:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:03:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:27.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:03:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:27.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:27 compute-1 ceph-mon[80926]: pgmap v2714: 305 pgs: 305 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 447 KiB/s rd, 3.0 MiB/s wr, 134 op/s
Oct 02 13:03:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3195859859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4201269597' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.182 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.276 2 DEBUG nova.network.neutron [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.297 2 INFO nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Took 1.90 seconds to deallocate network for instance.
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.354 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.355 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:03:28 compute-1 nova_compute[230518]: 2025-10-02 13:03:28.559 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.136 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Received event network-vif-deleted-b0acc3a3-80b3-4ec7-97e7-2e5813eb8790 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG nova.compute.manager [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.137 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:29.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.502 2 DEBUG nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG oslo_concurrency.lockutils [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.503 2 DEBUG nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.503 2 WARNING nova.compute.manager [req-86c4e679-37be-44be-801d-820bf5a64d35 req-2067eecb-33bc-40a3-a028-3b6be516921d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct 02 13:03:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:29.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:29 compute-1 ceph-mon[80926]: pgmap v2715: 305 pgs: 305 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct 02 13:03:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2987500234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.732 2 INFO nova.compute.manager [None req-44a3b4f0-358e-4091-b416-2cb8ee85a85d 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.738 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.760 2 DEBUG oslo_concurrency.processutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:29 compute-1 nova_compute[230518]: 2025-10-02 13:03:29.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1405131134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.190 2 DEBUG oslo_concurrency.processutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.197 2 DEBUG nova.compute.provider_tree [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.215 2 DEBUG nova.scheduler.client.report [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.246 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.281 2 INFO nova.scheduler.client.report [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Deleted allocations for instance 7c31bb0f-22b5-42a4-9b38-8ad3daac689f
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.414 2 DEBUG oslo_concurrency.lockutils [None req-b50822a7-c49a-4af3-8450-3c2f657c91b1 b04159d5bffe4259876ce57aec09716e a6be0e77fb5b4355b4f2276c9e57d2bd - - default default] Lock "7c31bb0f-22b5-42a4-9b38-8ad3daac689f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.549 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.562 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.563 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.563 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:30 compute-1 nova_compute[230518]: 2025-10-02 13:03:30.564 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1405131134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:31.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:31.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.663 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 WARNING nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.664 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.665 2 DEBUG nova.compute.manager [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.665 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:31 compute-1 ceph-mon[80926]: pgmap v2716: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 3.9 MiB/s wr, 183 op/s
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.926 2 INFO nova.compute.manager [None req-75f385ad-f632-411b-9f94-bbbb8d8fc619 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Get console output
Oct 02 13:03:31 compute-1 nova_compute[230518]: 2025-10-02 13:03:31.932 13161 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 02 13:03:32 compute-1 nova_compute[230518]: 2025-10-02 13:03:32.290 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:32 compute-1 nova_compute[230518]: 2025-10-02 13:03:32.291 2 DEBUG nova.network.neutron [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:32 compute-1 nova_compute[230518]: 2025-10-02 13:03:32.321 2 DEBUG oslo_concurrency.lockutils [req-4db31163-ef30-4189-a396-5b9c9ffe9164 req-93eed5ca-8165-4cf4-88a6-9b1c7d3cc916 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:32 compute-1 nova_compute[230518]: 2025-10-02 13:03:32.322 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:32 compute-1 nova_compute[230518]: 2025-10-02 13:03:32.322 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:33 compute-1 ceph-mon[80926]: pgmap v2717: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 200 op/s
Oct 02 13:03:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:33.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.735 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.736 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.737 2 WARNING nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.737 2 DEBUG oslo_concurrency.lockutils [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.738 2 DEBUG nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.738 2 WARNING nova.compute.manager [req-ae957882-3bb3-475a-8ea2-8101d8cfc5cb req-5ae63822-c26c-4f7c-b216-97b37b7f40a5 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state None.
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.906 2 DEBUG nova.compute.manager [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.907 2 DEBUG nova.compute.manager [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing instance network info cache due to event network-changed-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.907 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.908 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.908 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Refreshing network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.970 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.971 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.972 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.972 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.973 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.974 2 INFO nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Terminating instance
Oct 02 13:03:33 compute-1 nova_compute[230518]: 2025-10-02 13:03:33.976 2 DEBUG nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:03:34 compute-1 kernel: tap4fcd0b0b-1e (unregistering): left promiscuous mode
Oct 02 13:03:34 compute-1 NetworkManager[44960]: <info>  [1759410214.0404] device (tap4fcd0b0b-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:03:34 compute-1 ovn_controller[129257]: 2025-10-02T13:03:34Z|00739|binding|INFO|Releasing lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 from this chassis (sb_readonly=0)
Oct 02 13:03:34 compute-1 ovn_controller[129257]: 2025-10-02T13:03:34Z|00740|binding|INFO|Setting lport 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 down in Southbound
Oct 02 13:03:34 compute-1 ovn_controller[129257]: 2025-10-02T13:03:34Z|00741|binding|INFO|Removing iface tap4fcd0b0b-1e ovn-installed in OVS
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.072 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ac:86 10.100.0.4'], port_security=['fa:16:3e:30:ac:86 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '4', 'neutron:security_group_ids': '15970012-f057-462f-9dfb-1daddc0bd092', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.073 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c unbound from our chassis
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.075 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.097 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe33712-4c72-4eb0-916d-4f52a4653d09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Oct 02 13:03:34 compute-1 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000b3.scope: Consumed 13.955s CPU time.
Oct 02 13:03:34 compute-1 systemd-machined[188247]: Machine qemu-85-instance-000000b3 terminated.
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.135 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0f03bf-9d77-420e-b345-aa80e3526e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.140 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[15c9f1fc-9e1e-4773-a5be-929d3ff3d32c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.174 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34b0fbd5-d256-483f-9072-f2f396cef962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:03:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2683143405' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.198 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f853f182-dbce-4943-9bb2-d95b94ee9a2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0d2f6793-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:aa:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 222], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802787, 'reachable_time': 16255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298994, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.216 2 INFO nova.virt.libvirt.driver [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Instance destroyed successfully.
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.217 2 DEBUG nova.objects.instance [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75080f5-ab36-4552-aa70-63cb7b1df370]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802797, 'tstamp': 802797}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299001, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0d2f6793-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 802801, 'tstamp': 802801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299001, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.233 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.238 2 DEBUG nova.virt.libvirt.vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1086725747',display_name='tempest-TestNetworkBasicOps-server-1086725747',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1086725747',id=179,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPBABVkZI5Kx2o3IBmNelxKPrpcXX1o46OX/ra3kYdzmZFj/cCMhJ1511ulGrJ3qwtAcfGfzsPlSIVbMP2imMAvPUtwUpeHp534Qlat71VA1CohVAjbm/2X4YYdTo5vxIw==',key_name='tempest-TestNetworkBasicOps-1630436212',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:03:07Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-iilx3u08',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:03:07Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.239 2 DEBUG nova.network.os_vif_util [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.240 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d2f6793-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.241 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.241 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0d2f6793-30, col_values=(('external_ids', {'iface-id': '0dfea1be-4d56-45ad-8b1f-483fdf57471e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:34.242 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.242 2 DEBUG nova.network.os_vif_util [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.243 2 DEBUG os_vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.247 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fcd0b0b-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.254 2 INFO os_vif [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:ac:86,bridge_name='br-int',has_traffic_filtering=True,id=4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4fcd0b0b-1e')
Oct 02 13:03:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.735 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.736 2 DEBUG nova.network.neutron [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.756 2 DEBUG oslo_concurrency.lockutils [req-c6f8d5d0-a16a-44a7-8a79-cfd6f9748082 req-59922bd4-6091-4ee7-8b8f-fa933d694945 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:34 compute-1 nova_compute[230518]: 2025-10-02 13:03:34.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:35.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:35 compute-1 ceph-mon[80926]: pgmap v2718: 305 pgs: 305 active+clean; 326 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Oct 02 13:03:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:35.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.703 2 INFO nova.virt.libvirt.driver [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deleting instance files /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_del
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.704 2 INFO nova.virt.libvirt.driver [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deletion of /var/lib/nova/instances/fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13_del complete
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.765 2 INFO nova.compute.manager [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 1.79 seconds to destroy the instance on the hypervisor.
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.766 2 DEBUG oslo.service.loopingcall [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.767 2 DEBUG nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:03:35 compute-1 nova_compute[230518]: 2025-10-02 13:03:35.767 2 DEBUG nova.network.neutron [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.231 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.232 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.233 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.234 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-unplugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.234 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.235 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.235 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.236 2 DEBUG oslo_concurrency.lockutils [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.236 2 DEBUG nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] No waiting events found dispatching network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.236 2 WARNING nova.compute.manager [req-152afeee-d094-4bd4-aac0-fe0ce32cf699 req-b8b3f947-63c4-4127-ac73-b6c7e440bd13 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received unexpected event network-vif-plugged-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 for instance with vm_state active and task_state deleting.
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.407 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updated VIF entry in instance network info cache for port 4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.408 2 DEBUG nova.network.neutron [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [{"id": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "address": "fa:16:3e:30:ac:86", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4fcd0b0b-1e", "ovs_interfaceid": "4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.432 2 DEBUG oslo_concurrency.lockutils [req-2ea4346f-62e2-41b7-a1ee-a79d2e0e1027 req-784dfdff-8b4a-4097-9b79-cb3541058ae3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.874 2 DEBUG nova.network.neutron [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.902 2 INFO nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Took 1.14 seconds to deallocate network for instance.
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.978 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:36 compute-1 nova_compute[230518]: 2025-10-02 13:03:36.978 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:36 compute-1 ovn_controller[129257]: 2025-10-02T13:03:36Z|00742|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.030 2 DEBUG nova.compute.manager [req-9de1695c-b5e5-4b3f-9393-3e5a8613661c req-eba9d196-0d5d-4142-bd8c-c066f9813f9e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Received event network-vif-deleted-4fcd0b0b-1ebf-480a-9ad6-c399a9ba26f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.139 2 DEBUG oslo_concurrency.processutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:37.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:37.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4018230026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.590 2 DEBUG oslo_concurrency.processutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.597 2 DEBUG nova.compute.provider_tree [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.616 2 DEBUG nova.scheduler.client.report [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.635 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:37 compute-1 ceph-mon[80926]: pgmap v2719: 305 pgs: 305 active+clean; 280 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct 02 13:03:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4018230026' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.785 2 INFO nova.scheduler.client.report [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13
Oct 02 13:03:37 compute-1 nova_compute[230518]: 2025-10-02 13:03:37.881 2 DEBUG oslo_concurrency.lockutils [None req-559487d7-1596-4b73-a03e-8d07258d5c7b 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:39 compute-1 ovn_controller[129257]: 2025-10-02T13:03:39Z|00743|binding|INFO|Releasing lport 0dfea1be-4d56-45ad-8b1f-483fdf57471e from this chassis (sb_readonly=0)
Oct 02 13:03:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.448 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410204.445765, 7c31bb0f-22b5-42a4-9b38-8ad3daac689f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.449 2 INFO nova.compute.manager [-] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] VM Stopped (Lifecycle Event)
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.472 2 DEBUG nova.compute.manager [None req-292c8a2a-4f4a-4977-85c7-1033d0dac31d - - - - - -] [instance: 7c31bb0f-22b5-42a4-9b38-8ad3daac689f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:39.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:39 compute-1 nova_compute[230518]: 2025-10-02 13:03:39.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:39 compute-1 ceph-mon[80926]: pgmap v2720: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 177 op/s
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.565 2 DEBUG nova.compute.manager [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.565 2 DEBUG nova.compute.manager [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing instance network info cache due to event network-changed-075c87dd-2b98-4364-9955-b21fcbcd5b47. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.566 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Refreshing network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.643 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.644 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.645 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.646 2 INFO nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Terminating instance
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.648 2 DEBUG nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:03:40 compute-1 kernel: tap075c87dd-2b (unregistering): left promiscuous mode
Oct 02 13:03:40 compute-1 NetworkManager[44960]: <info>  [1759410220.7182] device (tap075c87dd-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:03:40 compute-1 ovn_controller[129257]: 2025-10-02T13:03:40Z|00744|binding|INFO|Releasing lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 from this chassis (sb_readonly=0)
Oct 02 13:03:40 compute-1 ovn_controller[129257]: 2025-10-02T13:03:40Z|00745|binding|INFO|Setting lport 075c87dd-2b98-4364-9955-b21fcbcd5b47 down in Southbound
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-1 ovn_controller[129257]: 2025-10-02T13:03:40Z|00746|binding|INFO|Removing iface tap075c87dd-2b ovn-installed in OVS
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Oct 02 13:03:40 compute-1 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000b0.scope: Consumed 15.452s CPU time.
Oct 02 13:03:40 compute-1 systemd-machined[188247]: Machine qemu-84-instance-000000b0 terminated.
Oct 02 13:03:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.822 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:79:9d 10.100.0.10'], port_security=['fa:16:3e:d9:79:9d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '198c2dd4-f103-4bba-9fc3-9e41f44e465e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '64f187c60881475e9e1f062bb198d205', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9eb88c13-ce55-413a-bc29-2cb1397ffc60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=66b7c563-3269-46d5-8080-eff6b31dc260, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=075c87dd-2b98-4364-9955-b21fcbcd5b47) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:03:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.823 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 075c87dd-2b98-4364-9955-b21fcbcd5b47 in datapath 0d2f6793-3f74-40a0-b15c-09282dcbf27c unbound from our chassis
Oct 02 13:03:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.824 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0d2f6793-3f74-40a0-b15c-09282dcbf27c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:03:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.825 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2143054a-11af-4823-adf0-acaec64fca88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:40.826 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c namespace which is not needed anymore
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.880 2 INFO nova.virt.libvirt.driver [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Instance destroyed successfully.
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.881 2 DEBUG nova.objects.instance [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lazy-loading 'resources' on Instance uuid 198c2dd4-f103-4bba-9fc3-9e41f44e465e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.900 2 DEBUG nova.virt.libvirt.vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:02:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-511078647',display_name='tempest-TestNetworkBasicOps-server-511078647',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-511078647',id=176,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJx6+OnWya4TKWI602K7FJTy0vvTR15qcn2a79LYqYLs4i+5cL4NrJf7MAy0xx98Y2Lu4xFova8uQh2TX9Sp+hRCxqeORgezwsMfN18SQyhFQii2RX1Yt01r5EbD581/cA==',key_name='tempest-TestNetworkBasicOps-68926903',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:02:45Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='64f187c60881475e9e1f062bb198d205',ramdisk_id='',reservation_id='r-a1p0qs88',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1228914348',owner_user_name='tempest-TestNetworkBasicOps-1228914348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:02:45Z,user_data=None,user_id='96fd589a75cb4fcfac0072edabb9b3a1',uuid=198c2dd4-f103-4bba-9fc3-9e41f44e465e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.901 2 DEBUG nova.network.os_vif_util [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converting VIF {"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.902 2 DEBUG nova.network.os_vif_util [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.903 2 DEBUG os_vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap075c87dd-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:03:40 compute-1 nova_compute[230518]: 2025-10-02 13:03:40.913 2 INFO os_vif [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:79:9d,bridge_name='br-int',has_traffic_filtering=True,id=075c87dd-2b98-4364-9955-b21fcbcd5b47,network=Network(0d2f6793-3f74-40a0-b15c-09282dcbf27c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap075c87dd-2b')
Oct 02 13:03:41 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : haproxy version is 2.8.14-c23fe91
Oct 02 13:03:41 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [NOTICE]   (298253) : path to executable is /usr/sbin/haproxy
Oct 02 13:03:41 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [WARNING]  (298253) : Exiting Master process...
Oct 02 13:03:41 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [ALERT]    (298253) : Current worker (298255) exited with code 143 (Terminated)
Oct 02 13:03:41 compute-1 neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c[298249]: [WARNING]  (298253) : All workers exited. Exiting... (0)
Oct 02 13:03:41 compute-1 systemd[1]: libpod-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope: Deactivated successfully.
Oct 02 13:03:41 compute-1 conmon[298249]: conmon dc7f32dc3d576e8fd343 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope/container/memory.events
Oct 02 13:03:41 compute-1 podman[299084]: 2025-10-02 13:03:41.042288167 +0000 UTC m=+0.118682367 container died dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.176 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.177 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.178 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.178 2 DEBUG oslo_concurrency.lockutils [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.179 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.180 2 DEBUG nova.compute.manager [req-1a5710bc-ab11-411a-bc62-527f37820c55 req-6cf92757-7ef8-449d-9b05-26beab6bc6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-unplugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:03:41 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b-userdata-shm.mount: Deactivated successfully.
Oct 02 13:03:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-f0ed691c9efbcda2feeff6f5bd8ee2f1b7fe6192de23eebb1b94af40b0e8291e-merged.mount: Deactivated successfully.
Oct 02 13:03:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:41.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-1 podman[299084]: 2025-10-02 13:03:41.490849355 +0000 UTC m=+0.567243535 container cleanup dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:03:41 compute-1 systemd[1]: libpod-conmon-dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b.scope: Deactivated successfully.
Oct 02 13:03:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:41 compute-1 podman[299130]: 2025-10-02 13:03:41.698606231 +0000 UTC m=+0.182916754 container remove dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.704 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c60f82cb-1478-478c-ba74-bfa77497215d]: (4, ('Thu Oct  2 01:03:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c (dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b)\ndc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b\nThu Oct  2 01:03:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c (dc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b)\ndc7f32dc3d576e8fd343d477ff3691b002fbb730a3874165429e265df3b16c4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.706 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[398b9b23-7554-4e96-b31c-1cd8554df2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.706 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d2f6793-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:41 compute-1 kernel: tap0d2f6793-30: left promiscuous mode
Oct 02 13:03:41 compute-1 nova_compute[230518]: 2025-10-02 13:03:41.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.723 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5e3d2c-dc6b-4ead-b162-9ec2948cd943]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91b88101-da18-4199-83b9-7e4ad7635b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.754 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bcae6337-edd6-4415-b832-26798dd943cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.773 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[43ef7fd2-2a82-4150-87e0-2b7450fae15e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 802779, 'reachable_time': 27329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299144, 'error': None, 'target': 'ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.775 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0d2f6793-3f74-40a0-b15c-09282dcbf27c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:03:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:03:41.775 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[43d5f045-8af0-40ab-b718-0eb1eaf45e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:03:41 compute-1 systemd[1]: run-netns-ovnmeta\x2d0d2f6793\x2d3f74\x2d40a0\x2db15c\x2d09282dcbf27c.mount: Deactivated successfully.
Oct 02 13:03:41 compute-1 ceph-mon[80926]: pgmap v2721: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 152 op/s
Oct 02 13:03:42 compute-1 nova_compute[230518]: 2025-10-02 13:03:42.767 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updated VIF entry in instance network info cache for port 075c87dd-2b98-4364-9955-b21fcbcd5b47. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:42 compute-1 nova_compute[230518]: 2025-10-02 13:03:42.769 2 DEBUG nova.network.neutron [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [{"id": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "address": "fa:16:3e:d9:79:9d", "network": {"id": "0d2f6793-3f74-40a0-b15c-09282dcbf27c", "bridge": "br-int", "label": "tempest-network-smoke--1948792144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "64f187c60881475e9e1f062bb198d205", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap075c87dd-2b", "ovs_interfaceid": "075c87dd-2b98-4364-9955-b21fcbcd5b47", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:42 compute-1 nova_compute[230518]: 2025-10-02 13:03:42.793 2 DEBUG oslo_concurrency.lockutils [req-45e6e3c7-1167-4e84-b12f-70b98ce2ce6a req-c25c0521-b51f-4c76-8f12-99141b4f7ce6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-198c2dd4-f103-4bba-9fc3-9e41f44e465e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:43 compute-1 ceph-mon[80926]: pgmap v2722: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 10 KiB/s wr, 113 op/s
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.072094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223072120, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 684, "num_deletes": 251, "total_data_size": 1130338, "memory_usage": 1158544, "flush_reason": "Manual Compaction"}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223077017, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 745488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65297, "largest_seqno": 65976, "table_properties": {"data_size": 742207, "index_size": 1188, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7812, "raw_average_key_size": 19, "raw_value_size": 735589, "raw_average_value_size": 1816, "num_data_blocks": 53, "num_entries": 405, "num_filter_entries": 405, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410179, "oldest_key_time": 1759410179, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 4953 microseconds, and 2801 cpu microseconds.
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.077045) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 745488 bytes OK
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.077062) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078292) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078304) EVENT_LOG_v1 {"time_micros": 1759410223078300, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078317) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1126607, prev total WAL file size 1126607, number of live WAL files 2.
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(728KB)], [132(11MB)]
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223078918, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 12558564, "oldest_snapshot_seqno": -1}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 8602 keys, 10708100 bytes, temperature: kUnknown
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223204963, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 10708100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10653069, "index_size": 32390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 227720, "raw_average_key_size": 26, "raw_value_size": 10502510, "raw_average_value_size": 1220, "num_data_blocks": 1230, "num_entries": 8602, "num_filter_entries": 8602, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.205405) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10708100 bytes
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.219589) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.7 rd, 85.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(31.2) write-amplify(14.4) OK, records in: 9112, records dropped: 510 output_compression: NoCompression
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.219634) EVENT_LOG_v1 {"time_micros": 1759410223219617, "job": 84, "event": "compaction_finished", "compaction_time_micros": 125978, "compaction_time_cpu_micros": 51665, "output_level": 6, "num_output_files": 1, "total_output_size": 10708100, "num_input_records": 9112, "num_output_records": 8602, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223220018, "job": 84, "event": "table_file_deletion", "file_number": 134}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410223222261, "job": 84, "event": "table_file_deletion", "file_number": 132}
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.078722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:03:43.222383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.263 2 INFO nova.virt.libvirt.driver [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deleting instance files /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e_del
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.264 2 INFO nova.virt.libvirt.driver [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deletion of /var/lib/nova/instances/198c2dd4-f103-4bba-9fc3-9e41f44e465e_del complete
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.292 2 DEBUG nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.292 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG oslo_concurrency.lockutils [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.293 2 DEBUG nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] No waiting events found dispatching network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.293 2 WARNING nova.compute.manager [req-eca166d3-51dd-4f15-b7ce-95468bbdb054 req-cec422ce-40f0-413c-9dac-1256c9c3aaa2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received unexpected event network-vif-plugged-075c87dd-2b98-4364-9955-b21fcbcd5b47 for instance with vm_state active and task_state deleting.
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.317 2 INFO nova.compute.manager [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 2.67 seconds to destroy the instance on the hypervisor.
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.318 2 DEBUG oslo.service.loopingcall [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.319 2 DEBUG nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.319 2 DEBUG nova.network.neutron [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:03:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:43.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:03:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:43.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:03:43 compute-1 nova_compute[230518]: 2025-10-02 13:03:43.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:43 compute-1 podman[299149]: 2025-10-02 13:03:43.816604649 +0000 UTC m=+0.064770423 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 13:03:43 compute-1 podman[299148]: 2025-10-02 13:03:43.888654788 +0000 UTC m=+0.128759631 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.043 2 DEBUG nova.network.neutron [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.071 2 INFO nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Took 0.75 seconds to deallocate network for instance.
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.100 2 DEBUG nova.compute.manager [req-adef30cb-7230-439e-9e4c-bb957dad98b7 req-ca4ac2f1-21ea-43f8-bf3a-2dbbbd2027d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Received event network-vif-deleted-075c87dd-2b98-4364-9955-b21fcbcd5b47 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.120 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.121 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.184 2 DEBUG oslo_concurrency.processutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3787610328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.616 2 DEBUG oslo_concurrency.processutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.625 2 DEBUG nova.compute.provider_tree [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.650 2 DEBUG nova.scheduler.client.report [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.676 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.703 2 INFO nova.scheduler.client.report [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Deleted allocations for instance 198c2dd4-f103-4bba-9fc3-9e41f44e465e
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:44 compute-1 nova_compute[230518]: 2025-10-02 13:03:44.783 2 DEBUG oslo_concurrency.lockutils [None req-6efb2d2b-ab05-43ff-ad17-2dacb9aac37a 96fd589a75cb4fcfac0072edabb9b3a1 64f187c60881475e9e1f062bb198d205 - - default default] Lock "198c2dd4-f103-4bba-9fc3-9e41f44e465e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:45 compute-1 ceph-mon[80926]: pgmap v2723: 305 pgs: 305 active+clean; 218 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 5.7 KiB/s wr, 70 op/s
Oct 02 13:03:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3787610328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:45.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:45.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:45 compute-1 nova_compute[230518]: 2025-10-02 13:03:45.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:47 compute-1 sudo[299215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:47 compute-1 sudo[299215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-1 sudo[299215]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-1 sudo[299240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:03:47 compute-1 sudo[299240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-1 sudo[299240]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-1 ceph-mon[80926]: pgmap v2724: 305 pgs: 305 active+clean; 189 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.2 MiB/s wr, 123 op/s
Oct 02 13:03:47 compute-1 sudo[299265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:47 compute-1 sudo[299265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-1 sudo[299265]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:47 compute-1 sudo[299290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:03:47 compute-1 sudo[299290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:47.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:47 compute-1 sudo[299290]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:03:48 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:03:49 compute-1 nova_compute[230518]: 2025-10-02 13:03:49.214 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410214.2134264, fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:49 compute-1 nova_compute[230518]: 2025-10-02 13:03:49.215 2 INFO nova.compute.manager [-] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] VM Stopped (Lifecycle Event)
Oct 02 13:03:49 compute-1 nova_compute[230518]: 2025-10-02 13:03:49.238 2 DEBUG nova.compute.manager [None req-ff504c11-b5e1-41cb-970b-8a5848d9233c - - - - - -] [instance: fbe2bcad-a54f-4e3c-9866-1a8ae97e9d13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:49.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:49 compute-1 ceph-mon[80926]: pgmap v2725: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:03:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:49.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:49 compute-1 nova_compute[230518]: 2025-10-02 13:03:49.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:50 compute-1 nova_compute[230518]: 2025-10-02 13:03:50.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:51 compute-1 ceph-mon[80926]: pgmap v2726: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct 02 13:03:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:51.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.686 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.702 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.773 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.773 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.779 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.780 2 INFO nova.compute.claims [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:03:52 compute-1 nova_compute[230518]: 2025-10-02 13:03:52.899 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:03:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1157188128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.322 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.329 2 DEBUG nova.compute.provider_tree [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.348 2 DEBUG nova.scheduler.client.report [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.378 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.378 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:03:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:53.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.424 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.425 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.444 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.464 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.570 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.572 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.572 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating image(s)
Oct 02 13:03:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.596 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.625 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.656 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.661 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.727 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.728 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.729 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.729 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.750 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.754 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ea034622-0a48-4de6-8d68-0f2240b54214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:53 compute-1 nova_compute[230518]: 2025-10-02 13:03:53.781 2 DEBUG nova.policy [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5206d24fd75a48758994a57e7fd259f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52dd3c4419794d0fbecd536c5088c60f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:03:53 compute-1 ceph-mon[80926]: pgmap v2727: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:03:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1157188128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:03:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:54 compute-1 nova_compute[230518]: 2025-10-02 13:03:54.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:54 compute-1 podman[299459]: 2025-10-02 13:03:54.847706838 +0000 UTC m=+0.103879458 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:03:54 compute-1 podman[299460]: 2025-10-02 13:03:54.863753257 +0000 UTC m=+0.109801493 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.027 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Successfully created port: 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:03:55 compute-1 sudo[299501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:03:55 compute-1 sudo[299501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:55 compute-1 sudo[299501]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:55 compute-1 sudo[299526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:03:55 compute-1 sudo[299526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:03:55 compute-1 sudo[299526]: pam_unix(sudo:session): session closed for user root
Oct 02 13:03:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.408 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ea034622-0a48-4de6-8d68-0f2240b54214_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.467 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] resizing rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:03:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:55 compute-1 ceph-mon[80926]: pgmap v2728: 305 pgs: 305 active+clean; 200 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct 02 13:03:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.878 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410220.8763592, 198c2dd4-f103-4bba-9fc3-9e41f44e465e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.878 2 INFO nova.compute.manager [-] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] VM Stopped (Lifecycle Event)
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.900 2 DEBUG nova.compute.manager [None req-da9cd9dd-dcc3-4f26-bee5-e61a4ccb0586 - - - - - -] [instance: 198c2dd4-f103-4bba-9fc3-9e41f44e465e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:03:55 compute-1 nova_compute[230518]: 2025-10-02 13:03:55.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.282 2 DEBUG nova.objects.instance [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'migration_context' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.297 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.298 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Ensure instance console log exists: /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.298 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.299 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.299 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.950 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Successfully updated port: 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.983 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.983 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:56 compute-1 nova_compute[230518]: 2025-10-02 13:03:56.984 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:03:57 compute-1 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG nova.compute.manager [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:03:57 compute-1 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG nova.compute.manager [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing instance network info cache due to event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:03:57 compute-1 nova_compute[230518]: 2025-10-02 13:03:57.110 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:03:57 compute-1 nova_compute[230518]: 2025-10-02 13:03:57.206 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:03:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:57.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:57 compute-1 ceph-mon[80926]: pgmap v2729: 305 pgs: 305 active+clean; 225 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.2 MiB/s wr, 94 op/s
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.247 2 DEBUG nova.network.neutron [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance network_info: |[{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.268 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.271 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Start _get_guest_xml network_info=[{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.275 2 WARNING nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.280 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.280 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.285 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.286 2 DEBUG nova.virt.libvirt.host [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.287 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.287 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.288 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.289 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.290 2 DEBUG nova.virt.hardware [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.292 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:03:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2078953066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.708 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.749 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:58 compute-1 nova_compute[230518]: 2025-10-02 13:03:58.755 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:03:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3142643056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:59 compute-1 ceph-mon[80926]: pgmap v2730: 305 pgs: 305 active+clean; 238 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Oct 02 13:03:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2078953066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.217 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.219 2 DEBUG nova.virt.libvirt.vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.220 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.220 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.221 2 DEBUG nova.objects.instance [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.235 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <uuid>ea034622-0a48-4de6-8d68-0f2240b54214</uuid>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <name>instance-000000b5</name>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersNegativeTestJSON-server-881712342</nova:name>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:03:58</nova:creationTime>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:user uuid="5206d24fd75a48758994a57e7fd259f2">tempest-ServersNegativeTestJSON-1205930452-project-member</nova:user>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:project uuid="52dd3c4419794d0fbecd536c5088c60f">tempest-ServersNegativeTestJSON-1205930452</nova:project>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <nova:port uuid="55d951c1-1ce9-4d4a-979c-9be9aef7e283">
Oct 02 13:03:59 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <system>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="serial">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="uuid">ea034622-0a48-4de6-8d68-0f2240b54214</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </system>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <os>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </os>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <features>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </features>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk">
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </source>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk.config">
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </source>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:03:59 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:e8:64:21"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <target dev="tap55d951c1-1c"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/console.log" append="off"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <video>
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </video>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:03:59 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:03:59 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:03:59 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:03:59 compute-1 nova_compute[230518]: </domain>
Oct 02 13:03:59 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.236 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Preparing to wait for external event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.237 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.virt.libvirt.vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNega
tiveTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:03:53Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.238 2 DEBUG nova.network.os_vif_util [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.239 2 DEBUG os_vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.244 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55d951c1-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55d951c1-1c, col_values=(('external_ids', {'iface-id': '55d951c1-1ce9-4d4a-979c-9be9aef7e283', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:64:21', 'vm-uuid': 'ea034622-0a48-4de6-8d68-0f2240b54214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:03:59 compute-1 NetworkManager[44960]: <info>  [1759410239.2485] manager: (tap55d951c1-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.257 2 INFO os_vif [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.314 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.315 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.315 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No VIF found with MAC fa:16:3e:e8:64:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.316 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Using config drive
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.353 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:03:59.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.567 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated VIF entry in instance network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.568 2 DEBUG nova.network.neutron [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.583 2 DEBUG oslo_concurrency.lockutils [req-3eebd9d5-2ede-40b0-a006-da2651bb73b1 req-77b225cd-5746-4eb3-9e92-cd7a4459879b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:03:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:03:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:03:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:03:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.760 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Creating config drive at /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.769 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfr6dbiyj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.931 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfr6dbiyj" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.961 2 DEBUG nova.storage.rbd_utils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ea034622-0a48-4de6-8d68-0f2240b54214_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:03:59 compute-1 nova_compute[230518]: 2025-10-02 13:03:59.964 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3142643056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.404 2 DEBUG oslo_concurrency.processutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config ea034622-0a48-4de6-8d68-0f2240b54214_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.405 2 INFO nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting local config drive /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214/disk.config because it was imported into RBD.
Oct 02 13:04:01 compute-1 kernel: tap55d951c1-1c: entered promiscuous mode
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.4751] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 ovn_controller[129257]: 2025-10-02T13:04:01Z|00747|binding|INFO|Claiming lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 for this chassis.
Oct 02 13:04:01 compute-1 ovn_controller[129257]: 2025-10-02T13:04:01Z|00748|binding|INFO|55d951c1-1ce9-4d4a-979c-9be9aef7e283: Claiming fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.493 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.496 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.500 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:04:01 compute-1 systemd-udevd[299759]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:01 compute-1 systemd-machined[188247]: New machine qemu-86-instance-000000b5.
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.522 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49444a95-bad8-4519-abd6-67a6ce6403fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.524 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb07d0c6a-51 in ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.527 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb07d0c6a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60c98234-b684-48ec-b0d6-a3266d75347f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.5288] device (tap55d951c1-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.5297] device (tap55d951c1-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.530 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac345e59-0437-43ce-85ed-ca871b0d6cd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.551 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[799eb589-f428-4622-923d-5fdef5530ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 systemd[1]: Started Virtual Machine qemu-86-instance-000000b5.
Oct 02 13:04:01 compute-1 ovn_controller[129257]: 2025-10-02T13:04:01Z|00749|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 ovn-installed in OVS
Oct 02 13:04:01 compute-1 ovn_controller[129257]: 2025-10-02T13:04:01Z|00750|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 up in Southbound
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.580 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d93bb357-be68-4998-bb97-f76180112d54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.618 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcd37d8-c58a-4dca-bdb9-5ecc6d03b07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.6256] manager: (tapb07d0c6a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/348)
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1633adf1-4a39-467d-8eba-83b68d2ce3f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.672 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[03b310dc-ffae-4753-bc6c-51fa88267380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.676 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8c55bcc8-e28d-4c69-b720-7a6c2285ccf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.7021] device (tapb07d0c6a-50): carrier: link connected
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.709 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[32389b2d-f085-47b6-9943-eeefe6851226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[57194fe2-28d8-439f-87ad-bab25362d2c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299792, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.752 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9238ef81-b963-42f6-a6ca-132ceb369356]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:4808'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810526, 'tstamp': 810526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299793, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.774 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d37f63f0-bcd9-477b-b891-191948d6d36c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299794, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.819 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e58bcb-2fa5-467b-ac5d-12cf536ac50b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.880 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[096ecf0c-41b8-4613-b44b-0cc6e5c8b1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.883 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG nova.compute.manager [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.883 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.884 2 DEBUG oslo_concurrency.lockutils [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.884 2 DEBUG nova.compute.manager [req-5d2a43b8-814e-44b1-aa89-e38458123563 req-0d299fa7-f882-45b2-862f-75f07d94ea30 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Processing event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 kernel: tapb07d0c6a-50: entered promiscuous mode
Oct 02 13:04:01 compute-1 NetworkManager[44960]: <info>  [1759410241.8862] manager: (tapb07d0c6a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.888 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:01 compute-1 ovn_controller[129257]: 2025-10-02T13:04:01Z|00751|binding|INFO|Releasing lport 874a9fce-3ef5-498a-a977-43087c73ea46 from this chassis (sb_readonly=0)
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.906 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:04:01 compute-1 nova_compute[230518]: 2025-10-02 13:04:01.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.907 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbbb28e-e294-48eb-8c35-3ed1f4aa03a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.908 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/b07d0c6a-5988-4afb-b4ba-d4048578b224.pid.haproxy
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:04:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:01.909 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'env', 'PROCESS_TAG=haproxy-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b07d0c6a-5988-4afb-b4ba-d4048578b224.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:04:01 compute-1 ceph-mon[80926]: pgmap v2731: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3400097481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-1 podman[299868]: 2025-10-02 13:04:02.279751591 +0000 UTC m=+0.072361110 container create 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:04:02 compute-1 podman[299868]: 2025-10-02 13:04:02.228335083 +0000 UTC m=+0.020944622 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:04:02 compute-1 systemd[1]: Started libpod-conmon-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope.
Oct 02 13:04:02 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:04:02 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afce76e5b3e048a73313ef5d27cfa735799b4e55d4b12602e339b371cc625d3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:04:02 compute-1 podman[299868]: 2025-10-02 13:04:02.368164938 +0000 UTC m=+0.160774477 container init 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:04:02 compute-1 podman[299868]: 2025-10-02 13:04:02.373135142 +0000 UTC m=+0.165744661 container start 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 13:04:02 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : New worker (299889) forked
Oct 02 13:04:02 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : Loading success.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.459 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.460 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.4603882, ea034622-0a48-4de6-8d68-0f2240b54214 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.461 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Started (Lifecycle Event)
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.463 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.468 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance spawned successfully.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.468 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.489 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.494 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.497 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.498 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.499 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.499 2 DEBUG nova.virt.libvirt.driver [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.551 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.551 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.461399, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.552 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.602 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.605 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410242.4638412, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.605 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.621 2 INFO nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 9.05 seconds to spawn the instance on the hypervisor.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.622 2 DEBUG nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.631 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.634 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.662 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.690 2 INFO nova.compute.manager [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Took 9.94 seconds to build instance.
Oct 02 13:04:02 compute-1 nova_compute[230518]: 2025-10-02 13:04:02.720 2 DEBUG oslo_concurrency.lockutils [None req-cbde80cd-991e-4a01-a9ec-ab9359edaa79 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2489860827' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1058577772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:03.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:03.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:03 compute-1 ceph-mon[80926]: pgmap v2732: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.096 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.097 2 DEBUG oslo_concurrency.lockutils [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.097 2 DEBUG nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.097 2 WARNING nova.compute.manager [req-0bdc5757-4a7f-4cdb-93bb-ebe3b03858b2 req-cd9d10f0-12d0-4182-9297-c127c7a645e4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state None.
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:04 compute-1 nova_compute[230518]: 2025-10-02 13:04:04.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:05 compute-1 ceph-mon[80926]: pgmap v2733: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct 02 13:04:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:04:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:04:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:04:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:04:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:05.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:05.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2200722556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:04:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4228384104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:04:07 compute-1 ceph-mon[80926]: pgmap v2734: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 144 op/s
Oct 02 13:04:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:07.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:07.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.126 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.127 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.144 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.226 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.227 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.234 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.235 2 INFO nova.compute.claims [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:04:09 compute-1 ceph-mon[80926]: pgmap v2735: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.7 MiB/s wr, 209 op/s
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.355 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:09.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1182037887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.825 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.832 2 DEBUG nova.compute.provider_tree [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.875 2 DEBUG nova.scheduler.client.report [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.921 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.921 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.975 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.976 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:04:09 compute-1 nova_compute[230518]: 2025-10-02 13:04:09.998 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.023 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.109 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.110 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.110 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating image(s)
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.135 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.161 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.187 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.190 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.218 2 DEBUG nova.policy [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5206d24fd75a48758994a57e7fd259f2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '52dd3c4419794d0fbecd536c5088c60f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.253 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.254 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/642492568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1182037887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1501996238' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.285 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.289 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.573 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.648 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] resizing rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.764 2 DEBUG nova.objects.instance [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'migration_context' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.780 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Ensure instance console log exists: /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:10 compute-1 nova_compute[230518]: 2025-10-02 13:04:10.781 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:11 compute-1 ceph-mon[80926]: pgmap v2736: 305 pgs: 305 active+clean; 335 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Oct 02 13:04:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:11.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:12 compute-1 nova_compute[230518]: 2025-10-02 13:04:12.194 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Successfully created port: b3e07905-2e01-4835-9249-b8d3a5c67f76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:04:13 compute-1 ceph-mon[80926]: pgmap v2737: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 216 op/s
Oct 02 13:04:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/647808684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1493426402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.290 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Successfully updated port: b3e07905-2e01-4835-9249-b8d3a5c67f76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.302 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.389 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-changed-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.390 2 DEBUG nova.compute.manager [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Refreshing instance network info cache due to event network-changed-b3e07905-2e01-4835-9249-b8d3a5c67f76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.390 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:04:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.455 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:04:14 compute-1 nova_compute[230518]: 2025-10-02 13:04:14.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:14 compute-1 podman[300087]: 2025-10-02 13:04:14.825030777 +0000 UTC m=+0.068111677 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:04:14 compute-1 podman[300086]: 2025-10-02 13:04:14.863265935 +0000 UTC m=+0.104092815 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.292 2 DEBUG nova.network.neutron [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.340 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance network_info: |[{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.341 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Refreshing network info cache for port b3e07905-2e01-4835-9249-b8d3a5c67f76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.344 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start _get_guest_xml network_info=[{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.350 2 WARNING nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.355 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.356 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.359 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.360 2 DEBUG nova.virt.libvirt.host [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.361 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:04:15 compute-1 ceph-mon[80926]: pgmap v2738: 305 pgs: 305 active+clean; 351 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 215 op/s
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.362 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.363 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.363 2 DEBUG nova.virt.hardware [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.365 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:15 compute-1 ovn_controller[129257]: 2025-10-02T13:04:15Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:04:15 compute-1 ovn_controller[129257]: 2025-10-02T13:04:15Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:64:21 10.100.0.3
Oct 02 13:04:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1864796394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.791 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.818 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:15 compute-1 nova_compute[230518]: 2025-10-02 13:04:15.821 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:04:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2649936971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.247 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.249 2 DEBUG nova.virt.libvirt.vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:10Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.250 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.251 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.252 2 DEBUG nova.objects.instance [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'pci_devices' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.265 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <uuid>ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</uuid>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <name>instance-000000b8</name>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:name>tempest-ServersNegativeTestJSON-server-1005291499</nova:name>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:04:15</nova:creationTime>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:user uuid="5206d24fd75a48758994a57e7fd259f2">tempest-ServersNegativeTestJSON-1205930452-project-member</nova:user>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:project uuid="52dd3c4419794d0fbecd536c5088c60f">tempest-ServersNegativeTestJSON-1205930452</nova:project>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <nova:port uuid="b3e07905-2e01-4835-9249-b8d3a5c67f76">
Oct 02 13:04:16 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <system>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="serial">ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="uuid">ab1d03d2-f5f1-479c-9c49-4519bb6f6b53</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </system>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <os>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </os>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <features>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </features>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk">
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </source>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config">
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </source>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:04:16 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:6b:50:dc"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <target dev="tapb3e07905-2e"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/console.log" append="off"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <video>
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </video>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:04:16 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:04:16 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:04:16 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:04:16 compute-1 nova_compute[230518]: </domain>
Oct 02 13:04:16 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Preparing to wait for external event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.266 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.virt.libvirt.vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:04:10Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.267 2 DEBUG nova.network.os_vif_util [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.268 2 DEBUG os_vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.269 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e07905-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e07905-2e, col_values=(('external_ids', {'iface-id': 'b3e07905-2e01-4835-9249-b8d3a5c67f76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:50:dc', 'vm-uuid': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-1 NetworkManager[44960]: <info>  [1759410256.2745] manager: (tapb3e07905-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.280 2 INFO os_vif [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e')
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.332 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No VIF found with MAC fa:16:3e:6b:50:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.333 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Using config drive
Oct 02 13:04:16 compute-1 nova_compute[230518]: 2025-10-02 13:04:16.361 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1864796394' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2649936971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.026 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Creating config drive at /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.030 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sjzgamx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.058 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.165 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2sjzgamx" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.195 2 DEBUG nova.storage.rbd_utils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] rbd image ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.199 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.363 2 DEBUG oslo_concurrency.processutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.364 2 INFO nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deleting local config drive /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53/disk.config because it was imported into RBD.
Oct 02 13:04:17 compute-1 kernel: tapb3e07905-2e: entered promiscuous mode
Oct 02 13:04:17 compute-1 NetworkManager[44960]: <info>  [1759410257.4119] manager: (tapb3e07905-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/351)
Oct 02 13:04:17 compute-1 ovn_controller[129257]: 2025-10-02T13:04:17Z|00752|binding|INFO|Claiming lport b3e07905-2e01-4835-9249-b8d3a5c67f76 for this chassis.
Oct 02 13:04:17 compute-1 ovn_controller[129257]: 2025-10-02T13:04:17Z|00753|binding|INFO|b3e07905-2e01-4835-9249-b8d3a5c67f76: Claiming fa:16:3e:6b:50:dc 10.100.0.9
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.419 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:dc 10.100.0.9'], port_security=['fa:16:3e:6b:50:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b3e07905-2e01-4835-9249-b8d3a5c67f76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.420 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b3e07905-2e01-4835-9249-b8d3a5c67f76 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 bound to our chassis
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.423 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:04:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:17.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:17 compute-1 ovn_controller[129257]: 2025-10-02T13:04:17Z|00754|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 ovn-installed in OVS
Oct 02 13:04:17 compute-1 ovn_controller[129257]: 2025-10-02T13:04:17Z|00755|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 up in Southbound
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:17 compute-1 ceph-mon[80926]: pgmap v2739: 305 pgs: 305 active+clean; 402 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.7 MiB/s wr, 302 op/s
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.442 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0945e480-aaab-460d-879b-ce58b7707862]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 systemd-udevd[300284]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:17 compute-1 systemd-machined[188247]: New machine qemu-87-instance-000000b8.
Oct 02 13:04:17 compute-1 NetworkManager[44960]: <info>  [1759410257.4662] device (tapb3e07905-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:04:17 compute-1 NetworkManager[44960]: <info>  [1759410257.4670] device (tapb3e07905-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:04:17 compute-1 systemd[1]: Started Virtual Machine qemu-87-instance-000000b8.
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.479 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7d96c8-f8b1-46d2-beee-b44dc627cf8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.483 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[9088010c-9e93-4696-8444-b7b3753cee55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.509 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e71ae27e-8f92-471e-8426-cbdf00be1998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.527 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9814d27c-dabe-4b66-8eba-2734117c1f5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300296, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1373024404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.544 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b58442-a371-4005-8762-d66e6e05eee9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810541, 'tstamp': 810541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300299, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810543, 'tstamp': 810543}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300299, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.546 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.552 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.551 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:17.552 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.655 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.655 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.659 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.659 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.821 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.822 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4101MB free_disk=20.84320831298828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.823 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.823 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ea034622-0a48-4de6-8d68-0f2240b54214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:04:17 compute-1 nova_compute[230518]: 2025-10-02 13:04:17.952 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.038 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1373024404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3310282002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.520 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.538 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.561 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.561 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.795 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updated VIF entry in instance network info cache for port b3e07905-2e01-4835-9249-b8d3a5c67f76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.796 2 DEBUG nova.network.neutron [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [{"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:18 compute-1 nova_compute[230518]: 2025-10-02 13:04:18.821 2 DEBUG oslo_concurrency.lockutils [req-d1b1e304-30d2-49c6-b338-51d1dedc2b6e req-1b93337c-7c8c-4268-a776-8ba93e308c36 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.040 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.0395472, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.040 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Started (Lifecycle Event)
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.078 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.083 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.0399988, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.083 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Paused (Lifecycle Event)
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.122 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.126 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.142 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.160 2 DEBUG nova.compute.manager [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.160 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG oslo_concurrency.lockutils [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.161 2 DEBUG nova.compute.manager [req-60972a49-f209-4d7d-8eb7-719dfb874dea req-b8b1f2c0-bfb1-4a5b-a102-994ecd7232f9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Processing event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.162 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.166 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.166 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410259.1656766, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.167 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Resumed (Lifecycle Event)
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.172 2 INFO nova.virt.libvirt.driver [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance spawned successfully.
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.172 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.203 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.210 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.214 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.215 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.215 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.216 2 DEBUG nova.virt.libvirt.driver [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.318 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.354 2 INFO nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 9.24 seconds to spawn the instance on the hypervisor.
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.356 2 DEBUG nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:19.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.433 2 INFO nova.compute.manager [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 10.23 seconds to build instance.
Oct 02 13:04:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.450 2 DEBUG oslo_concurrency.lockutils [None req-b30869f9-000d-4612-a8f7-666abfb5d606 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:19 compute-1 ceph-mon[80926]: pgmap v2740: 305 pgs: 305 active+clean; 412 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.3 MiB/s wr, 289 op/s
Oct 02 13:04:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3310282002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:19 compute-1 nova_compute[230518]: 2025-10-02 13:04:19.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.823 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.824 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.825 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.827 2 INFO nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Terminating instance
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.828 2 DEBUG nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:04:20 compute-1 kernel: tapb3e07905-2e (unregistering): left promiscuous mode
Oct 02 13:04:20 compute-1 NetworkManager[44960]: <info>  [1759410260.8827] device (tapb3e07905-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:04:20 compute-1 ovn_controller[129257]: 2025-10-02T13:04:20Z|00756|binding|INFO|Releasing lport b3e07905-2e01-4835-9249-b8d3a5c67f76 from this chassis (sb_readonly=0)
Oct 02 13:04:20 compute-1 ovn_controller[129257]: 2025-10-02T13:04:20Z|00757|binding|INFO|Setting lport b3e07905-2e01-4835-9249-b8d3a5c67f76 down in Southbound
Oct 02 13:04:20 compute-1 ovn_controller[129257]: 2025-10-02T13:04:20Z|00758|binding|INFO|Removing iface tapb3e07905-2e ovn-installed in OVS
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-1 nova_compute[230518]: 2025-10-02 13:04:20.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.959 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:50:dc 10.100.0.9'], port_security=['fa:16:3e:6b:50:dc 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ab1d03d2-f5f1-479c-9c49-4519bb6f6b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=b3e07905-2e01-4835-9249-b8d3a5c67f76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.960 138374 INFO neutron.agent.ovn.metadata.agent [-] Port b3e07905-2e01-4835-9249-b8d3a5c67f76 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis
Oct 02 13:04:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.963 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b07d0c6a-5988-4afb-b4ba-d4048578b224
Oct 02 13:04:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:20.979 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea711393-5d07-4b6f-8864-775ff1e65eed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:20 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Oct 02 13:04:20 compute-1 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000b8.scope: Consumed 3.219s CPU time.
Oct 02 13:04:21 compute-1 systemd-machined[188247]: Machine qemu-87-instance-000000b8 terminated.
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.011 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98c05b4e-2980-4136-a584-7314e4ba1b8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.014 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f99427fc-912c-4bdf-a231-bcda4fea0f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.042 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba32e9f-868f-432e-8577-95c46f101082]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:21 compute-1 kernel: tapb3e07905-2e: entered promiscuous mode
Oct 02 13:04:21 compute-1 kernel: tapb3e07905-2e (unregistering): left promiscuous mode
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.064 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0a52686c-ffee-4cd5-a8a8-471f77621cd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb07d0c6a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:48:08'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 874, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 228], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810526, 'reachable_time': 40282, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300381, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.074 2 INFO nova.virt.libvirt.driver [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Instance destroyed successfully.
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.074 2 DEBUG nova.objects.instance [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'resources' on Instance uuid ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.080 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[02264d18-660e-453b-9705-98faadfa8378]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810541, 'tstamp': 810541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300383, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb07d0c6a-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810543, 'tstamp': 810543}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300383, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.082 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.088 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07d0c6a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb07d0c6a-50, col_values=(('external_ids', {'iface-id': '874a9fce-3ef5-498a-a977-43087c73ea46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.089 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.111 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:04:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:21.113 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.181 2 DEBUG nova.virt.libvirt.vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:04:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1005291499',display_name='tempest-ServersNegativeTestJSON-server-1005291499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1005291499',id=184,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-vr00cw3i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:04:19Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ab1d03d2-f5f1-479c-9c49-4519bb6f6b53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG nova.network.os_vif_util [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "address": "fa:16:3e:6b:50:dc", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e07905-2e", "ovs_interfaceid": "b3e07905-2e01-4835-9249-b8d3a5c67f76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG nova.network.os_vif_util [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.182 2 DEBUG os_vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e07905-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.189 2 INFO os_vif [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:50:dc,bridge_name='br-int',has_traffic_filtering=True,id=b3e07905-2e01-4835-9249-b8d3a5c67f76,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e07905-2e')
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.294 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.295 2 DEBUG oslo_concurrency.lockutils [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.295 2 DEBUG nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.295 2 WARNING nova.compute.manager [req-e9dfd121-1133-47d8-a83c-3c9e4bb6b528 req-d228f0e3-ba4b-424a-ae46-10b474a82407 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state active and task_state deleting.
Oct 02 13:04:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:21.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:21 compute-1 ceph-mon[80926]: pgmap v2741: 305 pgs: 305 active+clean; 414 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.6 MiB/s wr, 278 op/s
Oct 02 13:04:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1231900243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:21.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.644 2 INFO nova.virt.libvirt.driver [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deleting instance files /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_del
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.644 2 INFO nova.virt.libvirt.driver [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deletion of /var/lib/nova/instances/ab1d03d2-f5f1-479c-9c49-4519bb6f6b53_del complete
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.703 2 INFO nova.compute.manager [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 0.87 seconds to destroy the instance on the hypervisor.
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.704 2 DEBUG oslo.service.loopingcall [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.704 2 DEBUG nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:04:21 compute-1 nova_compute[230518]: 2025-10-02 13:04:21.705 2 DEBUG nova.network.neutron [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:04:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4195261917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2877419818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1343463159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.556 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.557 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.558 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.558 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.974 2 DEBUG nova.network.neutron [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:04:22 compute-1 nova_compute[230518]: 2025-10-02 13:04:22.995 2 INFO nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Took 1.29 seconds to deallocate network for instance.
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.045 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.045 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.125 2 DEBUG nova.compute.manager [req-7a5e65bd-a1e6-44b3-b041-cd1d608ef149 req-62b24368-e034-4fb8-917d-26b15c3963c3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-deleted-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.132 2 DEBUG oslo_concurrency.processutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.369 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.375 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.376 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.377 2 WARNING nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-unplugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state deleted and task_state None.
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.377 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.378 2 DEBUG oslo_concurrency.lockutils [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.379 2 DEBUG nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] No waiting events found dispatching network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.379 2 WARNING nova.compute.manager [req-b7dee062-77b3-4aca-ad9d-d079e436ee5d req-19e2295b-60be-4db1-b480-51f7260c5f82 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Received unexpected event network-vif-plugged-b3e07905-2e01-4835-9249-b8d3a5c67f76 for instance with vm_state deleted and task_state None.
Oct 02 13:04:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:04:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/411876806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.552 2 DEBUG oslo_concurrency.processutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.557 2 DEBUG nova.compute.provider_tree [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.576 2 DEBUG nova.scheduler.client.report [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.600 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.638 2 INFO nova.scheduler.client.report [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Deleted allocations for instance ab1d03d2-f5f1-479c-9c49-4519bb6f6b53
Oct 02 13:04:23 compute-1 ceph-mon[80926]: pgmap v2742: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.0 MiB/s wr, 352 op/s
Oct 02 13:04:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/411876806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:23 compute-1 nova_compute[230518]: 2025-10-02 13:04:23.709 2 DEBUG oslo_concurrency.lockutils [None req-5f1eff2c-77db-4250-9d8a-6a62928cb08f 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ab1d03d2-f5f1-479c-9c49-4519bb6f6b53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:24 compute-1 nova_compute[230518]: 2025-10-02 13:04:24.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:24 compute-1 nova_compute[230518]: 2025-10-02 13:04:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:24 compute-1 nova_compute[230518]: 2025-10-02 13:04:24.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:25 compute-1 nova_compute[230518]: 2025-10-02 13:04:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:25.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:25 compute-1 ceph-mon[80926]: pgmap v2743: 305 pgs: 305 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.4 MiB/s wr, 334 op/s
Oct 02 13:04:25 compute-1 podman[300428]: 2025-10-02 13:04:25.820401225 +0000 UTC m=+0.061347057 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:04:25 compute-1 podman[300427]: 2025-10-02 13:04:25.820745005 +0000 UTC m=+0.065974910 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 13:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.960 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:04:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:04:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:04:26.114 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:04:26 compute-1 nova_compute[230518]: 2025-10-02 13:04:26.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1733589827' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:27 compute-1 nova_compute[230518]: 2025-10-02 13:04:27.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:27.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:27 compute-1 ceph-mon[80926]: pgmap v2744: 305 pgs: 305 active+clean; 337 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.7 MiB/s wr, 392 op/s
Oct 02 13:04:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:29.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:29.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:29 compute-1 ceph-mon[80926]: pgmap v2745: 305 pgs: 305 active+clean; 328 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.5 MiB/s wr, 352 op/s
Oct 02 13:04:29 compute-1 nova_compute[230518]: 2025-10-02 13:04:29.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:30 compute-1 nova_compute[230518]: 2025-10-02 13:04:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:30 compute-1 nova_compute[230518]: 2025-10-02 13:04:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:04:30 compute-1 nova_compute[230518]: 2025-10-02 13:04:30.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:04:31 compute-1 nova_compute[230518]: 2025-10-02 13:04:31.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:31.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:31.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:31 compute-1 ceph-mon[80926]: pgmap v2746: 305 pgs: 305 active+clean; 342 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 300 op/s
Oct 02 13:04:33 compute-1 nova_compute[230518]: 2025-10-02 13:04:33.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:04:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:33.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:33.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:33 compute-1 ceph-mon[80926]: pgmap v2747: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 269 op/s
Oct 02 13:04:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:34 compute-1 nova_compute[230518]: 2025-10-02 13:04:34.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:35.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:35.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:35 compute-1 ceph-mon[80926]: pgmap v2748: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Oct 02 13:04:36 compute-1 nova_compute[230518]: 2025-10-02 13:04:36.071 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410261.0696728, ab1d03d2-f5f1-479c-9c49-4519bb6f6b53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:36 compute-1 nova_compute[230518]: 2025-10-02 13:04:36.071 2 INFO nova.compute.manager [-] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] VM Stopped (Lifecycle Event)
Oct 02 13:04:36 compute-1 nova_compute[230518]: 2025-10-02 13:04:36.095 2 DEBUG nova.compute.manager [None req-1e77796d-45c3-4437-a95a-de7582c23f2f - - - - - -] [instance: ab1d03d2-f5f1-479c-9c49-4519bb6f6b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:36 compute-1 nova_compute[230518]: 2025-10-02 13:04:36.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:37.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:37.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:37 compute-1 ceph-mon[80926]: pgmap v2749: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 863 KiB/s rd, 4.3 MiB/s wr, 182 op/s
Oct 02 13:04:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:39 compute-1 nova_compute[230518]: 2025-10-02 13:04:39.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:39 compute-1 ceph-mon[80926]: pgmap v2750: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 3.0 MiB/s wr, 124 op/s
Oct 02 13:04:41 compute-1 nova_compute[230518]: 2025-10-02 13:04:41.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:41.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:41 compute-1 ceph-mon[80926]: pgmap v2751: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct 02 13:04:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:43.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:43 compute-1 ceph-mon[80926]: pgmap v2752: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 196 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Oct 02 13:04:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/341473276' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:44 compute-1 nova_compute[230518]: 2025-10-02 13:04:44.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:45.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:45 compute-1 podman[300467]: 2025-10-02 13:04:45.837558042 +0000 UTC m=+0.088761150 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 02 13:04:45 compute-1 podman[300466]: 2025-10-02 13:04:45.851957369 +0000 UTC m=+0.110080632 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:04:45 compute-1 ceph-mon[80926]: pgmap v2753: 305 pgs: 305 active+clean; 327 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 15 op/s
Oct 02 13:04:46 compute-1 nova_compute[230518]: 2025-10-02 13:04:46.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:47.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:47.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:47 compute-1 ceph-mon[80926]: pgmap v2754: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 32 KiB/s wr, 29 op/s
Oct 02 13:04:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3717693724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:48 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Oct 02 13:04:49 compute-1 ceph-mon[80926]: pgmap v2755: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 8.5 KiB/s wr, 28 op/s
Oct 02 13:04:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2958912710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:04:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:49.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:49 compute-1 nova_compute[230518]: 2025-10-02 13:04:49.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:51 compute-1 nova_compute[230518]: 2025-10-02 13:04:51.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:51 compute-1 ceph-mon[80926]: pgmap v2756: 305 pgs: 305 active+clean; 310 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.2 MiB/s wr, 46 op/s
Oct 02 13:04:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:51.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:51.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2652707074' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2608477789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:53.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:53 compute-1 ceph-mon[80926]: pgmap v2757: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct 02 13:04:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/726833015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/495404971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:04:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:54 compute-1 nova_compute[230518]: 2025-10-02 13:04:54.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:55 compute-1 sudo[300511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:55 compute-1 sudo[300511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-1 sudo[300511]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-1 sudo[300536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:04:55 compute-1 sudo[300536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-1 sudo[300536]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-1 sudo[300561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:04:55 compute-1 sudo[300561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-1 sudo[300561]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:55.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:55 compute-1 sudo[300586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:04:55 compute-1 sudo[300586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:04:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:55 compute-1 sudo[300586]: pam_unix(sudo:session): session closed for user root
Oct 02 13:04:56 compute-1 nova_compute[230518]: 2025-10-02 13:04:56.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:04:56 compute-1 ceph-mon[80926]: pgmap v2758: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Oct 02 13:04:56 compute-1 podman[300642]: 2025-10-02 13:04:56.809156961 +0000 UTC m=+0.065274119 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_id=iscsid)
Oct 02 13:04:56 compute-1 podman[300643]: 2025-10-02 13:04:56.824874699 +0000 UTC m=+0.070316106 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:04:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:57.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:04:57 compute-1 ceph-mon[80926]: pgmap v2759: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:04:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:04:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:04:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:57.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.678 2 INFO nova.compute.manager [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Pausing
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.679 2 DEBUG nova.objects.instance [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'flavor' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.715 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410298.7151353, ea034622-0a48-4de6-8d68-0f2240b54214 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.716 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Paused (Lifecycle Event)
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.719 2 DEBUG nova.compute.manager [None req-1d835b92-7838-4d05-a458-99435dafbaff 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.759 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.765 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:04:58 compute-1 nova_compute[230518]: 2025-10-02 13:04:58.800 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (pausing). Skip.
Oct 02 13:04:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:04:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:04:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:04:59.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:04:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:04:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:04:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:04:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:04:59 compute-1 ceph-mon[80926]: pgmap v2760: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.6 MiB/s wr, 69 op/s
Oct 02 13:04:59 compute-1 nova_compute[230518]: 2025-10-02 13:04:59.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.148 2 INFO nova.compute.manager [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Unpausing
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.149 2 DEBUG nova.objects.instance [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'flavor' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.203 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410301.2030923, ea034622-0a48-4de6-8d68-0f2240b54214 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.203 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Resumed (Lifecycle Event)
Oct 02 13:05:01 compute-1 virtqemud[230067]: argument unsupported: QEMU guest agent is not configured
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.209 2 DEBUG nova.virt.libvirt.guest [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.209 2 DEBUG nova.compute.manager [None req-9d4d0b89-6387-44be-856a-80f0059cf9f0 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.262 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.265 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:05:01 compute-1 nova_compute[230518]: 2025-10-02 13:05:01.301 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] During sync_power_state the instance has a pending task (unpausing). Skip.
Oct 02 13:05:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:02 compute-1 ceph-mon[80926]: pgmap v2761: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 120 op/s
Oct 02 13:05:03 compute-1 ceph-mon[80926]: pgmap v2762: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 185 op/s
Oct 02 13:05:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:03.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:04 compute-1 sudo[300682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:05:04 compute-1 sudo[300682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:04 compute-1 sudo[300682]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:04 compute-1 sudo[300707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:05:04 compute-1 sudo[300707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:05:04 compute-1 sudo[300707]: pam_unix(sudo:session): session closed for user root
Oct 02 13:05:04 compute-1 nova_compute[230518]: 2025-10-02 13:05:04.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:05.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:05 compute-1 ceph-mon[80926]: pgmap v2763: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:05 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:05:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:05:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2768741561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:05:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:06 compute-1 nova_compute[230518]: 2025-10-02 13:05:06.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:07 compute-1 ceph-mon[80926]: pgmap v2764: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 158 op/s
Oct 02 13:05:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:07.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:09.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:09 compute-1 ceph-mon[80926]: pgmap v2765: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 144 op/s
Oct 02 13:05:09 compute-1 nova_compute[230518]: 2025-10-02 13:05:09.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:11 compute-1 nova_compute[230518]: 2025-10-02 13:05:11.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:11.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:11.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:11 compute-1 ceph-mon[80926]: pgmap v2766: 305 pgs: 305 active+clean; 397 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 171 op/s
Oct 02 13:05:13 compute-1 ceph-mon[80926]: pgmap v2767: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Oct 02 13:05:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:13.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:13.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:14 compute-1 nova_compute[230518]: 2025-10-02 13:05:14.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:15 compute-1 ceph-mon[80926]: pgmap v2768: 305 pgs: 305 active+clean; 420 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 4.2 MiB/s wr, 97 op/s
Oct 02 13:05:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:15.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:16 compute-1 nova_compute[230518]: 2025-10-02 13:05:16.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:16 compute-1 podman[300733]: 2025-10-02 13:05:16.796013284 +0000 UTC m=+0.049557271 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 02 13:05:16 compute-1 podman[300732]: 2025-10-02 13:05:16.824076436 +0000 UTC m=+0.078969705 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.093 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.094 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2041434741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.510 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:17.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.595 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.596 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:05:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.771 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.772 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4102MB free_disk=20.806453704833984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.772 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.773 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.886 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance ea034622-0a48-4de6-8d68-0f2240b54214 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.887 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:05:17 compute-1 ceph-mon[80926]: pgmap v2769: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 720 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2041434741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:17 compute-1 nova_compute[230518]: 2025-10-02 13:05:17.927 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/62137085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:18 compute-1 nova_compute[230518]: 2025-10-02 13:05:18.349 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:18 compute-1 nova_compute[230518]: 2025-10-02 13:05:18.355 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:18 compute-1 nova_compute[230518]: 2025-10-02 13:05:18.369 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:18 compute-1 nova_compute[230518]: 2025-10-02 13:05:18.393 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:05:18 compute-1 nova_compute[230518]: 2025-10-02 13:05:18.393 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/62137085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.147 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.148 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.179 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.309 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.309 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.314 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.314 2 INFO nova.compute.claims [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.446 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:19.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/376673178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.848 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.856 2 DEBUG nova.compute.provider_tree [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.936 2 DEBUG nova.scheduler.client.report [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.971 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:19 compute-1 nova_compute[230518]: 2025-10-02 13:05:19.972 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.017 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.017 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.037 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.051 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:05:20 compute-1 ceph-mon[80926]: pgmap v2770: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct 02 13:05:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/376673178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.133 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.134 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.135 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating image(s)
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.159 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.185 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.212 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.215 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.277 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.278 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.278 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.279 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.304 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.308 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.629 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.700 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] resizing rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.796 2 DEBUG nova.objects.instance [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'migration_context' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.817 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.818 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Ensure instance console log exists: /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.818 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.819 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:20 compute-1 nova_compute[230518]: 2025-10-02 13:05:20.819 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:21 compute-1 nova_compute[230518]: 2025-10-02 13:05:21.053 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Successfully created port: 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:05:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/634107591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:21 compute-1 ceph-mon[80926]: pgmap v2771: 305 pgs: 305 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 721 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Oct 02 13:05:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/470398889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:21 compute-1 nova_compute[230518]: 2025-10-02 13:05:21.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:21.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:21.898 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:21 compute-1 nova_compute[230518]: 2025-10-02 13:05:21.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:21 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:21.900 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:05:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/511956726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:22 compute-1 nova_compute[230518]: 2025-10-02 13:05:22.418 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Successfully updated port: 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:05:22 compute-1 nova_compute[230518]: 2025-10-02 13:05:22.448 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:22 compute-1 nova_compute[230518]: 2025-10-02 13:05:22.449 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquired lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:22 compute-1 nova_compute[230518]: 2025-10-02 13:05:22.449 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:05:22 compute-1 nova_compute[230518]: 2025-10-02 13:05:22.776 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:05:23 compute-1 ceph-mon[80926]: pgmap v2772: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 3.0 MiB/s wr, 107 op/s
Oct 02 13:05:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/588557104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.393 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.394 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG nova.compute.manager [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-changed-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG nova.compute.manager [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Refreshing instance network info cache due to event network-changed-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:23 compute-1 nova_compute[230518]: 2025-10-02 13:05:23.478 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:23.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/647629771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.359 2 DEBUG nova.network.neutron [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.404 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Releasing lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance network_info: |[{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.405 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Refreshing network info cache for port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.408 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start _get_guest_xml network_info=[{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.413 2 WARNING nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.416 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.417 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.419 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.420 2 DEBUG nova.virt.libvirt.host [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.421 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.422 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.423 2 DEBUG nova.virt.hardware [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.425 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:05:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2669593725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.862 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.897 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:24 compute-1 nova_compute[230518]: 2025-10-02 13:05:24.902 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:25 compute-1 ceph-mon[80926]: pgmap v2773: 305 pgs: 305 active+clean; 461 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 1000 KiB/s wr, 46 op/s
Oct 02 13:05:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2669593725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3473546659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2114044563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:05:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/734581899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.422 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.424 2 DEBUG nova.virt.libvirt.vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-20607154
82-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:20Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.424 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.425 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.426 2 DEBUG nova.objects.instance [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'pci_devices' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.439 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <uuid>f320bcaa-1dfe-4d91-bd4a-05ed389402a7</uuid>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <name>instance-000000bb</name>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:name>tempest-TestServerMultinode-server-1687487968</nova:name>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:05:24</nova:creationTime>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:user uuid="7fb7e45069d34870bc5f4fa70bd8c6de">tempest-TestServerMultinode-2060715482-project-admin</nova:user>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:project uuid="19365f54974d4109ae80bc13ac9ba55a">tempest-TestServerMultinode-2060715482</nova:project>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <nova:port uuid="5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3">
Oct 02 13:05:25 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <system>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="serial">f320bcaa-1dfe-4d91-bd4a-05ed389402a7</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="uuid">f320bcaa-1dfe-4d91-bd4a-05ed389402a7</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </system>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <os>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </os>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <features>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </features>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk">
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </source>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config">
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </source>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:05:25 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:ee:3f:1d"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <target dev="tap5b77d75e-cf"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/console.log" append="off"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <video>
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </video>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:05:25 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:05:25 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:05:25 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:05:25 compute-1 nova_compute[230518]: </domain>
Oct 02 13:05:25 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Preparing to wait for external event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.440 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.441 2 DEBUG nova.virt.libvirt.vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:05:20Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.441 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG nova.network.os_vif_util [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG os_vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b77d75e-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.446 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b77d75e-cf, col_values=(('external_ids', {'iface-id': '5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3f:1d', 'vm-uuid': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:25 compute-1 NetworkManager[44960]: <info>  [1759410325.4483] manager: (tap5b77d75e-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.455 2 INFO os_vif [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf')
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.506 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] No VIF found with MAC fa:16:3e:ee:3f:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.507 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Using config drive
Oct 02 13:05:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:25.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.528 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.578 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.579 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.579 2 INFO nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shelving
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.602 2 DEBUG nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:05:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.815 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updated VIF entry in instance network info cache for port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.816 2 DEBUG nova.network.neutron [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [{"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:25 compute-1 nova_compute[230518]: 2025-10-02 13:05:25.866 2 DEBUG oslo_concurrency.lockutils [req-477ead2f-c6df-4f99-ac3f-fb7196b95fd2 req-e95e28fe-0f85-49a9-854a-d5e419834898 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-f320bcaa-1dfe-4d91-bd4a-05ed389402a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.961 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.104 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Creating config drive at /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.109 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0c0m_um2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.257 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0c0m_um2" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.288 2 DEBUG nova.storage.rbd_utils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] rbd image f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:05:26 compute-1 nova_compute[230518]: 2025-10-02 13:05:26.291 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/734581899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.278 2 DEBUG oslo_concurrency.processutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config f320bcaa-1dfe-4d91-bd4a-05ed389402a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.279 2 INFO nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deleting local config drive /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7/disk.config because it was imported into RBD.
Oct 02 13:05:27 compute-1 kernel: tap5b77d75e-cf: entered promiscuous mode
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.3312] manager: (tap5b77d75e-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Oct 02 13:05:27 compute-1 ovn_controller[129257]: 2025-10-02T13:05:27Z|00759|binding|INFO|Claiming lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for this chassis.
Oct 02 13:05:27 compute-1 ovn_controller[129257]: 2025-10-02T13:05:27Z|00760|binding|INFO|5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3: Claiming fa:16:3e:ee:3f:1d 10.100.0.12
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.355 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.356 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 bound to our chassis
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.358 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff420d1-b104-4cc3-84c0-1a56c0b4ed5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.370 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap962339a8-a1 in ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.372 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap962339a8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.372 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7e06b98b-81d6-4fb6-b625-95ad49660f05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.373 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f04c84-7ae1-458a-8b6d-55b79686de40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 systemd-machined[188247]: New machine qemu-88-instance-000000bb.
Oct 02 13:05:27 compute-1 systemd-udevd[301164]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.387 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[2afba9c9-b6f6-43b0-91af-94b226404dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.4004] device (tap5b77d75e-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.4013] device (tap5b77d75e-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:05:27 compute-1 systemd[1]: Started Virtual Machine qemu-88-instance-000000bb.
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.412 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3715af1-4dde-41e3-9fb0-30e1b5b05f0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_controller[129257]: 2025-10-02T13:05:27Z|00761|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 ovn-installed in OVS
Oct 02 13:05:27 compute-1 ovn_controller[129257]: 2025-10-02T13:05:27Z|00762|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 up in Southbound
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.440 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecab23a-9e6c-406c-8283-d5492c9e1e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.4483] manager: (tap962339a8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/354)
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.447 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[48440f77-25a5-4279-8430-a4e7a2fe3a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 podman[301143]: 2025-10-02 13:05:27.456201817 +0000 UTC m=+0.089639946 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:05:27 compute-1 podman[301144]: 2025-10-02 13:05:27.479397818 +0000 UTC m=+0.112757744 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.482 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c02ac312-485f-428c-9f72-3e1cbf8d53f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.485 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d85d3aff-3ec0-4b97-be0a-8efbfe6884f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.5154] device (tap962339a8-a0): carrier: link connected
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.523 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[35031c23-a080-4ffe-8832-5f4210973b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:27.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.539 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a58d44ed-c082-4bb0-bd85-5addb9912f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819107, 'reachable_time': 22005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301213, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.555 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cf247bd0-b207-4955-9823-6f408d0e29d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:f8da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 819107, 'tstamp': 819107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301214, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.575 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a9185e8b-5171-4a1c-a5ec-e548d4301a6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap962339a8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:f8:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819107, 'reachable_time': 22005, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301215, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.607 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee477276-72d5-450b-a99c-bed1a93e4a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.667 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[42876604-e3b6-419f-a216-ded0057e6e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.668 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.669 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.669 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap962339a8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 NetworkManager[44960]: <info>  [1759410327.6714] manager: (tap962339a8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Oct 02 13:05:27 compute-1 kernel: tap962339a8-a0: entered promiscuous mode
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap962339a8-a0, col_values=(('external_ids', {'iface-id': '95f6c57c-e568-4ed7-aa6a-02671a012e41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 ovn_controller[129257]: 2025-10-02T13:05:27Z|00763|binding|INFO|Releasing lport 95f6c57c-e568-4ed7-aa6a-02671a012e41 from this chassis (sb_readonly=0)
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.691 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.692 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0a7a76-b90b-4778-908f-194546dfc566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.693 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/962339a8-ad45-401e-ae58-50cd40858566.pid.haproxy
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 962339a8-ad45-401e-ae58-50cd40858566
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.695 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'env', 'PROCESS_TAG=haproxy-962339a8-ad45-401e-ae58-50cd40858566', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/962339a8-ad45-401e-ae58-50cd40858566.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:05:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:27 compute-1 ceph-mon[80926]: pgmap v2774: 305 pgs: 305 active+clean; 464 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 3.4 MiB/s wr, 93 op/s
Oct 02 13:05:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1079627697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3334999725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.791 2 DEBUG nova.compute.manager [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.791 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG oslo_concurrency.lockutils [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:27 compute-1 nova_compute[230518]: 2025-10-02 13:05:27.792 2 DEBUG nova.compute.manager [req-9a28b7c1-5cda-42cc-b24d-85e2b03bd204 req-77787c38-613e-45cc-ae50-da630f690b23 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Processing event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:05:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:27.902 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:28 compute-1 kernel: tap55d951c1-1c (unregistering): left promiscuous mode
Oct 02 13:05:28 compute-1 NetworkManager[44960]: <info>  [1759410328.0300] device (tap55d951c1-1c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 ovn_controller[129257]: 2025-10-02T13:05:28Z|00764|binding|INFO|Releasing lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 from this chassis (sb_readonly=0)
Oct 02 13:05:28 compute-1 ovn_controller[129257]: 2025-10-02T13:05:28Z|00765|binding|INFO|Setting lport 55d951c1-1ce9-4d4a-979c-9be9aef7e283 down in Southbound
Oct 02 13:05:28 compute-1 ovn_controller[129257]: 2025-10-02T13:05:28Z|00766|binding|INFO|Removing iface tap55d951c1-1c ovn-installed in OVS
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.049 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:64:21 10.100.0.3'], port_security=['fa:16:3e:e8:64:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ea034622-0a48-4de6-8d68-0f2240b54214', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '52dd3c4419794d0fbecd536c5088c60f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'df14d61b-9762-4791-8375-7e8d13f38de1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26295213-1e12-4cdb-92a9-b65812bf362e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=55d951c1-1ce9-4d4a-979c-9be9aef7e283) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 podman[301282]: 2025-10-02 13:05:28.07100518 +0000 UTC m=+0.061659627 container create ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:05:28 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Oct 02 13:05:28 compute-1 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000b5.scope: Consumed 16.912s CPU time.
Oct 02 13:05:28 compute-1 systemd[1]: Started libpod-conmon-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope.
Oct 02 13:05:28 compute-1 systemd-machined[188247]: Machine qemu-86-instance-000000b5 terminated.
Oct 02 13:05:28 compute-1 podman[301282]: 2025-10-02 13:05:28.041562656 +0000 UTC m=+0.032217133 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:05:28 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:05:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c7857d0ba775bc4a2874c70f85893a680df6dee09c2866dc2675cedacec2fde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:05:28 compute-1 podman[301282]: 2025-10-02 13:05:28.151419439 +0000 UTC m=+0.142073886 container init ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS)
Oct 02 13:05:28 compute-1 podman[301282]: 2025-10-02 13:05:28.157147456 +0000 UTC m=+0.147801903 container start ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : New worker (301313) forked
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : Loading success.
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.222 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 in datapath b07d0c6a-5988-4afb-b4ba-d4048578b224 unbound from our chassis
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.223 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b07d0c6a-5988-4afb-b4ba-d4048578b224, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.224 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e076f151-9085-4ba2-a9cd-d241c9d1df6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.225 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 namespace which is not needed anymore
Oct 02 13:05:28 compute-1 NetworkManager[44960]: <info>  [1759410328.2704] manager: (tap55d951c1-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : haproxy version is 2.8.14-c23fe91
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [NOTICE]   (299887) : path to executable is /usr/sbin/haproxy
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : Exiting Master process...
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : Exiting Master process...
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [ALERT]    (299887) : Current worker (299889) exited with code 143 (Terminated)
Oct 02 13:05:28 compute-1 neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224[299883]: [WARNING]  (299887) : All workers exited. Exiting... (0)
Oct 02 13:05:28 compute-1 systemd[1]: libpod-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope: Deactivated successfully.
Oct 02 13:05:28 compute-1 podman[301348]: 2025-10-02 13:05:28.362315781 +0000 UTC m=+0.049147587 container died 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:28 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2-userdata-shm.mount: Deactivated successfully.
Oct 02 13:05:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-afce76e5b3e048a73313ef5d27cfa735799b4e55d4b12602e339b371cc625d3b-merged.mount: Deactivated successfully.
Oct 02 13:05:28 compute-1 podman[301348]: 2025-10-02 13:05:28.402465539 +0000 UTC m=+0.089297345 container cleanup 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:05:28 compute-1 systemd[1]: libpod-conmon-90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2.scope: Deactivated successfully.
Oct 02 13:05:28 compute-1 podman[301379]: 2025-10-02 13:05:28.467215471 +0000 UTC m=+0.044348029 container remove 90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.472 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaa2383-38c3-40ca-8794-c1859c85a37e]: (4, ('Thu Oct  2 01:05:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2)\n90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2\nThu Oct  2 01:05:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 (90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2)\n90d5203870b0613c0c4c818e8590f1179c936cced172fb5de2ecedb862e11cd2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.474 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[205612de-adab-40b4-8519-32b631c520d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.477 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07d0c6a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 kernel: tapb07d0c6a-50: left promiscuous mode
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.509 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9bca379-6cc6-4497-8b61-91a9fdeb5bc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.519 2 DEBUG oslo_concurrency.lockutils [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.520 2 DEBUG nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.520 2 WARNING nova.compute.manager [req-072acbff-a8b9-4f70-8af1-370a6b498e2c req-eb19b526-4b62-4d6f-a65e-8e56871643c2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-unplugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state shelving.
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.531 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[868a4654-74f4-4175-b5b0-262ec228dd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.532 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[174d6d45-4ad4-4fee-824c-66d62ec0a0d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.552 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d53ccad-998c-4e12-abc7-901f85838963]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810516, 'reachable_time': 17181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301397, 'error': None, 'target': 'ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 systemd[1]: run-netns-ovnmeta\x2db07d0c6a\x2d5988\x2d4afb\x2db4ba\x2dd4048578b224.mount: Deactivated successfully.
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.555 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b07d0c6a-5988-4afb-b4ba-d4048578b224 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:05:28 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:28.555 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a0974-109c-4e8f-9979-9a142470872c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.582 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5817065, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.582 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Started (Lifecycle Event)
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.587 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.591 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.595 2 INFO nova.virt.libvirt.driver [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance spawned successfully.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.596 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.618 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance shutdown successfully after 3 seconds.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.627 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.628 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.629 2 DEBUG nova.objects.instance [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'numa_topology' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.638 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.643 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.644 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.644 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.645 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.646 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.647 2 DEBUG nova.virt.libvirt.driver [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.729 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.730 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5818174, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.730 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Paused (Lifecycle Event)
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.766 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.771 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410328.5897799, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.772 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Resumed (Lifecycle Event)
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.817 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.825 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.851 2 INFO nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 8.72 seconds to spawn the instance on the hypervisor.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.851 2 DEBUG nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.900 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:05:28 compute-1 nova_compute[230518]: 2025-10-02 13:05:28.969 2 INFO nova.compute.manager [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 9.69 seconds to build instance.
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.004 2 DEBUG oslo_concurrency.lockutils [None req-d7dbd128-af0b-4cae-863b-9047e738434b 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.422 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Beginning cold snapshot process
Oct 02 13:05:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.626 2 DEBUG nova.virt.libvirt.imagebackend [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 13:05:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:29 compute-1 ceph-mon[80926]: pgmap v2775: 305 pgs: 305 active+clean; 470 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 4.4 MiB/s wr, 92 op/s
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.928 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.929 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.929 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.930 2 DEBUG oslo_concurrency.lockutils [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.930 2 DEBUG nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:29 compute-1 nova_compute[230518]: 2025-10-02 13:05:29.930 2 WARNING nova.compute.manager [req-2b9ec5fb-4d7c-40dc-9520-ded9b3f9eab1 req-b1ba7a68-271d-4e63-b485-864f9473af16 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state None.
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.086 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.087 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.147 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] creating snapshot(6e91a9f1aeaf49a7b3f111e2432ffa25) on rbd image(ea034622-0a48-4de6-8d68-0f2240b54214_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.721 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.722 2 DEBUG oslo_concurrency.lockutils [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.722 2 DEBUG nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] No waiting events found dispatching network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:30 compute-1 nova_compute[230518]: 2025-10-02 13:05:30.722 2 WARNING nova.compute.manager [req-5fed1df9-bd72-447b-a4ff-f12830c56783 req-e7722747-fc17-4d93-ab45-e537e83445b8 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received unexpected event network-vif-plugged-55d951c1-1ce9-4d4a-979c-9be9aef7e283 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 13:05:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e354 e354: 3 total, 3 up, 3 in
Oct 02 13:05:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1962952617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:31 compute-1 nova_compute[230518]: 2025-10-02 13:05:31.324 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] cloning vms/ea034622-0a48-4de6-8d68-0f2240b54214_disk@6e91a9f1aeaf49a7b3f111e2432ffa25 to images/3560df73-c585-4179-87ca-fb0ca65743ee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:05:31 compute-1 nova_compute[230518]: 2025-10-02 13:05:31.513 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] flattening images/3560df73-c585-4179-87ca-fb0ca65743ee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:05:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:31.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:31.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:32 compute-1 nova_compute[230518]: 2025-10-02 13:05:32.160 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] removing snapshot(6e91a9f1aeaf49a7b3f111e2432ffa25) on rbd image(ea034622-0a48-4de6-8d68-0f2240b54214_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:05:32 compute-1 ceph-mon[80926]: pgmap v2776: 305 pgs: 305 active+clean; 447 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 569 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Oct 02 13:05:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1386256853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:05:32 compute-1 ceph-mon[80926]: osdmap e354: 3 total, 3 up, 3 in
Oct 02 13:05:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1531540515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e355 e355: 3 total, 3 up, 3 in
Oct 02 13:05:32 compute-1 nova_compute[230518]: 2025-10-02 13:05:32.364 2 DEBUG nova.storage.rbd_utils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] creating snapshot(snap) on rbd image(3560df73-c585-4179-87ca-fb0ca65743ee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:05:32 compute-1 nova_compute[230518]: 2025-10-02 13:05:32.800 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:32 compute-1 nova_compute[230518]: 2025-10-02 13:05:32.835 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:32 compute-1 nova_compute[230518]: 2025-10-02 13:05:32.835 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:05:33 compute-1 ceph-mon[80926]: pgmap v2778: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Oct 02 13:05:33 compute-1 ceph-mon[80926]: osdmap e355: 3 total, 3 up, 3 in
Oct 02 13:05:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e356 e356: 3 total, 3 up, 3 in
Oct 02 13:05:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:33.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:33.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:34 compute-1 ceph-mon[80926]: osdmap e356: 3 total, 3 up, 3 in
Oct 02 13:05:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:34 compute-1 nova_compute[230518]: 2025-10-02 13:05:34.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-1 ceph-mon[80926]: pgmap v2781: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.7 MiB/s wr, 291 op/s
Oct 02 13:05:35 compute-1 nova_compute[230518]: 2025-10-02 13:05:35.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:05:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:35.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:05:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:35 compute-1 nova_compute[230518]: 2025-10-02 13:05:35.923 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Snapshot image upload complete
Oct 02 13:05:35 compute-1 nova_compute[230518]: 2025-10-02 13:05:35.924 2 DEBUG nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.002 2 INFO nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Shelve offloading
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.009 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.011 2 DEBUG nova.compute.manager [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:36 compute-1 nova_compute[230518]: 2025-10-02 13:05:36.014 2 DEBUG nova.network.neutron [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:05:37 compute-1 ceph-mon[80926]: pgmap v2782: 305 pgs: 305 active+clean; 476 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 MiB/s rd, 7.2 MiB/s wr, 504 op/s
Oct 02 13:05:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:37.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-1 nova_compute[230518]: 2025-10-02 13:05:39.258 2 DEBUG nova.network.neutron [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:39 compute-1 nova_compute[230518]: 2025-10-02 13:05:39.322 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 e357: 3 total, 3 up, 3 in
Oct 02 13:05:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-1 ceph-mon[80926]: pgmap v2783: 305 pgs: 305 active+clean; 466 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 6.7 MiB/s wr, 512 op/s
Oct 02 13:05:39 compute-1 ceph-mon[80926]: osdmap e357: 3 total, 3 up, 3 in
Oct 02 13:05:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:05:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 58K writes, 229K keys, 58K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.05 MB/s
                                           Cumulative WAL: 58K writes, 21K syncs, 2.72 writes per sync, written: 0.22 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9446 writes, 37K keys, 9446 commit groups, 1.0 writes per commit group, ingest: 38.79 MB, 0.06 MB/s
                                           Interval WAL: 9446 writes, 3659 syncs, 2.58 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:05:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:39.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:39 compute-1 nova_compute[230518]: 2025-10-02 13:05:39.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:40 compute-1 nova_compute[230518]: 2025-10-02 13:05:40.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2968769541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.801 2 INFO nova.virt.libvirt.driver [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Instance destroyed successfully.
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.801 2 DEBUG nova.objects.instance [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lazy-loading 'resources' on Instance uuid ea034622-0a48-4de6-8d68-0f2240b54214 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.822 2 DEBUG nova.virt.libvirt.vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:03:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-881712342',display_name='tempest-ServersNegativeTestJSON-server-881712342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-881712342',id=181,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:04:02Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='52dd3c4419794d0fbecd536c5088c60f',ramdisk_id='',reservation_id='r-3dfuwrrh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1205930452',owner_user_name='tempest-ServersNegativeTestJSON-1205930452-project-member',shelved_at='2025-10-02T13:05:35.924546',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='3560df73-c585-4179-87ca-fb0ca65743ee'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:29Z,user_data=None,user_id='5206d24fd75a48758994a57e7fd259f2',uuid=ea034622-0a48-4de6-8d68-0f2240b54214,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.822 2 DEBUG nova.network.os_vif_util [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converting VIF {"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55d951c1-1c", "ovs_interfaceid": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.823 2 DEBUG nova.network.os_vif_util [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.823 2 DEBUG os_vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.825 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55d951c1-1c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.831 2 INFO os_vif [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:64:21,bridge_name='br-int',has_traffic_filtering=True,id=55d951c1-1ce9-4d4a-979c-9be9aef7e283,network=Network(b07d0c6a-5988-4afb-b4ba-d4048578b224),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55d951c1-1c')
Oct 02 13:05:41 compute-1 ceph-mon[80926]: pgmap v2785: 305 pgs: 305 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 MiB/s rd, 5.1 MiB/s wr, 376 op/s
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG nova.compute.manager [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Received event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG nova.compute.manager [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing instance network info cache due to event network-changed-55d951c1-1ce9-4d4a-979c-9be9aef7e283. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.997 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.998 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:05:41 compute-1 nova_compute[230518]: 2025-10-02 13:05:41.998 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Refreshing network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:05:42 compute-1 nova_compute[230518]: 2025-10-02 13:05:42.929 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deleting instance files /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del
Oct 02 13:05:42 compute-1 nova_compute[230518]: 2025-10-02 13:05:42.930 2 INFO nova.virt.libvirt.driver [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Deletion of /var/lib/nova/instances/ea034622-0a48-4de6-8d68-0f2240b54214_del complete
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.093 2 INFO nova.scheduler.client.report [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Deleted allocations for instance ea034622-0a48-4de6-8d68-0f2240b54214
Oct 02 13:05:43 compute-1 ovn_controller[129257]: 2025-10-02T13:05:43Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:3f:1d 10.100.0.12
Oct 02 13:05:43 compute-1 ovn_controller[129257]: 2025-10-02T13:05:43Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:3f:1d 10.100.0.12
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.166 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.166 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.288 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410328.2867503, ea034622-0a48-4de6-8d68-0f2240b54214 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.288 2 INFO nova.compute.manager [-] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] VM Stopped (Lifecycle Event)
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.315 2 DEBUG oslo_concurrency.processutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.353 2 DEBUG nova.compute.manager [None req-ede792f0-4128-4348-92e1-d3625ea500f8 - - - - - -] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:05:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4083777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.761 2 DEBUG oslo_concurrency.processutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:43.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.768 2 DEBUG nova.compute.provider_tree [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.810 2 DEBUG nova.scheduler.client.report [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.854 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:43 compute-1 nova_compute[230518]: 2025-10-02 13:05:43.935 2 DEBUG oslo_concurrency.lockutils [None req-c6bda231-16b8-4263-8c86-bee56aba72ad 5206d24fd75a48758994a57e7fd259f2 52dd3c4419794d0fbecd536c5088c60f - - default default] Lock "ea034622-0a48-4de6-8d68-0f2240b54214" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:43 compute-1 ceph-mon[80926]: pgmap v2786: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.6 MiB/s wr, 381 op/s
Oct 02 13:05:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4083777070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:44 compute-1 nova_compute[230518]: 2025-10-02 13:05:44.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4013527471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:45.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:45 compute-1 nova_compute[230518]: 2025-10-02 13:05:45.632 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updated VIF entry in instance network info cache for port 55d951c1-1ce9-4d4a-979c-9be9aef7e283. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:05:45 compute-1 nova_compute[230518]: 2025-10-02 13:05:45.632 2 DEBUG nova.network.neutron [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: ea034622-0a48-4de6-8d68-0f2240b54214] Updating instance_info_cache with network_info: [{"id": "55d951c1-1ce9-4d4a-979c-9be9aef7e283", "address": "fa:16:3e:e8:64:21", "network": {"id": "b07d0c6a-5988-4afb-b4ba-d4048578b224", "bridge": null, "label": "tempest-ServersNegativeTestJSON-2003673620-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "52dd3c4419794d0fbecd536c5088c60f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap55d951c1-1c", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:45 compute-1 nova_compute[230518]: 2025-10-02 13:05:45.684 2 DEBUG oslo_concurrency.lockutils [req-26f90933-a423-4dbe-9f24-de501a2cb1a7 req-300c737a-a5cb-4e25-b59b-54a3c2a15934 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-ea034622-0a48-4de6-8d68-0f2240b54214" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:05:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:46 compute-1 ceph-mon[80926]: pgmap v2787: 305 pgs: 305 active+clean; 417 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 5.9 MiB/s wr, 338 op/s
Oct 02 13:05:46 compute-1 nova_compute[230518]: 2025-10-02 13:05:46.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:47 compute-1 ceph-mon[80926]: pgmap v2788: 305 pgs: 305 active+clean; 352 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 259 op/s
Oct 02 13:05:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:47.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:47.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:47 compute-1 podman[301585]: 2025-10-02 13:05:47.809926221 +0000 UTC m=+0.059202701 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:47 compute-1 podman[301584]: 2025-10-02 13:05:47.836626471 +0000 UTC m=+0.088493442 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:05:49 compute-1 ceph-mon[80926]: pgmap v2789: 305 pgs: 305 active+clean; 326 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 735 KiB/s rd, 5.0 MiB/s wr, 221 op/s
Oct 02 13:05:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:49 compute-1 nova_compute[230518]: 2025-10-02 13:05:49.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/72459278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:51 compute-1 ceph-mon[80926]: pgmap v2790: 305 pgs: 305 active+clean; 313 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 692 KiB/s rd, 4.6 MiB/s wr, 221 op/s
Oct 02 13:05:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:51.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:51 compute-1 nova_compute[230518]: 2025-10-02 13:05:51.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.664 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.664 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.665 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.666 2 INFO nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Terminating instance
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.667 2 DEBUG nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:05:52 compute-1 kernel: tap5b77d75e-cf (unregistering): left promiscuous mode
Oct 02 13:05:52 compute-1 NetworkManager[44960]: <info>  [1759410352.7441] device (tap5b77d75e-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00767|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=0)
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00768|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down in Southbound
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00769|binding|INFO|Removing iface tap5b77d75e-cf ovn-installed in OVS
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.759 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.760 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.761 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.762 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d5a1a1-fd1f-4231-b1ee-6de233cc800a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.763 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 namespace which is not needed anymore
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Oct 02 13:05:52 compute-1 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000bb.scope: Consumed 13.718s CPU time.
Oct 02 13:05:52 compute-1 systemd-machined[188247]: Machine qemu-88-instance-000000bb terminated.
Oct 02 13:05:52 compute-1 kernel: tap5b77d75e-cf: entered promiscuous mode
Oct 02 13:05:52 compute-1 systemd-udevd[301634]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:05:52 compute-1 NetworkManager[44960]: <info>  [1759410352.8846] manager: (tap5b77d75e-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/357)
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00770|binding|INFO|Claiming lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for this chassis.
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00771|binding|INFO|5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3: Claiming fa:16:3e:ee:3f:1d 10.100.0.12
Oct 02 13:05:52 compute-1 kernel: tap5b77d75e-cf (unregistering): left promiscuous mode
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.899 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00772|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 ovn-installed in OVS
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00773|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 up in Southbound
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00774|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=1)
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00775|binding|INFO|Removing iface tap5b77d75e-cf ovn-installed in OVS
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00776|if_status|INFO|Dropped 4 log messages in last 1464 seconds (most recently, 1464 seconds ago) due to excessive rate
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00777|if_status|INFO|Not setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down as sb is readonly
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00778|binding|INFO|Releasing lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 from this chassis (sb_readonly=1)
Oct 02 13:05:52 compute-1 ovn_controller[129257]: 2025-10-02T13:05:52Z|00779|binding|INFO|Setting lport 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 down in Southbound
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.915 2 INFO nova.virt.libvirt.driver [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Instance destroyed successfully.
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.916 2 DEBUG nova.objects.instance [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lazy-loading 'resources' on Instance uuid f320bcaa-1dfe-4d91-bd4a-05ed389402a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:52.925 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3f:1d 10.100.0.12'], port_security=['fa:16:3e:ee:3f:1d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f320bcaa-1dfe-4d91-bd4a-05ed389402a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-962339a8-ad45-401e-ae58-50cd40858566', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '19365f54974d4109ae80bc13ac9ba55a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67af92b3-63f6-4f5d-8022-4679fd3c3d0b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91847efc-0e01-4780-b433-994cc6662f15, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.939 2 DEBUG nova.virt.libvirt.vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1687487968',display_name='tempest-TestServerMultinode-server-1687487968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testservermultinode-server-1687487968',id=187,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:05:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='19365f54974d4109ae80bc13ac9ba55a',ramdisk_id='',reservation_id='r-6z94ee26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-2060715482',owner_user_name='tempest-TestServerMultinode-2060715482-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:05:28Z,user_data=None,user_id='7fb7e45069d34870bc5f4fa70bd8c6de',uuid=f320bcaa-1dfe-4d91-bd4a-05ed389402a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.940 2 DEBUG nova.network.os_vif_util [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converting VIF {"id": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "address": "fa:16:3e:ee:3f:1d", "network": {"id": "962339a8-ad45-401e-ae58-50cd40858566", "bridge": "br-int", "label": "tempest-TestServerMultinode-2125814298-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6ffb4bd012a4aa2ace5c0158f51f8b0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b77d75e-cf", "ovs_interfaceid": "5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:05:52 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : haproxy version is 2.8.14-c23fe91
Oct 02 13:05:52 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [NOTICE]   (301310) : path to executable is /usr/sbin/haproxy
Oct 02 13:05:52 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [WARNING]  (301310) : Exiting Master process...
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.940 2 DEBUG nova.network.os_vif_util [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.941 2 DEBUG os_vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:05:52 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [ALERT]    (301310) : Current worker (301313) exited with code 143 (Terminated)
Oct 02 13:05:52 compute-1 neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566[301303]: [WARNING]  (301310) : All workers exited. Exiting... (0)
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 systemd[1]: libpod-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope: Deactivated successfully.
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b77d75e-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:52 compute-1 nova_compute[230518]: 2025-10-02 13:05:52.949 2 INFO os_vif [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3f:1d,bridge_name='br-int',has_traffic_filtering=True,id=5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3,network=Network(962339a8-ad45-401e-ae58-50cd40858566),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b77d75e-cf')
Oct 02 13:05:52 compute-1 podman[301653]: 2025-10-02 13:05:52.951369182 +0000 UTC m=+0.099456161 container died ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.206 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.207 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.208 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.209 2 DEBUG oslo_concurrency.lockutils [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.210 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:53 compute-1 nova_compute[230518]: 2025-10-02 13:05:53.211 2 DEBUG nova.compute.manager [req-2a4f4d1c-dace-45a1-b4df-309c016d659b req-54bb6e2c-e8af-4cbe-ac2e-8a950cc18a6d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:05:53 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2-userdata-shm.mount: Deactivated successfully.
Oct 02 13:05:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-1c7857d0ba775bc4a2874c70f85893a680df6dee09c2866dc2675cedacec2fde-merged.mount: Deactivated successfully.
Oct 02 13:05:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:53 compute-1 ceph-mon[80926]: pgmap v2791: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 3.9 MiB/s wr, 184 op/s
Oct 02 13:05:53 compute-1 podman[301653]: 2025-10-02 13:05:53.737925741 +0000 UTC m=+0.886012740 container cleanup ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:05:53 compute-1 systemd[1]: libpod-conmon-ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2.scope: Deactivated successfully.
Oct 02 13:05:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:54 compute-1 podman[301706]: 2025-10-02 13:05:54.289440126 +0000 UTC m=+0.519280455 container remove ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.295 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c479f57-9960-45e0-98e7-0b93b925a654]: (4, ('Thu Oct  2 01:05:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2)\nae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2\nThu Oct  2 01:05:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 (ae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2)\nae7e10fc83d27217fb95df7503cf33acab1b5a815324c9e58aea8051906e51c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.297 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0213bb-d1a7-420e-ad60-26bdb90681f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.298 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap962339a8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:54 compute-1 kernel: tap962339a8-a0: left promiscuous mode
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.323 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2808f2-33ce-42cc-a360-3205ea90da9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.347 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[49c9eb82-b444-4137-b4d8-873d2560c43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.349 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0b4fd0-ce19-48eb-b0c6-da5fca0fd1b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.377 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[402a32c3-0027-4c79-a365-6b911ffbc431]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 819099, 'reachable_time': 19728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301722, 'error': None, 'target': 'ovnmeta-962339a8-ad45-401e-ae58-50cd40858566', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 systemd[1]: run-netns-ovnmeta\x2d962339a8\x2dad45\x2d401e\x2dae58\x2d50cd40858566.mount: Deactivated successfully.
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.380 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-962339a8-ad45-401e-ae58-50cd40858566 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.380 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bab1575c-c8b8-403d-a7ac-3449517dcc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.384 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.385 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.386 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[80525358-b6b0-49bd-b888-9370cd93b678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.387 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 in datapath 962339a8-ad45-401e-ae58-50cd40858566 unbound from our chassis
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.389 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 962339a8-ad45-401e-ae58-50cd40858566, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:05:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:05:54.389 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d711d0-486a-48f6-bcfd-f2be52ba64bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:05:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/975145866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.659 2 INFO nova.virt.libvirt.driver [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deleting instance files /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_del
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.660 2 INFO nova.virt.libvirt.driver [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deletion of /var/lib/nova/instances/f320bcaa-1dfe-4d91-bd4a-05ed389402a7_del complete
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.774 2 INFO nova.compute.manager [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 2.11 seconds to destroy the instance on the hypervisor.
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.775 2 DEBUG oslo.service.loopingcall [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.775 2 DEBUG nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.776 2 DEBUG nova.network.neutron [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:05:54 compute-1 nova_compute[230518]: 2025-10-02 13:05:54.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:55 compute-1 ceph-mon[80926]: pgmap v2792: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 415 KiB/s rd, 2.3 MiB/s wr, 150 op/s
Oct 02 13:05:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.122 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.123 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.124 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.124 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.125 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.125 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.126 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.127 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.128 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.128 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.129 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.130 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-unplugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.130 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.131 2 DEBUG oslo_concurrency.lockutils [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.132 2 DEBUG nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] No waiting events found dispatching network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:05:56 compute-1 nova_compute[230518]: 2025-10-02 13:05:56.132 2 WARNING nova.compute.manager [req-91af1103-f5fb-4a04-b1c5-db9969663ccd req-cfda94ad-0c68-4b1b-8d87-828971aa53b7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received unexpected event network-vif-plugged-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 for instance with vm_state active and task_state deleting.
Oct 02 13:05:57 compute-1 nova_compute[230518]: 2025-10-02 13:05:57.268 2 DEBUG nova.network.neutron [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:05:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:57.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:57 compute-1 ceph-mon[80926]: pgmap v2793: 305 pgs: 305 active+clean; 235 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.3 MiB/s wr, 171 op/s
Oct 02 13:05:57 compute-1 nova_compute[230518]: 2025-10-02 13:05:57.770 2 INFO nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Took 2.99 seconds to deallocate network for instance.
Oct 02 13:05:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:05:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:57.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:05:57 compute-1 podman[301724]: 2025-10-02 13:05:57.807140164 +0000 UTC m=+0.055798825 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:57 compute-1 podman[301725]: 2025-10-02 13:05:57.808349032 +0000 UTC m=+0.055859467 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:05:57 compute-1 nova_compute[230518]: 2025-10-02 13:05:57.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.126 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.126 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.294 2 DEBUG oslo_concurrency.processutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.349 2 DEBUG nova.compute.manager [req-cc4ad190-bb03-4abb-872b-a2c84be357ab req-2f67c1ae-91b7-4dfb-9b66-27bca9a577a0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Received event network-vif-deleted-5b77d75e-cf0b-4e6c-acd3-93d32b69e1c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:05:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:05:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2926308603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.748 2 DEBUG oslo_concurrency.processutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.754 2 DEBUG nova.compute.provider_tree [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:05:58 compute-1 nova_compute[230518]: 2025-10-02 13:05:58.794 2 DEBUG nova.scheduler.client.report [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:05:59 compute-1 nova_compute[230518]: 2025-10-02 13:05:59.086 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:59 compute-1 nova_compute[230518]: 2025-10-02 13:05:59.219 2 INFO nova.scheduler.client.report [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Deleted allocations for instance f320bcaa-1dfe-4d91-bd4a-05ed389402a7
Oct 02 13:05:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:05:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:05:59.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:59 compute-1 nova_compute[230518]: 2025-10-02 13:05:59.590 2 DEBUG oslo_concurrency.lockutils [None req-3a2beada-1cfa-4cd5-b9e5-95a4aa1c7765 7fb7e45069d34870bc5f4fa70bd8c6de 19365f54974d4109ae80bc13ac9ba55a - - default default] Lock "f320bcaa-1dfe-4d91-bd4a-05ed389402a7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:05:59 compute-1 ceph-mon[80926]: pgmap v2794: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 878 KiB/s wr, 85 op/s
Oct 02 13:05:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2926308603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:05:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:05:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:05:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:05:59.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:05:59 compute-1 nova_compute[230518]: 2025-10-02 13:05:59.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:06:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:06:01 compute-1 ceph-mon[80926]: pgmap v2795: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 14 KiB/s wr, 45 op/s
Oct 02 13:06:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3556229993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:02 compute-1 nova_compute[230518]: 2025-10-02 13:06:02.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:03.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:03.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:04 compute-1 ceph-mon[80926]: pgmap v2796: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 77 op/s
Oct 02 13:06:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/854998284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:04 compute-1 nova_compute[230518]: 2025-10-02 13:06:04.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-1 nova_compute[230518]: 2025-10-02 13:06:04.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:04 compute-1 sudo[301785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:04 compute-1 sudo[301785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:04 compute-1 sudo[301785]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:04 compute-1 sudo[301810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:06:04 compute-1 sudo[301810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:04 compute-1 sudo[301810]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:05 compute-1 ceph-mon[80926]: pgmap v2797: 305 pgs: 305 active+clean; 228 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct 02 13:06:05 compute-1 sudo[301835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:05 compute-1 sudo[301835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:05 compute-1 sudo[301835]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:05 compute-1 sudo[301860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:06:05 compute-1 sudo[301860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:05 compute-1 sudo[301860]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:05.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:06:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:06:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3781703479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:06:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1235638161' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:06:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:07.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:06:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:06:07 compute-1 ceph-mon[80926]: pgmap v2798: 305 pgs: 305 active+clean; 277 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 110 op/s
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:06:07 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:06:07 compute-1 nova_compute[230518]: 2025-10-02 13:06:07.914 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410352.9119, f320bcaa-1dfe-4d91-bd4a-05ed389402a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:07 compute-1 nova_compute[230518]: 2025-10-02 13:06:07.914 2 INFO nova.compute.manager [-] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] VM Stopped (Lifecycle Event)
Oct 02 13:06:07 compute-1 nova_compute[230518]: 2025-10-02 13:06:07.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:08 compute-1 nova_compute[230518]: 2025-10-02 13:06:08.043 2 DEBUG nova.compute.manager [None req-2785e55c-ee8e-4f2f-9946-7f6813ccf19a - - - - - -] [instance: f320bcaa-1dfe-4d91-bd4a-05ed389402a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:09.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:09 compute-1 nova_compute[230518]: 2025-10-02 13:06:09.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:09 compute-1 ceph-mon[80926]: pgmap v2799: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct 02 13:06:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:06:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 67K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.13 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1568 writes, 8159 keys, 1568 commit groups, 1.0 writes per commit group, ingest: 15.81 MB, 0.03 MB/s
                                           Interval WAL: 1569 writes, 1569 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.3      1.45              0.24        42    0.035       0      0       0.0       0.0
                                             L6      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    116.1     98.9      4.14              1.22        41    0.101    279K    22K       0.0       0.0
                                            Sum      1/0   10.21 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     86.0     87.8      5.60              1.46        83    0.067    279K    22K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     98.6     98.6      0.92              0.31        14    0.066     64K   3655       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    116.1     98.9      4.14              1.22        41    0.101    279K    22K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.4      1.45              0.24        41    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.080, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.48 GB write, 0.10 MB/s write, 0.47 GB read, 0.10 MB/s read, 5.6 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 52.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000332 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3003,49.95 MB,16.4312%) FilterBlock(83,783.11 KB,0.251564%) IndexBlock(83,1.30 MB,0.427567%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:06:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:11.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:11.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:11 compute-1 ceph-mon[80926]: pgmap v2800: 305 pgs: 305 active+clean; 322 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.6 MiB/s wr, 113 op/s
Oct 02 13:06:12 compute-1 nova_compute[230518]: 2025-10-02 13:06:12.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:13 compute-1 nova_compute[230518]: 2025-10-02 13:06:13.693 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:13 compute-1 nova_compute[230518]: 2025-10-02 13:06:13.694 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:13.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:13 compute-1 nova_compute[230518]: 2025-10-02 13:06:13.828 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:06:13 compute-1 ceph-mon[80926]: pgmap v2801: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 113 op/s
Oct 02 13:06:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e358 e358: 3 total, 3 up, 3 in
Oct 02 13:06:14 compute-1 nova_compute[230518]: 2025-10-02 13:06:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:14 compute-1 sudo[301917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:06:14 compute-1 sudo[301917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:14 compute-1 sudo[301917]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:14 compute-1 sudo[301942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:06:14 compute-1 sudo[301942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:06:14 compute-1 sudo[301942]: pam_unix(sudo:session): session closed for user root
Oct 02 13:06:14 compute-1 nova_compute[230518]: 2025-10-02 13:06:14.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:15 compute-1 ceph-mon[80926]: osdmap e358: 3 total, 3 up, 3 in
Oct 02 13:06:15 compute-1 ceph-mon[80926]: pgmap v2803: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 78 op/s
Oct 02 13:06:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:15 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.262 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.263 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.275 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.275 2 INFO nova.compute.claims [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:06:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:15.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.638 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.665 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.665 2 DEBUG nova.compute.provider_tree [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.681 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.705 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:06:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:15.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:15 compute-1 nova_compute[230518]: 2025-10-02 13:06:15.895 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4164908810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:16 compute-1 nova_compute[230518]: 2025-10-02 13:06:16.339 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:16 compute-1 nova_compute[230518]: 2025-10-02 13:06:16.346 2 DEBUG nova.compute.provider_tree [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:06:16 compute-1 nova_compute[230518]: 2025-10-02 13:06:16.375 2 DEBUG nova.scheduler.client.report [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:06:16 compute-1 nova_compute[230518]: 2025-10-02 13:06:16.440 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:16 compute-1 nova_compute[230518]: 2025-10-02 13:06:16.441 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:06:17 compute-1 nova_compute[230518]: 2025-10-02 13:06:17.282 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:06:17 compute-1 nova_compute[230518]: 2025-10-02 13:06:17.283 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:06:17 compute-1 ceph-mon[80926]: pgmap v2804: 305 pgs: 305 active+clean; 285 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 02 13:06:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4164908810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2607940262' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3523504306' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:17 compute-1 nova_compute[230518]: 2025-10-02 13:06:17.589 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:06:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:17.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:17.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:17 compute-1 nova_compute[230518]: 2025-10-02 13:06:17.865 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:06:17 compute-1 nova_compute[230518]: 2025-10-02 13:06:17.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.053 2 DEBUG nova.policy [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.215 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.236 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.237 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.238 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating image(s)
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.263 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.290 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.316 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.320 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.352 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.353 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.353 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.354 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.354 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.391 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.392 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.392 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.393 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.423 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.427 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:18 compute-1 podman[302104]: 2025-10-02 13:06:18.800013274 +0000 UTC m=+0.048964622 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:06:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1150856767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.854 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:18 compute-1 podman[302103]: 2025-10-02 13:06:18.858554773 +0000 UTC m=+0.108378038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:06:18 compute-1 nova_compute[230518]: 2025-10-02 13:06:18.974 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.041 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.109 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.110 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4199MB free_disk=20.921974182128906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.110 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.111 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.161 2 DEBUG nova.objects.instance [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 e359: 3 total, 3 up, 3 in
Oct 02 13:06:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:19 compute-1 ceph-mon[80926]: pgmap v2805: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 137 op/s
Oct 02 13:06:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1150856767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:19 compute-1 ceph-mon[80926]: osdmap e359: 3 total, 3 up, 3 in
Oct 02 13:06:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.807 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.808 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Ensure instance console log exists: /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.808 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.809 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.809 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:19.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:19 compute-1 nova_compute[230518]: 2025-10-02 13:06:19.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.192 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance aaa891aa-5701-4706-b86f-6216b8cf4c6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.193 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.252 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:06:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/85858181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.679 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.685 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.710 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.754 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:06:20 compute-1 nova_compute[230518]: 2025-10-02 13:06:20.754 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:21 compute-1 nova_compute[230518]: 2025-10-02 13:06:21.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:21 compute-1 nova_compute[230518]: 2025-10-02 13:06:21.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:06:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:21.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:21 compute-1 ceph-mon[80926]: pgmap v2807: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 148 op/s
Oct 02 13:06:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1954721145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/85858181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:21.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:22 compute-1 nova_compute[230518]: 2025-10-02 13:06:22.299 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Successfully created port: fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:06:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/822308584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:22 compute-1 nova_compute[230518]: 2025-10-02 13:06:22.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:23.074 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:23 compute-1 nova_compute[230518]: 2025-10-02 13:06:23.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:23.075 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:06:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:23 compute-1 ceph-mon[80926]: pgmap v2808: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 176 op/s
Oct 02 13:06:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3814506278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:24 compute-1 nova_compute[230518]: 2025-10-02 13:06:24.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:24 compute-1 nova_compute[230518]: 2025-10-02 13:06:24.587 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:24 compute-1 nova_compute[230518]: 2025-10-02 13:06:24.588 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:06:24 compute-1 nova_compute[230518]: 2025-10-02 13:06:24.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:25 compute-1 nova_compute[230518]: 2025-10-02 13:06:25.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:25.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:25 compute-1 ceph-mon[80926]: pgmap v2809: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 145 op/s
Oct 02 13:06:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/426792412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:25.962 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:26.077 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.262 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Successfully updated port: fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.323 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.893 2 DEBUG nova.compute.manager [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.894 2 DEBUG nova.compute.manager [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:06:26 compute-1 nova_compute[230518]: 2025-10-02 13:06:26.895 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:27 compute-1 nova_compute[230518]: 2025-10-02 13:06:27.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:27 compute-1 nova_compute[230518]: 2025-10-02 13:06:27.212 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:06:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:27.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:27 compute-1 nova_compute[230518]: 2025-10-02 13:06:27.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:27 compute-1 ceph-mon[80926]: pgmap v2810: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Oct 02 13:06:28 compute-1 nova_compute[230518]: 2025-10-02 13:06:28.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:28 compute-1 podman[302243]: 2025-10-02 13:06:28.804105143 +0000 UTC m=+0.060305475 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 02 13:06:28 compute-1 podman[302244]: 2025-10-02 13:06:28.819974465 +0000 UTC m=+0.065961490 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.048 2 DEBUG nova.network.neutron [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:29 compute-1 ceph-mon[80926]: pgmap v2811: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Oct 02 13:06:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.926 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.926 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance network_info: |[{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.927 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.927 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.931 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start _get_guest_xml network_info=[{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.936 2 WARNING nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.943 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.944 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.948 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.949 2 DEBUG nova.virt.libvirt.host [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.950 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.951 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.952 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.952 2 DEBUG nova.virt.hardware [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:06:29 compute-1 nova_compute[230518]: 2025-10-02 13:06:29.954 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3502773686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:30 compute-1 nova_compute[230518]: 2025-10-02 13:06:30.381 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:30 compute-1 nova_compute[230518]: 2025-10-02 13:06:30.428 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:30 compute-1 nova_compute[230518]: 2025-10-02 13:06:30.432 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:06:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494029457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3502773686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.068 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.071 2 DEBUG nova.virt.libvirt.vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.072 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.073 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.076 2 DEBUG nova.objects.instance [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.107 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <uuid>aaa891aa-5701-4706-b86f-6216b8cf4c6d</uuid>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <name>instance-000000bf</name>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426</nova:name>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:06:29</nova:creationTime>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <nova:port uuid="fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769">
Oct 02 13:06:31 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <system>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="serial">aaa891aa-5701-4706-b86f-6216b8cf4c6d</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="uuid">aaa891aa-5701-4706-b86f-6216b8cf4c6d</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </system>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <os>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </os>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <features>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </features>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk">
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </source>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config">
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </source>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:06:31 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:40:11:63"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <target dev="tapfd38dd09-d0"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/console.log" append="off"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <video>
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </video>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:06:31 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:06:31 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:06:31 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:06:31 compute-1 nova_compute[230518]: </domain>
Oct 02 13:06:31 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.110 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Preparing to wait for external event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.110 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.111 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.111 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.113 2 DEBUG nova.virt.libvirt.vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:06:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.113 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.114 2 DEBUG nova.network.os_vif_util [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.115 2 DEBUG os_vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.117 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.118 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd38dd09-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.123 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd38dd09-d0, col_values=(('external_ids', {'iface-id': 'fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:11:63', 'vm-uuid': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-1 NetworkManager[44960]: <info>  [1759410391.1265] manager: (tapfd38dd09-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.134 2 INFO os_vif [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0')
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.211 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:40:11:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.212 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Using config drive
Oct 02 13:06:31 compute-1 nova_compute[230518]: 2025-10-02 13:06:31.241 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:31.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:06:32 compute-1 ceph-mon[80926]: pgmap v2812: 305 pgs: 305 active+clean; 292 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 159 op/s
Oct 02 13:06:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2494029457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.101 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.507 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Creating config drive at /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.513 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2d2lhl9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.656 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2d2lhl9" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.708 2 DEBUG nova.storage.rbd_utils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.714 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.947 2 DEBUG oslo_concurrency.processutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config aaa891aa-5701-4706-b86f-6216b8cf4c6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:06:32 compute-1 nova_compute[230518]: 2025-10-02 13:06:32.949 2 INFO nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deleting local config drive /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d/disk.config because it was imported into RBD.
Oct 02 13:06:33 compute-1 kernel: tapfd38dd09-d0: entered promiscuous mode
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.0182] manager: (tapfd38dd09-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/359)
Oct 02 13:06:33 compute-1 ovn_controller[129257]: 2025-10-02T13:06:33Z|00780|binding|INFO|Claiming lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for this chassis.
Oct 02 13:06:33 compute-1 ovn_controller[129257]: 2025-10-02T13:06:33Z|00781|binding|INFO|fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769: Claiming fa:16:3e:40:11:63 10.100.0.10
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.045 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:11:63 10.100.0.10'], port_security=['fa:16:3e:40:11:63 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c17ab1a8-d19d-4728-b5e8-5f12f979e5d3 f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.046 138374 INFO neutron.agent.ovn.metadata.agent [-] Port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 bound to our chassis
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.048 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:06:33 compute-1 systemd-udevd[302416]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.0644] device (tapfd38dd09-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.0656] device (tapfd38dd09-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.065 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ea17c51e-3638-4988-823f-20164cb87795]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.066 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c3c-d1 in ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c3c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0204542c-2aa2-4870-8ed5-f0455c5d7e99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 systemd-machined[188247]: New machine qemu-89-instance-000000bf.
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.068 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[af1c7d94-63aa-4a2c-b1ac-ce4099172f8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ceph-mon[80926]: pgmap v2813: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 606 KiB/s wr, 133 op/s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.081 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[27ae6a47-31cb-4ede-8601-a9c04232c908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.108 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[73c8a49b-3169-459e-8e2a-d445d1c1a4da]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 systemd[1]: Started Virtual Machine qemu-89-instance-000000bf.
Oct 02 13:06:33 compute-1 ovn_controller[129257]: 2025-10-02T13:06:33Z|00782|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 ovn-installed in OVS
Oct 02 13:06:33 compute-1 ovn_controller[129257]: 2025-10-02T13:06:33Z|00783|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 up in Southbound
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.147 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1b5d19-7def-4a23-83d3-cc93f2e28e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.152 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fc98b39f-c8b9-4260-ac4c-1563f53a9598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.1547] manager: (tap0de30c3c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/360)
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.185 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8d5ef9-92af-446e-a455-1c1dc6b3111b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.188 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1dee8808-0c78-41e3-a924-a6faa841d083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.2084] device (tap0de30c3c-d0): carrier: link connected
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.213 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfe93f2-967e-4584-b417-8ea0d05b9d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.228 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f5f87-b7db-4ed9-893e-106bd3028bd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825676, 'reachable_time': 36182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302450, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.241 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a79c32e1-8aad-46b5-9185-859de9e88667]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:84cf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 825676, 'tstamp': 825676}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302451, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.254 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[747c24f4-e094-4c14-bfbd-b315be7e949b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c3c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:84:cf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825676, 'reachable_time': 36182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302452, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.282 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[107d157f-3e9c-492c-b908-3d1978fa1432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.334 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba46c963-cbbe-43ad-a2a6-b01331e1d657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.335 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.336 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.336 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c3c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 NetworkManager[44960]: <info>  [1759410393.3432] manager: (tap0de30c3c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Oct 02 13:06:33 compute-1 kernel: tap0de30c3c-d0: entered promiscuous mode
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c3c-d0, col_values=(('external_ids', {'iface-id': 'c46b8ee2-3741-41ad-a412-6a121aeea4c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:06:33 compute-1 ovn_controller[129257]: 2025-10-02T13:06:33Z|00784|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.368 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.369 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54aaa2-e172-4047-963b-6a0cb26db894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.370 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/0de30c3c-d440-4dcd-8562-bb1990277f07.pid.haproxy
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 0de30c3c-d440-4dcd-8562-bb1990277f07
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:06:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:06:33.370 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'env', 'PROCESS_TAG=haproxy-0de30c3c-d440-4dcd-8562-bb1990277f07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c3c-d440-4dcd-8562-bb1990277f07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:06:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:33.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:33 compute-1 podman[302526]: 2025-10-02 13:06:33.768603335 +0000 UTC m=+0.051837732 container create 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 02 13:06:33 compute-1 systemd[1]: Started libpod-conmon-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope.
Oct 02 13:06:33 compute-1 podman[302526]: 2025-10-02 13:06:33.738302213 +0000 UTC m=+0.021536620 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:06:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:33 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:06:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b35f09704061bc163420de8e40766fdfa5b7dc927df56828247008e161b62606/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:06:33 compute-1 podman[302526]: 2025-10-02 13:06:33.886761676 +0000 UTC m=+0.169996133 container init 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 02 13:06:33 compute-1 podman[302526]: 2025-10-02 13:06:33.89268664 +0000 UTC m=+0.175921037 container start 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.905 2 DEBUG nova.compute.manager [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.906 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.906 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.907 2 DEBUG oslo_concurrency.lockutils [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:33 compute-1 nova_compute[230518]: 2025-10-02 13:06:33.908 2 DEBUG nova.compute.manager [req-6bbbe992-dff2-416a-874c-96a58afc6147 req-9fbd5a8a-c55e-452e-9a91-55809f0a05fa 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Processing event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:06:33 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : New worker (302547) forked
Oct 02 13:06:33 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : Loading success.
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.018 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.020 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.0175622, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.021 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Started (Lifecycle Event)
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.026 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.033 2 INFO nova.virt.libvirt.driver [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance spawned successfully.
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.034 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.037 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.038 2 DEBUG nova.network.neutron [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.423 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.428 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.484 2 DEBUG oslo_concurrency.lockutils [req-19b846b3-c4f9-4468-8130-d22698dce083 req-22356ac8-707c-4fa9-b364-4c9bbbd5e07e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.491 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.492 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.493 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.493 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.494 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.494 2 DEBUG nova.virt.libvirt.driver [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.622 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.623 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.0178325, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.624 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Paused (Lifecycle Event)
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.957 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.961 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410394.025119, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:06:34 compute-1 nova_compute[230518]: 2025-10-02 13:06:34.962 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Resumed (Lifecycle Event)
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.014 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.017 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.090 2 INFO nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 16.85 seconds to spawn the instance on the hypervisor.
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.090 2 DEBUG nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.093 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.344 2 INFO nova.compute.manager [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 21.13 seconds to build instance.
Oct 02 13:06:35 compute-1 ceph-mon[80926]: pgmap v2814: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 110 op/s
Oct 02 13:06:35 compute-1 nova_compute[230518]: 2025-10-02 13:06:35.407 2 DEBUG oslo_concurrency.lockutils [None req-6a918f86-8bcc-47c2-96f9-b01994a3f0a7 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:35.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.457 2 DEBUG nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.457 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG oslo_concurrency.lockutils [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.458 2 DEBUG nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:06:36 compute-1 nova_compute[230518]: 2025-10-02 13:06:36.459 2 WARNING nova.compute.manager [req-e11abf1c-1e4e-419e-93c8-98cc23776cd0 req-750d0106-ea8a-4c63-bf91-027f099352c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received unexpected event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with vm_state active and task_state None.
Oct 02 13:06:37 compute-1 nova_compute[230518]: 2025-10-02 13:06:37.093 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:37 compute-1 ceph-mon[80926]: pgmap v2815: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 33 KiB/s wr, 142 op/s
Oct 02 13:06:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:06:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:06:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.737 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Triggering sync for uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:06:39 compute-1 ceph-mon[80926]: pgmap v2816: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 87 op/s
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.844 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:06:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:39 compute-1 nova_compute[230518]: 2025-10-02 13:06:39.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-1 nova_compute[230518]: 2025-10-02 13:06:41.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-1 nova_compute[230518]: 2025-10-02 13:06:41.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-1 NetworkManager[44960]: <info>  [1759410401.5035] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Oct 02 13:06:41 compute-1 NetworkManager[44960]: <info>  [1759410401.5050] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Oct 02 13:06:41 compute-1 ovn_controller[129257]: 2025-10-02T13:06:41Z|00785|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:06:41 compute-1 nova_compute[230518]: 2025-10-02 13:06:41.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-1 nova_compute[230518]: 2025-10-02 13:06:41.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:41.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:42 compute-1 ceph-mon[80926]: pgmap v2817: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 76 op/s
Oct 02 13:06:42 compute-1 nova_compute[230518]: 2025-10-02 13:06:42.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:43.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:43.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:43 compute-1 ceph-mon[80926]: pgmap v2818: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 81 op/s
Oct 02 13:06:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG nova.compute.manager [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG nova.compute.manager [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.771 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.772 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.772 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:06:44 compute-1 nova_compute[230518]: 2025-10-02 13:06:44.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:45.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:46 compute-1 ceph-mon[80926]: pgmap v2819: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 81 op/s
Oct 02 13:06:46 compute-1 nova_compute[230518]: 2025-10-02 13:06:46.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:47 compute-1 ceph-mon[80926]: pgmap v2820: 305 pgs: 305 active+clean; 294 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 87 op/s
Oct 02 13:06:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:47.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:49.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-1 podman[302559]: 2025-10-02 13:06:49.849229468 +0000 UTC m=+0.095075795 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:06:49 compute-1 ceph-mon[80926]: pgmap v2821: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 317 KiB/s wr, 59 op/s
Oct 02 13:06:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:49 compute-1 podman[302558]: 2025-10-02 13:06:49.884840134 +0000 UTC m=+0.122616960 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:06:49 compute-1 nova_compute[230518]: 2025-10-02 13:06:49.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:51 compute-1 nova_compute[230518]: 2025-10-02 13:06:51.118 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:06:51 compute-1 nova_compute[230518]: 2025-10-02 13:06:51.119 2 DEBUG nova.network.neutron [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:06:51 compute-1 nova_compute[230518]: 2025-10-02 13:06:51.175 2 DEBUG oslo_concurrency.lockutils [req-5e7604b1-0a26-40d7-b0ce-d33bd32e0f4b req-a7c81649-318c-458d-b685-bc9d264749e6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:06:51 compute-1 nova_compute[230518]: 2025-10-02 13:06:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:51.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:51 compute-1 ceph-mon[80926]: pgmap v2822: 305 pgs: 305 active+clean; 306 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 67 op/s
Oct 02 13:06:52 compute-1 nova_compute[230518]: 2025-10-02 13:06:52.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:53 compute-1 ovn_controller[129257]: 2025-10-02T13:06:53Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:40:11:63 10.100.0.10
Oct 02 13:06:53 compute-1 ovn_controller[129257]: 2025-10-02T13:06:53Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:40:11:63 10.100.0.10
Oct 02 13:06:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:53.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:06:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:06:54 compute-1 ceph-mon[80926]: pgmap v2823: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 160 KiB/s rd, 1.9 MiB/s wr, 43 op/s
Oct 02 13:06:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:54 compute-1 nova_compute[230518]: 2025-10-02 13:06:54.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:55 compute-1 ceph-mon[80926]: pgmap v2824: 305 pgs: 305 active+clean; 315 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Oct 02 13:06:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:55.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:55.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:56 compute-1 nova_compute[230518]: 2025-10-02 13:06:56.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:06:57 compute-1 ceph-mon[80926]: pgmap v2825: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:06:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:57.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:57 compute-1 nova_compute[230518]: 2025-10-02 13:06:57.721 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:06:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:06:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:57.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:06:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:06:59 compute-1 ceph-mon[80926]: pgmap v2826: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 294 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:06:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:06:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:06:59 compute-1 podman[302606]: 2025-10-02 13:06:59.813028424 +0000 UTC m=+0.060792439 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:06:59 compute-1 podman[302605]: 2025-10-02 13:06:59.817251556 +0000 UTC m=+0.066629111 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:06:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:06:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:06:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:06:59.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:00 compute-1 nova_compute[230518]: 2025-10-02 13:06:59.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:01 compute-1 nova_compute[230518]: 2025-10-02 13:07:01.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:01 compute-1 ceph-mon[80926]: pgmap v2827: 305 pgs: 305 active+clean; 340 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 296 KiB/s rd, 3.4 MiB/s wr, 80 op/s
Oct 02 13:07:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:01.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:03 compute-1 ovn_controller[129257]: 2025-10-02T13:07:03Z|00786|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:07:03 compute-1 nova_compute[230518]: 2025-10-02 13:07:03.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:03 compute-1 ceph-mon[80926]: pgmap v2828: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Oct 02 13:07:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:05 compute-1 nova_compute[230518]: 2025-10-02 13:07:05.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:07:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:07:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:05 compute-1 ceph-mon[80926]: pgmap v2829: 305 pgs: 305 active+clean; 349 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 69 op/s
Oct 02 13:07:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1414850093' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:05.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:06 compute-1 nova_compute[230518]: 2025-10-02 13:07:06.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3631395183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:07.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:07 compute-1 ceph-mon[80926]: pgmap v2830: 305 pgs: 305 active+clean; 298 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 431 KiB/s rd, 2.4 MiB/s wr, 107 op/s
Oct 02 13:07:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:08 compute-1 nova_compute[230518]: 2025-10-02 13:07:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:08 compute-1 nova_compute[230518]: 2025-10-02 13:07:08.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:07:08 compute-1 nova_compute[230518]: 2025-10-02 13:07:08.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:07:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:07:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:07:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:09.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:09 compute-1 ceph-mon[80926]: pgmap v2831: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct 02 13:07:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4290706919' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2568518845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:09.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:10 compute-1 nova_compute[230518]: 2025-10-02 13:07:10.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:07:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:07:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:11 compute-1 nova_compute[230518]: 2025-10-02 13:07:11.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:11.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:11 compute-1 ceph-mon[80926]: pgmap v2832: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 2.4 MiB/s wr, 90 op/s
Oct 02 13:07:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:07:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2641446130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:07:13 compute-1 ceph-mon[80926]: pgmap v2833: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 2.3 MiB/s wr, 122 op/s
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.353689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433353738, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2404, "num_deletes": 253, "total_data_size": 5714326, "memory_usage": 5788160, "flush_reason": "Manual Compaction"}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433405695, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 3749607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65981, "largest_seqno": 68380, "table_properties": {"data_size": 3739781, "index_size": 6191, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20826, "raw_average_key_size": 20, "raw_value_size": 3719986, "raw_average_value_size": 3697, "num_data_blocks": 269, "num_entries": 1006, "num_filter_entries": 1006, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410224, "oldest_key_time": 1759410224, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 52048 microseconds, and 10608 cpu microseconds.
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.405736) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 3749607 bytes OK
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.405755) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453605) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453658) EVENT_LOG_v1 {"time_micros": 1759410433453647, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.453685) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 5703593, prev total WAL file size 5703593, number of live WAL files 2.
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.456765) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(3661KB)], [135(10MB)]
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433456853, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 14457707, "oldest_snapshot_seqno": -1}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 9083 keys, 12539598 bytes, temperature: kUnknown
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433634786, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12539598, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12479845, "index_size": 35960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 238680, "raw_average_key_size": 26, "raw_value_size": 12319408, "raw_average_value_size": 1356, "num_data_blocks": 1377, "num_entries": 9083, "num_filter_entries": 9083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410433, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.635010) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12539598 bytes
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.648481) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.2 rd, 70.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 10.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 9608, records dropped: 525 output_compression: NoCompression
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.648506) EVENT_LOG_v1 {"time_micros": 1759410433648496, "job": 86, "event": "compaction_finished", "compaction_time_micros": 177988, "compaction_time_cpu_micros": 53948, "output_level": 6, "num_output_files": 1, "total_output_size": 12539598, "num_input_records": 9608, "num_output_records": 9083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433649183, "job": 86, "event": "table_file_deletion", "file_number": 137}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410433650927, "job": 86, "event": "table_file_deletion", "file_number": 135}
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.455950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:07:13.650976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:07:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:14 compute-1 sudo[302646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:14 compute-1 sudo[302646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-1 sudo[302646]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-1 sudo[302671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:07:14 compute-1 sudo[302671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-1 sudo[302671]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-1 sudo[302696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:14 compute-1 sudo[302696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:14 compute-1 sudo[302696]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:14 compute-1 sudo[302721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:07:14 compute-1 sudo[302721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:15 compute-1 nova_compute[230518]: 2025-10-02 13:07:15.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:15 compute-1 sudo[302721]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:15 compute-1 ceph-mon[80926]: pgmap v2834: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.9 MiB/s wr, 106 op/s
Oct 02 13:07:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3448103692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1836180856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:07:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:15.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:16 compute-1 nova_compute[230518]: 2025-10-02 13:07:16.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:07:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:07:17 compute-1 ceph-mon[80926]: pgmap v2835: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 237 KiB/s rd, 1.9 MiB/s wr, 128 op/s
Oct 02 13:07:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:17.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:19 compute-1 ceph-mon[80926]: pgmap v2836: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct 02 13:07:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.075 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.145 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.145 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.146 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/512112853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.585 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.665 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.666 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000bf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:07:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/913159876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/512112853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:20 compute-1 podman[302801]: 2025-10-02 13:07:20.704349465 +0000 UTC m=+0.066560259 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:20 compute-1 podman[302800]: 2025-10-02 13:07:20.732071106 +0000 UTC m=+0.095681713 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.831 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4106MB free_disk=20.921966552734375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.832 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.955 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance aaa891aa-5701-4706-b86f-6216b8cf4c6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.956 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:07:20 compute-1 nova_compute[230518]: 2025-10-02 13:07:20.956 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.044 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813418940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.469 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.474 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.520 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.558 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:07:21 compute-1 nova_compute[230518]: 2025-10-02 13:07:21.559 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:21.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:21 compute-1 ceph-mon[80926]: pgmap v2837: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 152 op/s
Oct 02 13:07:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1813418940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:21.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:22 compute-1 sudo[302864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:07:22 compute-1 sudo[302864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:22 compute-1 sudo[302864]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:22 compute-1 sudo[302889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:07:22 compute-1 sudo[302889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:07:22 compute-1 sudo[302889]: pam_unix(sudo:session): session closed for user root
Oct 02 13:07:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:22 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:07:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/674363330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:23.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:23 compute-1 ceph-mon[80926]: pgmap v2838: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 1.5 MiB/s wr, 270 op/s
Oct 02 13:07:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1430412347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/191533582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:25 compute-1 nova_compute[230518]: 2025-10-02 13:07:25.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:25 compute-1 nova_compute[230518]: 2025-10-02 13:07:25.536 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:25 compute-1 nova_compute[230518]: 2025-10-02 13:07:25.537 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:07:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:25.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:25 compute-1 ceph-mon[80926]: pgmap v2839: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 15 KiB/s wr, 221 op/s
Oct 02 13:07:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2785683365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:25.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:25.963 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:26 compute-1 nova_compute[230518]: 2025-10-02 13:07:26.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:26 compute-1 ovn_controller[129257]: 2025-10-02T13:07:26Z|00787|binding|INFO|Releasing lport c46b8ee2-3741-41ad-a412-6a121aeea4c6 from this chassis (sb_readonly=0)
Oct 02 13:07:26 compute-1 nova_compute[230518]: 2025-10-02 13:07:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:26 compute-1 nova_compute[230518]: 2025-10-02 13:07:26.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:27.042 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:27.043 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:07:27 compute-1 nova_compute[230518]: 2025-10-02 13:07:27.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:27 compute-1 nova_compute[230518]: 2025-10-02 13:07:27.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:27.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:27 compute-1 ceph-mon[80926]: pgmap v2840: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 16 KiB/s wr, 255 op/s
Oct 02 13:07:28 compute-1 nova_compute[230518]: 2025-10-02 13:07:28.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:28 compute-1 nova_compute[230518]: 2025-10-02 13:07:28.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:29 compute-1 nova_compute[230518]: 2025-10-02 13:07:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:29 compute-1 ceph-mon[80926]: pgmap v2841: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 259 op/s
Oct 02 13:07:30 compute-1 nova_compute[230518]: 2025-10-02 13:07:30.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:30.045 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:30 compute-1 nova_compute[230518]: 2025-10-02 13:07:30.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:30 compute-1 nova_compute[230518]: 2025-10-02 13:07:30.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:30 compute-1 podman[302914]: 2025-10-02 13:07:30.813606437 +0000 UTC m=+0.068068218 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:07:30 compute-1 podman[302915]: 2025-10-02 13:07:30.823673133 +0000 UTC m=+0.074554112 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:31 compute-1 nova_compute[230518]: 2025-10-02 13:07:31.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:31.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:31 compute-1 ceph-mon[80926]: pgmap v2842: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 235 op/s
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:07:32 compute-1 nova_compute[230518]: 2025-10-02 13:07:32.630 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:33.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:33.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:33 compute-1 ceph-mon[80926]: pgmap v2843: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 182 op/s
Oct 02 13:07:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:34 compute-1 nova_compute[230518]: 2025-10-02 13:07:34.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:35 compute-1 nova_compute[230518]: 2025-10-02 13:07:35.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:35 compute-1 nova_compute[230518]: 2025-10-02 13:07:35.531 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:35 compute-1 nova_compute[230518]: 2025-10-02 13:07:35.552 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:35 compute-1 nova_compute[230518]: 2025-10-02 13:07:35.553 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:07:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:36 compute-1 ceph-mon[80926]: pgmap v2844: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.7 KiB/s wr, 60 op/s
Oct 02 13:07:36 compute-1 nova_compute[230518]: 2025-10-02 13:07:36.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:37.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:38 compute-1 ceph-mon[80926]: pgmap v2845: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 97 op/s
Oct 02 13:07:39 compute-1 ceph-mon[80926]: pgmap v2846: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct 02 13:07:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:39.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:40 compute-1 nova_compute[230518]: 2025-10-02 13:07:40.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-1 nova_compute[230518]: 2025-10-02 13:07:41.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:41 compute-1 ceph-mon[80926]: pgmap v2847: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:07:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:41.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:42 compute-1 nova_compute[230518]: 2025-10-02 13:07:42.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:43 compute-1 ceph-mon[80926]: pgmap v2848: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:43.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:45 compute-1 nova_compute[230518]: 2025-10-02 13:07:45.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:45 compute-1 ceph-mon[80926]: pgmap v2849: 305 pgs: 305 active+clean; 257 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct 02 13:07:45 compute-1 nova_compute[230518]: 2025-10-02 13:07:45.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:45.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:46 compute-1 nova_compute[230518]: 2025-10-02 13:07:46.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2087518821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:47 compute-1 ceph-mon[80926]: pgmap v2850: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.2 MiB/s wr, 85 op/s
Oct 02 13:07:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:47.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:47 compute-1 nova_compute[230518]: 2025-10-02 13:07:47.912 2 DEBUG nova.compute.manager [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:47 compute-1 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG nova.compute.manager [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing instance network info cache due to event network-changed-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:07:47 compute-1 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:07:47 compute-1 nova_compute[230518]: 2025-10-02 13:07:47.913 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:07:47 compute-1 nova_compute[230518]: 2025-10-02 13:07:47.914 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Refreshing network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:07:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.043 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.044 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.044 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.045 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.045 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.046 2 INFO nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Terminating instance
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.047 2 DEBUG nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 kernel: tapfd38dd09-d0 (unregistering): left promiscuous mode
Oct 02 13:07:48 compute-1 NetworkManager[44960]: <info>  [1759410468.4255] device (tapfd38dd09-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 ovn_controller[129257]: 2025-10-02T13:07:48Z|00788|binding|INFO|Releasing lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 from this chassis (sb_readonly=0)
Oct 02 13:07:48 compute-1 ovn_controller[129257]: 2025-10-02T13:07:48Z|00789|binding|INFO|Setting lport fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 down in Southbound
Oct 02 13:07:48 compute-1 ovn_controller[129257]: 2025-10-02T13:07:48Z|00790|binding|INFO|Removing iface tapfd38dd09-d0 ovn-installed in OVS
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.446 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:11:63 10.100.0.10'], port_security=['fa:16:3e:40:11:63 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'aaa891aa-5701-4706-b86f-6216b8cf4c6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c3c-d440-4dcd-8562-bb1990277f07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c17ab1a8-d19d-4728-b5e8-5f12f979e5d3 f03b5452-680c-4498-87d9-e083abe84e44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4012d9db-84cc-44d4-8e0c-304e52c3ea33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.448 138374 INFO neutron.agent.ovn.metadata.agent [-] Port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 in datapath 0de30c3c-d440-4dcd-8562-bb1990277f07 unbound from our chassis
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.450 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c3c-d440-4dcd-8562-bb1990277f07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.452 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[268c48f1-8d38-4360-9cef-16df8bb74bed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.453 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 namespace which is not needed anymore
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Oct 02 13:07:48 compute-1 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000bf.scope: Consumed 15.995s CPU time.
Oct 02 13:07:48 compute-1 systemd-machined[188247]: Machine qemu-89-instance-000000bf terminated.
Oct 02 13:07:48 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : haproxy version is 2.8.14-c23fe91
Oct 02 13:07:48 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [NOTICE]   (302545) : path to executable is /usr/sbin/haproxy
Oct 02 13:07:48 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [ALERT]    (302545) : Current worker (302547) exited with code 143 (Terminated)
Oct 02 13:07:48 compute-1 neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07[302541]: [WARNING]  (302545) : All workers exited. Exiting... (0)
Oct 02 13:07:48 compute-1 systemd[1]: libpod-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope: Deactivated successfully.
Oct 02 13:07:48 compute-1 podman[302981]: 2025-10-02 13:07:48.65892518 +0000 UTC m=+0.063098162 container died 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.699 2 INFO nova.virt.libvirt.driver [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Instance destroyed successfully.
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.700 2 DEBUG nova.objects.instance [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid aaa891aa-5701-4706-b86f-6216b8cf4c6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:07:48 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708-userdata-shm.mount: Deactivated successfully.
Oct 02 13:07:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-b35f09704061bc163420de8e40766fdfa5b7dc927df56828247008e161b62606-merged.mount: Deactivated successfully.
Oct 02 13:07:48 compute-1 podman[302981]: 2025-10-02 13:07:48.717533681 +0000 UTC m=+0.121706663 container cleanup 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:07:48 compute-1 systemd[1]: libpod-conmon-7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708.scope: Deactivated successfully.
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.743 2 DEBUG nova.virt.libvirt.vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:06:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1820466426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=191,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNdbTOu/iOjmmf1Z2Hg0rSsDt//p7Ch9xVqSyeto6UZ1iRgEh5F6Sri7ZZAdZ8QNt0gViIYuv1XXRkCjzWAk0XpaEE5lLQuYVE2mmjrf+0lOKB7Fd79GB/2z/StvvrkXAQ==',key_name='tempest-TestSecurityGroupsBasicOps-373143354',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:06:35Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-aaf9gv0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:06:35Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=aaa891aa-5701-4706-b86f-6216b8cf4c6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.744 2 DEBUG nova.network.os_vif_util [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.745 2 DEBUG nova.network.os_vif_util [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.746 2 DEBUG os_vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.750 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd38dd09-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.761 2 INFO os_vif [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:40:11:63,bridge_name='br-int',has_traffic_filtering=True,id=fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769,network=Network(0de30c3c-d440-4dcd-8562-bb1990277f07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd38dd09-d0')
Oct 02 13:07:48 compute-1 podman[303019]: 2025-10-02 13:07:48.796529461 +0000 UTC m=+0.047724259 container remove 7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40411305-7687-4ca1-b6a7-75949cf84dc4]: (4, ('Thu Oct  2 01:07:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708)\n7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708\nThu Oct  2 01:07:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 (7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708)\n7000b397b5b16989ef2ef3aa14fe514e4d5de8d104c2018016904c1a66ee5708\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.805 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[10087e24-dc0f-4c77-b575-8422041c103a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.806 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c3c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 kernel: tap0de30c3c-d0: left promiscuous mode
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.824 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b1fad489-6fa3-4f78-a0cd-fa4e381fea64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.849 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[518642b4-b955-4374-99b0-48c4027d20ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.851 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46739500-18ab-4474-bd87-4d038278331c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.871 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6eff55-125b-45ee-a28e-914106428820]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825670, 'reachable_time': 18162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303052, 'error': None, 'target': 'ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.873 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c3c-d440-4dcd-8562-bb1990277f07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:07:48 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:07:48.873 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaae421-2573-4e1a-b17c-ad12607973c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:07:48 compute-1 systemd[1]: run-netns-ovnmeta\x2d0de30c3c\x2dd440\x2d4dcd\x2d8562\x2dbb1990277f07.mount: Deactivated successfully.
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.951 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.952 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG oslo_concurrency.lockutils [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:48 compute-1 nova_compute[230518]: 2025-10-02 13:07:48.953 2 DEBUG nova.compute.manager [req-048a1ec1-8ec1-4f5a-b506-c2fa418589b6 req-11e16413-6b78-43b8-8110-f94adb79be10 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-unplugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:07:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:49.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:50 compute-1 ceph-mon[80926]: pgmap v2851: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 617 KiB/s wr, 57 op/s
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.163 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updated VIF entry in instance network info cache for port fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.163 2 DEBUG nova.network.neutron [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [{"id": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "address": "fa:16:3e:40:11:63", "network": {"id": "0de30c3c-d440-4dcd-8562-bb1990277f07", "bridge": "br-int", "label": "tempest-network-smoke--1283103558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd38dd09-d0", "ovs_interfaceid": "fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.198 2 DEBUG oslo_concurrency.lockutils [req-8c0535cb-e3eb-4de5-ad36-48b762a27f35 req-e1b3e9b9-91a8-44b0-b920-09979f69e154 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-aaa891aa-5701-4706-b86f-6216b8cf4c6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.500 2 INFO nova.virt.libvirt.driver [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deleting instance files /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d_del
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.501 2 INFO nova.virt.libvirt.driver [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deletion of /var/lib/nova/instances/aaa891aa-5701-4706-b86f-6216b8cf4c6d_del complete
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.557 2 INFO nova.compute.manager [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 2.51 seconds to destroy the instance on the hypervisor.
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG oslo.service.loopingcall [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:07:50 compute-1 nova_compute[230518]: 2025-10-02 13:07:50.558 2 DEBUG nova.network.neutron [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:07:50 compute-1 podman[303054]: 2025-10-02 13:07:50.788013686 +0000 UTC m=+0.046149621 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:07:50 compute-1 podman[303073]: 2025-10-02 13:07:50.879954002 +0000 UTC m=+0.067198351 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.151 2 DEBUG nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.151 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG oslo_concurrency.lockutils [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.152 2 DEBUG nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] No waiting events found dispatching network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:07:51 compute-1 nova_compute[230518]: 2025-10-02 13:07:51.152 2 WARNING nova.compute.manager [req-9f22e1fa-ef3c-480d-a983-bc019453af83 req-fe7f2801-63bd-41fd-844b-dc7b0a6dc7f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received unexpected event network-vif-plugged-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 for instance with vm_state active and task_state deleting.
Oct 02 13:07:51 compute-1 ceph-mon[80926]: pgmap v2852: 305 pgs: 305 active+clean; 157 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 24 KiB/s wr, 35 op/s
Oct 02 13:07:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:51.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:51.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.676 2 DEBUG nova.network.neutron [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.701 2 INFO nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Took 2.14 seconds to deallocate network for instance.
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.758 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.758 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.767 2 DEBUG nova.compute.manager [req-82743856-74a0-423c-916c-f7338ce6e985 req-7c663f5b-5872-4a3f-b2e0-e3a63b7068f4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Received event network-vif-deleted-fd38dd09-d07c-4d21-a4cf-f5ef2cd2e769 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:07:52 compute-1 nova_compute[230518]: 2025-10-02 13:07:52.818 2 DEBUG oslo_concurrency.processutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:07:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4249213821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.290 2 DEBUG oslo_concurrency.processutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.296 2 DEBUG nova.compute.provider_tree [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.332 2 DEBUG nova.scheduler.client.report [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.397 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.444 2 INFO nova.scheduler.client.report [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance aaa891aa-5701-4706-b86f-6216b8cf4c6d
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.565 2 DEBUG oslo_concurrency.lockutils [None req-c86edb71-171e-471a-96a0-370fb3ed373e 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "aaa891aa-5701-4706-b86f-6216b8cf4c6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:07:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:53.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:53 compute-1 nova_compute[230518]: 2025-10-02 13:07:53.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:53.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:53 compute-1 ceph-mon[80926]: pgmap v2853: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 16 KiB/s wr, 55 op/s
Oct 02 13:07:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4249213821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:07:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:55 compute-1 nova_compute[230518]: 2025-10-02 13:07:55.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:55 compute-1 ceph-mon[80926]: pgmap v2854: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:55.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:07:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:07:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:57.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:58 compute-1 ceph-mon[80926]: pgmap v2855: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 3.1 KiB/s wr, 45 op/s
Oct 02 13:07:58 compute-1 nova_compute[230518]: 2025-10-02 13:07:58.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:07:59 compute-1 ceph-mon[80926]: pgmap v2856: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:07:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:07:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:07:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:07:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:07:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:07:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:07:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:00 compute-1 nova_compute[230518]: 2025-10-02 13:08:00.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:01 compute-1 ceph-mon[80926]: pgmap v2857: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:08:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:01.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:01 compute-1 podman[303122]: 2025-10-02 13:08:01.844506494 +0000 UTC m=+0.085849997 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 13:08:01 compute-1 podman[303123]: 2025-10-02 13:08:01.84405474 +0000 UTC m=+0.078277189 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:08:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.149 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.149 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.173 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.276 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.277 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.286 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.286 2 INFO nova.compute.claims [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.425 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:03 compute-1 ceph-mon[80926]: pgmap v2858: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.695 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410468.6932428, aaa891aa-5701-4706-b86f-6216b8cf4c6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.696 2 INFO nova.compute.manager [-] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] VM Stopped (Lifecycle Event)
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.719 2 DEBUG nova.compute.manager [None req-472b5e42-9a33-4dbc-b3ef-77d70ab338cb - - - - - -] [instance: aaa891aa-5701-4706-b86f-6216b8cf4c6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:03.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/90573178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.883 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.888 2 DEBUG nova.compute.provider_tree [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.953 2 DEBUG nova.scheduler.client.report [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.985 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:03 compute-1 nova_compute[230518]: 2025-10-02 13:08:03.985 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.120 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.120 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.173 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.255 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.380 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.381 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.381 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating image(s)
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.404 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.433 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.459 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.462 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/90573178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.499 2 DEBUG nova.policy [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '62f4c4b5cc194bd59ca9cc9f1da78a79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '954946ff6b204fba90f767ec67210620', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.524 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.525 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.526 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.526 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.547 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.551 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.817 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.876 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] resizing rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:08:04 compute-1 nova_compute[230518]: 2025-10-02 13:08:04.988 2 DEBUG nova.objects.instance [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'migration_context' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.011 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.012 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ensure instance console log exists: /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.013 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.013 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.014 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:05 compute-1 nova_compute[230518]: 2025-10-02 13:08:05.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:05 compute-1 ceph-mon[80926]: pgmap v2859: 305 pgs: 305 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 10 op/s
Oct 02 13:08:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:08:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3198759157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:08:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:05.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:06 compute-1 nova_compute[230518]: 2025-10-02 13:08:06.210 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Successfully created port: 3994280c-c2c8-4fa7-bc48-f7b048d43015 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:08:07 compute-1 ceph-mon[80926]: pgmap v2860: 305 pgs: 305 active+clean; 150 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 897 KiB/s wr, 24 op/s
Oct 02 13:08:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.310 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Successfully updated port: 3994280c-c2c8-4fa7-bc48-f7b048d43015 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.331 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.331 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.332 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.597 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG nova.compute.manager [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG nova.compute.manager [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:08 compute-1 nova_compute[230518]: 2025-10-02 13:08:08.928 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:09 compute-1 ceph-mon[80926]: pgmap v2861: 305 pgs: 305 active+clean; 181 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 47 op/s
Oct 02 13:08:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:09.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.065 2 DEBUG nova.network.neutron [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.088 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.088 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance network_info: |[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.089 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.089 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.093 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start _get_guest_xml network_info=[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.098 2 WARNING nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.103 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.104 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.109 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.109 2 DEBUG nova.virt.libvirt.host [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.111 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.112 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.113 2 DEBUG nova.virt.hardware [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.116 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/70365932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.564 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.588 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:10 compute-1 nova_compute[230518]: 2025-10-02 13:08:10.592 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/70365932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1373192062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.019 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.021 2 DEBUG nova.virt.libvirt.vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.021 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.022 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.023 2 DEBUG nova.objects.instance [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.041 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <uuid>b4640b6e-b1e0-4168-9970-c5d05a0e1621</uuid>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <name>instance-000000c1</name>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:name>tempest-TestShelveInstance-server-1341299623</nova:name>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:08:10</nova:creationTime>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <nova:port uuid="3994280c-c2c8-4fa7-bc48-f7b048d43015">
Oct 02 13:08:11 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <system>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="serial">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="uuid">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </system>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <os>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </os>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <features>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </features>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk">
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </source>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config">
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </source>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:08:11 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:20:d8:5a"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <target dev="tap3994280c-c2"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log" append="off"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <video>
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </video>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:08:11 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:08:11 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:08:11 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:08:11 compute-1 nova_compute[230518]: </domain>
Oct 02 13:08:11 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Preparing to wait for external event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.043 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.044 2 DEBUG nova.virt.libvirt.vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:04Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.044 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.045 2 DEBUG nova.network.os_vif_util [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.045 2 DEBUG os_vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.046 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.047 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3994280c-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3994280c-c2, col_values=(('external_ids', {'iface-id': '3994280c-c2c8-4fa7-bc48-f7b048d43015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:d8:5a', 'vm-uuid': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:11 compute-1 NetworkManager[44960]: <info>  [1759410491.0526] manager: (tap3994280c-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.057 2 INFO os_vif [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.111 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.112 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.112 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:20:d8:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.112 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Using config drive
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.140 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.672 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating config drive at /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.680 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84dibfsr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:11 compute-1 ceph-mon[80926]: pgmap v2862: 305 pgs: 305 active+clean; 204 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 64 op/s
Oct 02 13:08:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1373192062' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.829 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp84dibfsr" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.864 2 DEBUG nova.storage.rbd_utils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:11 compute-1 nova_compute[230518]: 2025-10-02 13:08:11.868 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:11.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.062 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.063 2 DEBUG nova.network.neutron [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.066 2 DEBUG oslo_concurrency.processutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.066 2 INFO nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting local config drive /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config because it was imported into RBD.
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.090 2 DEBUG oslo_concurrency.lockutils [req-b543f0b4-8aaf-4979-845a-5762cf07cc36 req-bfe41cde-0b81-4a31-98c0-9d92191b99f0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:12 compute-1 kernel: tap3994280c-c2: entered promiscuous mode
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.1134] manager: (tap3994280c-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 ovn_controller[129257]: 2025-10-02T13:08:12Z|00791|binding|INFO|Claiming lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 for this chassis.
Oct 02 13:08:12 compute-1 ovn_controller[129257]: 2025-10-02T13:08:12Z|00792|binding|INFO|3994280c-c2c8-4fa7-bc48-f7b048d43015: Claiming fa:16:3e:20:d8:5a 10.100.0.6
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.126 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.127 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.128 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.140 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[464d609d-e5ec-4db6-b356-36c33111689a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.141 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.142 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.143 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1a95e44d-6dbb-4fbf-9605-baf401fc7973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 systemd-udevd[303489]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.146 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95091073-55b1-4981-b2bf-da474413c89f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.158 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[be5af1e5-f7c5-43f5-b261-00e3d4272e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 systemd-machined[188247]: New machine qemu-90-instance-000000c1.
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.1662] device (tap3994280c-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.1681] device (tap3994280c-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 systemd[1]: Started Virtual Machine qemu-90-instance-000000c1.
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.181 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4eba22-fd0a-4c42-8af1-2f533b06865e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_controller[129257]: 2025-10-02T13:08:12Z|00793|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 ovn-installed in OVS
Oct 02 13:08:12 compute-1 ovn_controller[129257]: 2025-10-02T13:08:12Z|00794|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 up in Southbound
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.212 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5183f27f-39b4-41fb-acf8-aae54ff77c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.216 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a46d5653-61cd-45ac-8074-04f5465e3d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.2192] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/366)
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.249 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[638f1c24-eb97-4a36-bf0a-89848a0e9d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.251 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[80bd4acc-cbcf-42a1-94bf-3e97b1f8c1b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.2708] device (tap4223a8cc-f0): carrier: link connected
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.275 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a9beca03-bfce-403c-aba0-4b8912c2e363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.289 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[60b9d446-1b35-4cc2-b038-b3eef08450b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835582, 'reachable_time': 27641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303522, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e425e4-49a0-4591-ad0e-893092422640]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 835582, 'tstamp': 835582}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303523, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.320 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[44254b1d-03e4-406b-b031-1d412cc35cda]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 239], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835582, 'reachable_time': 27641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303524, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a852d09c-277f-4a3a-9fed-caec45b2468c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.401 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05761391-16b8-4cc1-8d3a-1100b86dbaaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.402 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.403 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.403 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 NetworkManager[44960]: <info>  [1759410492.4059] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Oct 02 13:08:12 compute-1 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.411 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 ovn_controller[129257]: 2025-10-02T13:08:12Z|00795|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.416 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.416 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd894d39-663e-4c36-9daf-0c1156c6a5bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.417 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:08:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:12.419 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:08:12 compute-1 nova_compute[230518]: 2025-10-02 13:08:12.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:12 compute-1 podman[303598]: 2025-10-02 13:08:12.757599508 +0000 UTC m=+0.053362306 container create a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 13:08:12 compute-1 systemd[1]: Started libpod-conmon-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope.
Oct 02 13:08:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/695871044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:12 compute-1 podman[303598]: 2025-10-02 13:08:12.729439045 +0000 UTC m=+0.025201833 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:08:12 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:08:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eedb82cd50d6ea7a9033c769ee0024227330ee31a67a92bbebc2e547b0c3439/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:08:12 compute-1 podman[303598]: 2025-10-02 13:08:12.866511479 +0000 UTC m=+0.162274267 container init a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:08:12 compute-1 podman[303598]: 2025-10-02 13:08:12.872580369 +0000 UTC m=+0.168343147 container start a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:08:12 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : New worker (303619) forked
Oct 02 13:08:12 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : Loading success.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.002 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.0019891, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.003 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Started (Lifecycle Event)
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.059 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.064 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.0024707, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.064 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Paused (Lifecycle Event)
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.094 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.098 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.143 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.349 2 DEBUG nova.compute.manager [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.349 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.350 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.350 2 DEBUG oslo_concurrency.lockutils [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.351 2 DEBUG nova.compute.manager [req-ca0a7954-027b-43a2-80cd-4356718226da req-0229a3a0-9c15-44a3-956a-6c2d3368d252 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Processing event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.352 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.356 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410493.3560207, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.356 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Resumed (Lifecycle Event)
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.358 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.362 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance spawned successfully.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.362 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.379 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.386 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.391 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.392 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.392 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.393 2 DEBUG nova.virt.libvirt.driver [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.433 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.496 2 INFO nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 9.12 seconds to spawn the instance on the hypervisor.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.497 2 DEBUG nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.579 2 INFO nova.compute.manager [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 10.34 seconds to build instance.
Oct 02 13:08:13 compute-1 nova_compute[230518]: 2025-10-02 13:08:13.604 2 DEBUG oslo_concurrency.lockutils [None req-b3808852-0de8-4acb-a4ea-f20bc878a868 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.455s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:13.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:13 compute-1 ceph-mon[80926]: pgmap v2863: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 72 op/s
Oct 02 13:08:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:08:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421613884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2421613884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.484 2 DEBUG nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.484 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG oslo_concurrency.lockutils [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.485 2 DEBUG nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:15 compute-1 nova_compute[230518]: 2025-10-02 13:08:15.485 2 WARNING nova.compute.manager [req-57496bff-35ef-4676-9def-0697e4f8decd req-f9997010-796c-4a86-957d-114aaabf0f8f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state None.
Oct 02 13:08:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:16 compute-1 ceph-mon[80926]: pgmap v2864: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Oct 02 13:08:16 compute-1 nova_compute[230518]: 2025-10-02 13:08:16.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:17.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:18 compute-1 NetworkManager[44960]: <info>  [1759410498.0140] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Oct 02 13:08:18 compute-1 NetworkManager[44960]: <info>  [1759410498.0149] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:18 compute-1 ceph-mon[80926]: pgmap v2865: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.5 MiB/s wr, 105 op/s
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:18 compute-1 ovn_controller[129257]: 2025-10-02T13:08:18Z|00796|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.513 2 DEBUG nova.compute.manager [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.513 2 DEBUG nova.compute.manager [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:18 compute-1 nova_compute[230518]: 2025-10-02 13:08:18.514 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2629945941' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:20 compute-1 nova_compute[230518]: 2025-10-02 13:08:20.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:20 compute-1 nova_compute[230518]: 2025-10-02 13:08:20.191 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:20 compute-1 nova_compute[230518]: 2025-10-02 13:08:20.191 2 DEBUG nova.network.neutron [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:20 compute-1 ceph-mon[80926]: pgmap v2866: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 122 op/s
Oct 02 13:08:20 compute-1 nova_compute[230518]: 2025-10-02 13:08:20.357 2 DEBUG oslo_concurrency.lockutils [req-cd9ff4aa-a31e-47af-ad73-3eab8aecaafe req-be6f0dc2-2b9e-4552-b72a-edf47d17cf4d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:21 compute-1 nova_compute[230518]: 2025-10-02 13:08:21.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:21 compute-1 ceph-mon[80926]: pgmap v2867: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 98 op/s
Oct 02 13:08:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/753051723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:21.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:21 compute-1 podman[303631]: 2025-10-02 13:08:21.82513096 +0000 UTC m=+0.064631111 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:08:21 compute-1 podman[303630]: 2025-10-02 13:08:21.861618245 +0000 UTC m=+0.093926530 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct 02 13:08:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.078 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.078 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.079 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.079 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:22 compute-1 sudo[303695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:22 compute-1 sudo[303695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-1 sudo[303695]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-1 sudo[303720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:08:22 compute-1 sudo[303720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-1 sudo[303720]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-1 sudo[303745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:22 compute-1 sudo[303745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-1 sudo[303745]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/218103477' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:22 compute-1 sudo[303770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:08:22 compute-1 sudo[303770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1332190033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.585 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.660 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.661 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.880 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.881 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4093MB free_disk=20.961849212646484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.961 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance b4640b6e-b1e0-4168-9970-c5d05a0e1621 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.961 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.962 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:08:22 compute-1 sudo[303770]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:22 compute-1 nova_compute[230518]: 2025-10-02 13:08:22.997 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3766980705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-1 nova_compute[230518]: 2025-10-02 13:08:23.463 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:23 compute-1 nova_compute[230518]: 2025-10-02 13:08:23.474 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:23 compute-1 nova_compute[230518]: 2025-10-02 13:08:23.491 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:23 compute-1 nova_compute[230518]: 2025-10-02 13:08:23.511 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:08:23 compute-1 nova_compute[230518]: 2025-10-02 13:08:23.511 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:23 compute-1 ceph-mon[80926]: pgmap v2868: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 95 op/s
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1332190033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3757053216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:08:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3766980705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:23.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:08:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:08:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:25 compute-1 nova_compute[230518]: 2025-10-02 13:08:25.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:25.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:08:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:08:26 compute-1 nova_compute[230518]: 2025-10-02 13:08:26.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:26 compute-1 ceph-mon[80926]: pgmap v2869: 305 pgs: 305 active+clean; 237 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 495 KiB/s wr, 86 op/s
Oct 02 13:08:27 compute-1 nova_compute[230518]: 2025-10-02 13:08:27.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:27 compute-1 nova_compute[230518]: 2025-10-02 13:08:27.512 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:27 compute-1 nova_compute[230518]: 2025-10-02 13:08:27.513 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:08:27 compute-1 ceph-mon[80926]: pgmap v2870: 305 pgs: 305 active+clean; 263 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.9 MiB/s wr, 149 op/s
Oct 02 13:08:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/689967176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:27.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:28 compute-1 nova_compute[230518]: 2025-10-02 13:08:28.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:28 compute-1 nova_compute[230518]: 2025-10-02 13:08:28.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:28 compute-1 ovn_controller[129257]: 2025-10-02T13:08:28Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:20:d8:5a 10.100.0.6
Oct 02 13:08:28 compute-1 ovn_controller[129257]: 2025-10-02T13:08:28Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:d8:5a 10.100.0.6
Oct 02 13:08:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1473500295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:29 compute-1 nova_compute[230518]: 2025-10-02 13:08:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:29 compute-1 ceph-mon[80926]: pgmap v2871: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 145 op/s
Oct 02 13:08:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:29.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:29 compute-1 nova_compute[230518]: 2025-10-02 13:08:29.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:29.910 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:29.913 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:08:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:29.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:30 compute-1 nova_compute[230518]: 2025-10-02 13:08:30.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-1 nova_compute[230518]: 2025-10-02 13:08:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:31 compute-1 nova_compute[230518]: 2025-10-02 13:08:31.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:31 compute-1 nova_compute[230518]: 2025-10-02 13:08:31.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2378719045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:31.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.261 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.261 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.262 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:08:32 compute-1 nova_compute[230518]: 2025-10-02 13:08:32.262 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:32 compute-1 ceph-mon[80926]: pgmap v2872: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.4 MiB/s wr, 130 op/s
Oct 02 13:08:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2021696205' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:32 compute-1 podman[303849]: 2025-10-02 13:08:32.8568751 +0000 UTC m=+0.097468061 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:08:32 compute-1 podman[303850]: 2025-10-02 13:08:32.898626701 +0000 UTC m=+0.138169190 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:08:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:34.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:34 compute-1 ceph-mon[80926]: pgmap v2873: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct 02 13:08:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:35 compute-1 nova_compute[230518]: 2025-10-02 13:08:35.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:35 compute-1 nova_compute[230518]: 2025-10-02 13:08:35.432 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:35 compute-1 nova_compute[230518]: 2025-10-02 13:08:35.449 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:35 compute-1 nova_compute[230518]: 2025-10-02 13:08:35.449 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:08:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:35.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:36 compute-1 ceph-mon[80926]: pgmap v2874: 305 pgs: 305 active+clean; 288 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 140 op/s
Oct 02 13:08:36 compute-1 nova_compute[230518]: 2025-10-02 13:08:36.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:37 compute-1 nova_compute[230518]: 2025-10-02 13:08:37.085 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:37 compute-1 nova_compute[230518]: 2025-10-02 13:08:37.086 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:37 compute-1 nova_compute[230518]: 2025-10-02 13:08:37.086 2 INFO nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shelving
Oct 02 13:08:37 compute-1 nova_compute[230518]: 2025-10-02 13:08:37.105 2 DEBUG nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:08:37 compute-1 ceph-mon[80926]: pgmap v2875: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct 02 13:08:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:37.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:38 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:38.916 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/769836320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:39 compute-1 sudo[303891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:08:39 compute-1 sudo[303891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:39 compute-1 sudo[303891]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:39 compute-1 sudo[303916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:08:39 compute-1 sudo[303916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:08:39 compute-1 sudo[303916]: pam_unix(sudo:session): session closed for user root
Oct 02 13:08:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:39.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:40.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:40 compute-1 ceph-mon[80926]: pgmap v2876: 305 pgs: 305 active+clean; 314 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 109 op/s
Oct 02 13:08:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:40 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:08:40 compute-1 nova_compute[230518]: 2025-10-02 13:08:40.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:40 compute-1 kernel: tap3994280c-c2 (unregistering): left promiscuous mode
Oct 02 13:08:40 compute-1 NetworkManager[44960]: <info>  [1759410520.8636] device (tap3994280c-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:08:40 compute-1 ovn_controller[129257]: 2025-10-02T13:08:40Z|00797|binding|INFO|Releasing lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 from this chassis (sb_readonly=0)
Oct 02 13:08:40 compute-1 nova_compute[230518]: 2025-10-02 13:08:40.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:40 compute-1 ovn_controller[129257]: 2025-10-02T13:08:40Z|00798|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 down in Southbound
Oct 02 13:08:40 compute-1 ovn_controller[129257]: 2025-10-02T13:08:40Z|00799|binding|INFO|Removing iface tap3994280c-c2 ovn-installed in OVS
Oct 02 13:08:40 compute-1 nova_compute[230518]: 2025-10-02 13:08:40.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:40 compute-1 nova_compute[230518]: 2025-10-02 13:08:40.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.934 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:08:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.936 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis
Oct 02 13:08:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.939 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:08:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.940 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4148c109-86b5-48c2-88f6-ef6168f768a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:40.941 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore
Oct 02 13:08:40 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Oct 02 13:08:40 compute-1 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000c1.scope: Consumed 14.571s CPU time.
Oct 02 13:08:40 compute-1 systemd-machined[188247]: Machine qemu-90-instance-000000c1 terminated.
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : haproxy version is 2.8.14-c23fe91
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [NOTICE]   (303617) : path to executable is /usr/sbin/haproxy
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : Exiting Master process...
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : Exiting Master process...
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [ALERT]    (303617) : Current worker (303619) exited with code 143 (Terminated)
Oct 02 13:08:41 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[303613]: [WARNING]  (303617) : All workers exited. Exiting... (0)
Oct 02 13:08:41 compute-1 systemd[1]: libpod-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope: Deactivated successfully.
Oct 02 13:08:41 compute-1 podman[303965]: 2025-10-02 13:08:41.116545681 +0000 UTC m=+0.073114236 container died a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.161 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance shutdown successfully after 4 seconds.
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.169 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.170 2 DEBUG nova.objects.instance [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.185 2 DEBUG nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.186 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.186 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.187 2 DEBUG oslo_concurrency.lockutils [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:41 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991-userdata-shm.mount: Deactivated successfully.
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.187 2 DEBUG nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.188 2 WARNING nova.compute.manager [req-652a29b6-3672-40dc-8d90-88dce1dab71f req-ae40a45c-d962-4eeb-9ffc-429ca794b263 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state shelving.
Oct 02 13:08:41 compute-1 systemd[1]: var-lib-containers-storage-overlay-7eedb82cd50d6ea7a9033c769ee0024227330ee31a67a92bbebc2e547b0c3439-merged.mount: Deactivated successfully.
Oct 02 13:08:41 compute-1 podman[303965]: 2025-10-02 13:08:41.247691249 +0000 UTC m=+0.204259794 container cleanup a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:08:41 compute-1 systemd[1]: libpod-conmon-a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991.scope: Deactivated successfully.
Oct 02 13:08:41 compute-1 podman[304007]: 2025-10-02 13:08:41.399944911 +0000 UTC m=+0.117193282 container remove a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2)
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.407 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0a2d1e-7639-4120-8609-577972d5433a]: (4, ('Thu Oct  2 01:08:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991)\na942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991\nThu Oct  2 01:08:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (a942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991)\na942cce4339f154eb3cc046f044794b8321484476a6aa6ca1a9af444efa2e991\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.408 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b7165933-d276-44d4-9e71-958dbb923fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.409 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:41 compute-1 kernel: tap4223a8cc-f0: left promiscuous mode
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.435 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3936b692-9e63-45ba-97d8-ace66fe93d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.465 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c22ca51d-6ed9-446b-946c-86285d3dff42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.466 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2eee297b-4c44-4ae6-87a8-d4a481cf6d3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.486 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40c1b003-c581-4056-b308-ef3ea44326b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 835576, 'reachable_time': 42302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304027, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.491 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:08:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:08:41.492 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[073c0d3e-80ff-4e3a-a6f7-9f0e6d077e45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.700 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Beginning cold snapshot process
Oct 02 13:08:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:41.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:41 compute-1 ceph-mon[80926]: pgmap v2877: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 688 KiB/s rd, 4.4 MiB/s wr, 115 op/s
Oct 02 13:08:41 compute-1 nova_compute[230518]: 2025-10-02 13:08:41.911 2 DEBUG nova.virt.libvirt.imagebackend [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No parent info for 423b8b5f-aab8-418b-8fad-d82c90818bdd; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Oct 02 13:08:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:42 compute-1 nova_compute[230518]: 2025-10-02 13:08:42.153 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] creating snapshot(ca0adbc11bba40809fd8b2643fe82da3) on rbd image(b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:08:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e360 e360: 3 total, 3 up, 3 in
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.333 2 DEBUG nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG oslo_concurrency.lockutils [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.334 2 DEBUG nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.334 2 WARNING nova.compute.manager [req-90067b56-fb71-47bd-9580-9381d733ef1f req-4d048459-cc40-40f3-ab13-6ce3b8970385 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state shelving_image_uploading.
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.435 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] cloning vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk@ca0adbc11bba40809fd8b2643fe82da3 to images/c7b69b23-2de1-4a42-80f6-94ba898e82eb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.465 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:08:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:43 compute-1 nova_compute[230518]: 2025-10-02 13:08:43.867 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] flattening images/c7b69b23-2de1-4a42-80f6-94ba898e82eb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:08:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:44 compute-1 ceph-mon[80926]: pgmap v2878: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:44 compute-1 ceph-mon[80926]: osdmap e360: 3 total, 3 up, 3 in
Oct 02 13:08:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3950454878' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2802699572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:08:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:45 compute-1 nova_compute[230518]: 2025-10-02 13:08:45.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:45.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:45 compute-1 ceph-mon[80926]: pgmap v2880: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.7 MiB/s wr, 188 op/s
Oct 02 13:08:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:46.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:46 compute-1 nova_compute[230518]: 2025-10-02 13:08:46.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:47 compute-1 nova_compute[230518]: 2025-10-02 13:08:47.640 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] removing snapshot(ca0adbc11bba40809fd8b2643fe82da3) on rbd image(b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:08:47 compute-1 ceph-mon[80926]: pgmap v2881: 305 pgs: 305 active+clean; 409 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 203 op/s
Oct 02 13:08:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:48.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e361 e361: 3 total, 3 up, 3 in
Oct 02 13:08:48 compute-1 nova_compute[230518]: 2025-10-02 13:08:48.690 2 DEBUG nova.storage.rbd_utils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] creating snapshot(snap) on rbd image(c7b69b23-2de1-4a42-80f6-94ba898e82eb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct 02 13:08:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:49 compute-1 ceph-mon[80926]: pgmap v2882: 305 pgs: 305 active+clean; 445 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.1 MiB/s wr, 236 op/s
Oct 02 13:08:49 compute-1 ceph-mon[80926]: osdmap e361: 3 total, 3 up, 3 in
Oct 02 13:08:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:50 compute-1 nova_compute[230518]: 2025-10-02 13:08:50.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e362 e362: 3 total, 3 up, 3 in
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.897484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530897542, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1259, "num_deletes": 250, "total_data_size": 2639086, "memory_usage": 2682760, "flush_reason": "Manual Compaction"}
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530927733, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 1088282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68385, "largest_seqno": 69639, "table_properties": {"data_size": 1083901, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11847, "raw_average_key_size": 20, "raw_value_size": 1074322, "raw_average_value_size": 1901, "num_data_blocks": 82, "num_entries": 565, "num_filter_entries": 565, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410434, "oldest_key_time": 1759410434, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 30320 microseconds, and 6925 cpu microseconds.
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.927800) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 1088282 bytes OK
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.927835) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942835) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942888) EVENT_LOG_v1 {"time_micros": 1759410530942877, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.942912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 2633023, prev total WAL file size 2633023, number of live WAL files 2.
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.943981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(1062KB)], [138(11MB)]
Oct 02 13:08:50 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410530944093, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 13627880, "oldest_snapshot_seqno": -1}
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 9173 keys, 10418281 bytes, temperature: kUnknown
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531015684, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10418281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361359, "index_size": 32873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 240831, "raw_average_key_size": 26, "raw_value_size": 10202859, "raw_average_value_size": 1112, "num_data_blocks": 1250, "num_entries": 9173, "num_filter_entries": 9173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410530, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.015948) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10418281 bytes
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.020064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.2 rd, 145.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(22.1) write-amplify(9.6) OK, records in: 9648, records dropped: 475 output_compression: NoCompression
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.020125) EVENT_LOG_v1 {"time_micros": 1759410531020101, "job": 88, "event": "compaction_finished", "compaction_time_micros": 71656, "compaction_time_cpu_micros": 42192, "output_level": 6, "num_output_files": 1, "total_output_size": 10418281, "num_input_records": 9648, "num_output_records": 9173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531020788, "job": 88, "event": "table_file_deletion", "file_number": 140}
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410531025675, "job": 88, "event": "table_file_deletion", "file_number": 138}
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:50.943794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:08:51.025996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:08:51 compute-1 nova_compute[230518]: 2025-10-02 13:08:51.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:52.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:52 compute-1 ceph-mon[80926]: pgmap v2884: 305 pgs: 305 active+clean; 454 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.2 MiB/s wr, 155 op/s
Oct 02 13:08:52 compute-1 ceph-mon[80926]: osdmap e362: 3 total, 3 up, 3 in
Oct 02 13:08:52 compute-1 podman[304170]: 2025-10-02 13:08:52.834177906 +0000 UTC m=+0.068148721 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:08:52 compute-1 podman[304169]: 2025-10-02 13:08:52.847302558 +0000 UTC m=+0.090517373 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.335 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Snapshot image upload complete
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.336 2 DEBUG nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:53 compute-1 ceph-mon[80926]: pgmap v2886: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 7.5 MiB/s wr, 240 op/s
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.396 2 INFO nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Shelve offloading
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.404 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.405 2 DEBUG nova.compute.manager [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:53 compute-1 nova_compute[230518]: 2025-10-02 13:08:53.408 2 DEBUG nova.network.neutron [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:08:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:54.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:55 compute-1 nova_compute[230518]: 2025-10-02 13:08:55.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:55 compute-1 nova_compute[230518]: 2025-10-02 13:08:55.495 2 DEBUG nova.network.neutron [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 e363: 3 total, 3 up, 3 in
Oct 02 13:08:55 compute-1 nova_compute[230518]: 2025-10-02 13:08:55.524 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:55.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:56.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.163 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410521.1617134, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.164 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Stopped (Lifecycle Event)
Oct 02 13:08:56 compute-1 ceph-mon[80926]: pgmap v2887: 305 pgs: 305 active+clean; 459 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 180 op/s
Oct 02 13:08:56 compute-1 ceph-mon[80926]: osdmap e363: 3 total, 3 up, 3 in
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.199 2 DEBUG nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.202 2 DEBUG nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.222 2 INFO nova.compute.manager [None req-fd3c069e-8535-4973-ac19-ee02d980d737 - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (shelving_offloading). Skip.
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.493 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.493 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.513 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.649 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.650 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.657 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.657 2 INFO nova.compute.claims [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:08:56 compute-1 nova_compute[230518]: 2025-10-02 13:08:56.934 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.221 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.222 2 DEBUG nova.objects.instance [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.239 2 DEBUG nova.virt.libvirt.vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:08:41Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.239 2 DEBUG nova.network.os_vif_util [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.240 2 DEBUG nova.network.os_vif_util [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.241 2 DEBUG os_vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.243 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3994280c-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.247 2 INFO os_vif [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.302 2 DEBUG nova.compute.manager [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.302 2 DEBUG nova.compute.manager [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.303 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:08:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:08:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2061486174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.522 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.528 2 DEBUG nova.compute.provider_tree [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.548 2 DEBUG nova.scheduler.client.report [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.577 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.578 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.625 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.625 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.641 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.656 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.729 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.730 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.731 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating image(s)
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.770 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.800 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.826 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.830 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:57 compute-1 ceph-mon[80926]: pgmap v2889: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 199 op/s
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.900 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.902 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.902 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.903 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.930 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:08:57 compute-1 nova_compute[230518]: 2025-10-02 13:08:57.933 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:08:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:08:58.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:08:58 compute-1 nova_compute[230518]: 2025-10-02 13:08:58.261 2 DEBUG nova.policy [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:08:58 compute-1 nova_compute[230518]: 2025-10-02 13:08:58.872 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:08:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2061486174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:08:58 compute-1 nova_compute[230518]: 2025-10-02 13:08:58.937 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.032 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.033 2 DEBUG nova.network.neutron [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": null, "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3994280c-c2", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.040 2 DEBUG nova.objects.instance [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.053 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.053 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Ensure instance console log exists: /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.054 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.055 2 DEBUG oslo_concurrency.lockutils [req-4e8d1f53-dd51-48b2-accd-3ed2a63dae58 req-f7ce60a8-d77e-464e-98a4-02ac3a6f9055 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.102 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Successfully created port: 15cb070c-0f52-464f-a2b4-8597c15212e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.495 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting instance files /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.496 2 INFO nova.virt.libvirt.driver [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deletion of /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del complete
Oct 02 13:08:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.583 2 INFO nova.scheduler.client.report [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance b4640b6e-b1e0-4168-9970-c5d05a0e1621
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.627 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.627 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:08:59 compute-1 nova_compute[230518]: 2025-10-02 13:08:59.670 2 DEBUG oslo_concurrency.processutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:08:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:08:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:08:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:08:59.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:08:59 compute-1 ceph-mon[80926]: pgmap v2890: 305 pgs: 305 active+clean; 476 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 173 op/s
Oct 02 13:09:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:00.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3712778003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.093 2 DEBUG oslo_concurrency.processutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.099 2 DEBUG nova.compute.provider_tree [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.116 2 DEBUG nova.scheduler.client.report [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.136 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.176 2 DEBUG oslo_concurrency.lockutils [None req-abddf1c1-efbb-4928-bafa-9ef91256ed9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.290 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Successfully updated port: 15cb070c-0f52-464f-a2b4-8597c15212e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.303 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.304 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.304 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.456 2 DEBUG nova.compute.manager [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.457 2 DEBUG nova.compute.manager [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.457 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:00 compute-1 nova_compute[230518]: 2025-10-02 13:09:00.478 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:09:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3712778003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.510 2 DEBUG nova.network.neutron [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.532 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance network_info: |[{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.533 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.536 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start _get_guest_xml network_info=[{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.541 2 WARNING nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.545 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.545 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.548 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.548 2 DEBUG nova.virt.libvirt.host [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.549 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.550 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.551 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.552 2 DEBUG nova.virt.hardware [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.555 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/800651783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:01 compute-1 nova_compute[230518]: 2025-10-02 13:09:01.988 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.009 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.013 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:02.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-1 ceph-mon[80926]: pgmap v2891: 305 pgs: 305 active+clean; 470 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:09:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/800651783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/448831348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.601 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.603 2 DEBUG nova.virt.libvirt.vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.604 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.605 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.607 2 DEBUG nova.objects.instance [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.626 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <uuid>658821a7-5b97-43ad-8fe2-46e5303cf56c</uuid>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <name>instance-000000c5</name>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:name>multiattach-server-1</nova:name>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:09:01</nova:creationTime>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <nova:port uuid="15cb070c-0f52-464f-a2b4-8597c15212e9">
Oct 02 13:09:02 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <system>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="serial">658821a7-5b97-43ad-8fe2-46e5303cf56c</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="uuid">658821a7-5b97-43ad-8fe2-46e5303cf56c</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </system>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <os>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </os>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <features>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </features>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/658821a7-5b97-43ad-8fe2-46e5303cf56c_disk">
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config">
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:02 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:e2:47:21"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <target dev="tap15cb070c-0f"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/console.log" append="off"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <video>
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </video>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:09:02 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:09:02 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:09:02 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:09:02 compute-1 nova_compute[230518]: </domain>
Oct 02 13:09:02 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.627 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Preparing to wait for external event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.628 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.628 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.629 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.630 2 DEBUG nova.virt.libvirt.vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:08:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.630 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.631 2 DEBUG nova.network.os_vif_util [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.632 2 DEBUG os_vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.634 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap15cb070c-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap15cb070c-0f, col_values=(('external_ids', {'iface-id': '15cb070c-0f52-464f-a2b4-8597c15212e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:47:21', 'vm-uuid': '658821a7-5b97-43ad-8fe2-46e5303cf56c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-1 NetworkManager[44960]: <info>  [1759410542.6777] manager: (tap15cb070c-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.682 2 INFO os_vif [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f')
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.727 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.728 2 DEBUG nova.network.neutron [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.760 2 DEBUG oslo_concurrency.lockutils [req-c335f166-70bd-459f-a0cd-4a4f9939d4f1 req-c49673a9-67cf-4d53-9643-6201ea7bcebc 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.875 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.876 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.876 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:e2:47:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.877 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Using config drive
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.905 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.950 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.951 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:02 compute-1 nova_compute[230518]: 2025-10-02 13:09:02.951 2 INFO nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Unshelving
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.044 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.045 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.051 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_requests' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.064 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'numa_topology' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.075 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.075 2 INFO nova.compute.claims [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.180 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.339 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Creating config drive at /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.346 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8wc6ilf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.494 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8wc6ilf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2060950700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.676 2 DEBUG nova.storage.rbd_utils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.680 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.709 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.716 2 DEBUG nova.compute.provider_tree [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.731 2 DEBUG nova.scheduler.client.report [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:03 compute-1 nova_compute[230518]: 2025-10-02 13:09:03.757 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:03 compute-1 podman[304571]: 2025-10-02 13:09:03.817227106 +0000 UTC m=+0.059708726 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 13:09:03 compute-1 ceph-mon[80926]: pgmap v2892: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/448831348' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:03 compute-1 podman[304579]: 2025-10-02 13:09:03.835151399 +0000 UTC m=+0.067242782 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:09:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:03.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.009 2 INFO nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating port 3994280c-c2c8-4fa7-bc48-f7b048d43015 with attributes {'binding:host_id': 'compute-1.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct 02 13:09:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:04.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2060950700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.782 2 DEBUG oslo_concurrency.processutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config 658821a7-5b97-43ad-8fe2-46e5303cf56c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.783 2 INFO nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deleting local config drive /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c/disk.config because it was imported into RBD.
Oct 02 13:09:04 compute-1 kernel: tap15cb070c-0f: entered promiscuous mode
Oct 02 13:09:04 compute-1 NetworkManager[44960]: <info>  [1759410544.8480] manager: (tap15cb070c-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Oct 02 13:09:04 compute-1 ovn_controller[129257]: 2025-10-02T13:09:04Z|00800|binding|INFO|Claiming lport 15cb070c-0f52-464f-a2b4-8597c15212e9 for this chassis.
Oct 02 13:09:04 compute-1 ovn_controller[129257]: 2025-10-02T13:09:04Z|00801|binding|INFO|15cb070c-0f52-464f-a2b4-8597c15212e9: Claiming fa:16:3e:e2:47:21 10.100.0.3
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.875 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:47:21 10.100.0.3'], port_security=['fa:16:3e:e2:47:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '658821a7-5b97-43ad-8fe2-46e5303cf56c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=15cb070c-0f52-464f-a2b4-8597c15212e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.877 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 15cb070c-0f52-464f-a2b4-8597c15212e9 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.878 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.889 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[511cf011-9d9a-4689-aed4-efb88bc59a78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.890 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9001b9c-b1 in ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.893 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9001b9c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2af4a845-7bb7-4ba6-a46b-729baa1f0636]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.894 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f43e6da6-dd29-4be4-be67-819ffe46b60a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 systemd-udevd[304644]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:04 compute-1 ovn_controller[129257]: 2025-10-02T13:09:04Z|00802|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 ovn-installed in OVS
Oct 02 13:09:04 compute-1 ovn_controller[129257]: 2025-10-02T13:09:04Z|00803|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 up in Southbound
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-1 nova_compute[230518]: 2025-10-02 13:09:04.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:04 compute-1 NetworkManager[44960]: <info>  [1759410544.9124] device (tap15cb070c-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:04 compute-1 NetworkManager[44960]: <info>  [1759410544.9133] device (tap15cb070c-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:04 compute-1 systemd-machined[188247]: New machine qemu-91-instance-000000c5.
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.915 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[e60a8cef-870c-4087-bcca-202fd75fabeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.929 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3b30203b-c53d-4bb7-a268-5e6379d74b9f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 systemd[1]: Started Virtual Machine qemu-91-instance-000000c5.
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.960 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a3898d70-adac-4504-9530-d1e227860c8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.964 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6561bce1-e893-40d1-8559-6e1ad210f48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:04 compute-1 NetworkManager[44960]: <info>  [1759410544.9665] manager: (tapd9001b9c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/372)
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:04.999 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3ed019-f1ae-4561-b526-07989629c77c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.002 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0b0cf7-6c95-4316-b540-5ad3169bcc5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 NetworkManager[44960]: <info>  [1759410545.0216] device (tapd9001b9c-b0): carrier: link connected
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.026 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c2370c64-e893-4efc-9664-4048230ec5a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.043 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7fad2d11-cdf0-48ee-88a4-0ee848c2d0d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 31891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304677, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.059 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33996cda-39d1-4388-9cb7-da4b9382e44e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840857, 'tstamp': 840857}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304678, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.077 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d740edd5-ef9b-4c31-9309-04981fd6a44a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 31891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304679, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9665e2-976d-417a-8263-ea88ed190527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.149 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd4f881-36bf-40a5-adbc-f03905cdc177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.150 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.151 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-1 NetworkManager[44960]: <info>  [1759410545.1539] manager: (tapd9001b9c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Oct 02 13:09:05 compute-1 kernel: tapd9001b9c-b0: entered promiscuous mode
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.156 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-1 ovn_controller[129257]: 2025-10-02T13:09:05Z|00804|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.173 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.174 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63743e40-1dd1-4e93-8ff3-a70fddffac09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.175 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/d9001b9c-bca6-4085-a954-1414269e31bc.pid.haproxy
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:09:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:05.175 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'env', 'PROCESS_TAG=haproxy-d9001b9c-bca6-4085-a954-1414269e31bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9001b9c-bca6-4085-a954-1414269e31bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.186 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.187 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.187 2 DEBUG nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.279 2 DEBUG nova.compute.manager [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.279 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG oslo_concurrency.lockutils [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.280 2 DEBUG nova.compute.manager [req-8d524778-9e6f-4a1b-ae17-3db1c6fdd0f9 req-181666de-d980-45c5-b2f7-f7daba72028c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Processing event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:05 compute-1 podman[304748]: 2025-10-02 13:09:05.593762112 +0000 UTC m=+0.097283006 container create c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:09:05 compute-1 podman[304748]: 2025-10-02 13:09:05.518692194 +0000 UTC m=+0.022213108 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.677 2 DEBUG nova.compute.manager [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.678 2 DEBUG nova.compute.manager [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.678 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:05 compute-1 systemd[1]: Started libpod-conmon-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope.
Oct 02 13:09:05 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:09:05 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410f773b118732b26b6feca850b0977a2c84aeb1020cb4d6bcef409aa2a24707/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:05 compute-1 podman[304748]: 2025-10-02 13:09:05.742937926 +0000 UTC m=+0.246458830 container init c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:05 compute-1 podman[304748]: 2025-10-02 13:09:05.748638275 +0000 UTC m=+0.252159169 container start c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:05 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : New worker (304775) forked
Oct 02 13:09:05 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : Loading success.
Oct 02 13:09:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:05.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:05 compute-1 ceph-mon[80926]: pgmap v2893: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.1 MiB/s wr, 169 op/s
Oct 02 13:09:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:09:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4208459991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.980 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.9805005, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.982 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Started (Lifecycle Event)
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.984 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.989 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.992 2 INFO nova.virt.libvirt.driver [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance spawned successfully.
Oct 02 13:09:05 compute-1 nova_compute[230518]: 2025-10-02 13:09:05.992 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.027 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.031 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.035 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.035 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.036 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.037 2 DEBUG nova.virt.libvirt.driver [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:06.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.070 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.071 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.98065, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.072 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Paused (Lifecycle Event)
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.104 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.110 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410545.9878404, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.110 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Resumed (Lifecycle Event)
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.115 2 INFO nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 8.39 seconds to spawn the instance on the hypervisor.
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.115 2 DEBUG nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.127 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.131 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.159 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.178 2 INFO nova.compute.manager [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 9.63 seconds to build instance.
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.202 2 DEBUG oslo_concurrency.lockutils [None req-ad59d934-9f3a-43a0-8b40-de7b7d3a92b2 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.293 2 DEBUG nova.network.neutron [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.320 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.321 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.322 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating image(s)
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.356 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.360 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.362 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.362 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.404 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.431 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.435 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "866094531e2d8a23f188ebf1ca88baa9a196add2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.436 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "866094531e2d8a23f188ebf1ca88baa9a196add2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.722 2 DEBUG nova.virt.libvirt.imagebackend [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.771 2 DEBUG nova.virt.libvirt.imagebackend [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Selected location: {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/c7b69b23-2de1-4a42-80f6-94ba898e82eb/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.771 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] cloning images/c7b69b23-2de1-4a42-80f6-94ba898e82eb@snap to None/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct 02 13:09:06 compute-1 nova_compute[230518]: 2025-10-02 13:09:06.905 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "866094531e2d8a23f188ebf1ca88baa9a196add2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.032 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'migration_context' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.092 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] flattening vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.393 2 DEBUG nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG oslo_concurrency.lockutils [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.394 2 DEBUG nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.394 2 WARNING nova.compute.manager [req-f92b604f-fd0f-4def-a495-3e9b0625c67d req-da64316c-a957-4f3e-8b3a-844a1890e4d9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received unexpected event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with vm_state active and task_state None.
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Image rbd:vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.629 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Ensure instance console log exists: /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.630 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.632 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start _get_guest_xml network_info=[{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:08:37Z,direct_url=<?>,disk_format='raw',id=c7b69b23-2de1-4a42-80f6-94ba898e82eb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1341299623-shelved',owner='954946ff6b204fba90f767ec67210620',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:08:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.637 2 WARNING nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.642 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.642 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.645 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.645 2 DEBUG nova.virt.libvirt.host [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.646 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-02T13:08:37Z,direct_url=<?>,disk_format='raw',id=c7b69b23-2de1-4a42-80f6-94ba898e82eb,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1341299623-shelved',owner='954946ff6b204fba90f767ec67210620',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-02T13:08:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.647 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.648 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.virt.hardware [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.649 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.664 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:07.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.957 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.957 2 DEBUG nova.network.neutron [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:07 compute-1 ceph-mon[80926]: pgmap v2894: 305 pgs: 305 active+clean; 481 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 4.6 MiB/s wr, 161 op/s
Oct 02 13:09:07 compute-1 nova_compute[230518]: 2025-10-02 13:09:07.979 2 DEBUG oslo_concurrency.lockutils [req-021527a2-04a4-4afd-b626-56f58bfa2ca0 req-510901cc-4f13-4ced-86c7-439bd9d4a3b9 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:08.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2569559011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.111 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.140 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.145 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3346000006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.570 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.571 2 DEBUG nova.virt.libvirt.vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='c7b69b23-2de1-4a42-80f6-94ba898e82eb',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:03Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.572 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.573 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.574 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.587 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <uuid>b4640b6e-b1e0-4168-9970-c5d05a0e1621</uuid>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <name>instance-000000c1</name>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:name>tempest-TestShelveInstance-server-1341299623</nova:name>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:09:07</nova:creationTime>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:user uuid="62f4c4b5cc194bd59ca9cc9f1da78a79">tempest-TestShelveInstance-228669170-project-member</nova:user>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:project uuid="954946ff6b204fba90f767ec67210620">tempest-TestShelveInstance-228669170</nova:project>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="c7b69b23-2de1-4a42-80f6-94ba898e82eb"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <nova:port uuid="3994280c-c2c8-4fa7-bc48-f7b048d43015">
Oct 02 13:09:08 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <system>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="serial">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="uuid">b4640b6e-b1e0-4168-9970-c5d05a0e1621</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </system>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <os>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </os>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <features>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </features>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk">
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config">
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:08 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:20:d8:5a"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <target dev="tap3994280c-c2"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/console.log" append="off"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <video>
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </video>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:09:08 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:09:08 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:09:08 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:09:08 compute-1 nova_compute[230518]: </domain>
Oct 02 13:09:08 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.588 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Preparing to wait for external event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.588 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.589 2 DEBUG nova.virt.libvirt.vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='c7b69b23-2de1-4a42-80f6-94ba898e82eb',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:08:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member',shelved_at='2025-10-02T13:08:53.336076',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='c7b69b23-2de1-4a42-80f6-94ba898e82eb'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:03Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.590 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.590 2 DEBUG nova.network.os_vif_util [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.591 2 DEBUG os_vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.592 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3994280c-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3994280c-c2, col_values=(('external_ids', {'iface-id': '3994280c-c2c8-4fa7-bc48-f7b048d43015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:d8:5a', 'vm-uuid': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-1 NetworkManager[44960]: <info>  [1759410548.5973] manager: (tap3994280c-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.603 2 INFO os_vif [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.659 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.660 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.660 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] No VIF found with MAC fa:16:3e:20:d8:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.660 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Using config drive
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.686 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.709 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:08 compute-1 nova_compute[230518]: 2025-10-02 13:09:08.789 2 DEBUG nova.objects.instance [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'keypairs' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2403046575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2569559011' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3346000006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:09.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:09 compute-1 nova_compute[230518]: 2025-10-02 13:09:09.884 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Creating config drive at /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config
Oct 02 13:09:09 compute-1 nova_compute[230518]: 2025-10-02 13:09:09.889 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_o151y3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:09 compute-1 ceph-mon[80926]: pgmap v2895: 305 pgs: 305 active+clean; 456 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 966 KiB/s rd, 4.1 MiB/s wr, 201 op/s
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.025 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1_o151y3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.058 2 DEBUG nova.storage.rbd_utils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] rbd image b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.061 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.255 2 DEBUG oslo_concurrency.processutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config b4640b6e-b1e0-4168-9970-c5d05a0e1621_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.256 2 INFO nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting local config drive /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621/disk.config because it was imported into RBD.
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 kernel: tap3994280c-c2: entered promiscuous mode
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.3030] manager: (tap3994280c-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Oct 02 13:09:10 compute-1 ovn_controller[129257]: 2025-10-02T13:09:10Z|00805|binding|INFO|Claiming lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 for this chassis.
Oct 02 13:09:10 compute-1 ovn_controller[129257]: 2025-10-02T13:09:10Z|00806|binding|INFO|3994280c-c2c8-4fa7-bc48-f7b048d43015: Claiming fa:16:3e:20:d8:5a 10.100.0.6
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.314 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.317 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 bound to our chassis
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.319 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 ovn_controller[129257]: 2025-10-02T13:09:10Z|00807|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 ovn-installed in OVS
Oct 02 13:09:10 compute-1 ovn_controller[129257]: 2025-10-02T13:09:10Z|00808|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 up in Southbound
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 systemd-udevd[305131]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.339 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[87761508-3dff-42e8-b17f-f5d32bf55bdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.339 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4223a8cc-f1 in ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.3457] device (tap3994280c-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.3469] device (tap3994280c-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.348 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4223a8cc-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5559a6-6c4e-4647-ac5c-06f8b833ab57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.350 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4709ff67-1d8b-48aa-b258-d0978b438028]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 systemd-machined[188247]: New machine qemu-92-instance-000000c1.
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.365 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[3759fbef-2999-4f6d-bcb6-3a764429ed44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 systemd[1]: Started Virtual Machine qemu-92-instance-000000c1.
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.391 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[83023efb-6929-4bac-9894-372a2d4b0a8e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.431 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7192e9-8474-48f3-85fe-5e9211b7bf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 systemd-udevd[305136]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.4444] manager: (tap4223a8cc-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.443 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8458220-76c5-4170-9062-2b00b04dd98f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.486 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[efda7c95-9cfd-45c4-b5ac-164cab797f6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.490 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34f49a6e-4ac7-4765-a472-ed9103eeaaf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.5136] device (tap4223a8cc-f0): carrier: link connected
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.519 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f8081b45-d084-4cee-942c-2724608257d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.538 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e58b64e3-443f-48ca-859f-d17651cb1479]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841407, 'reachable_time': 26594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305166, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.557 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11110ff4-8e59-430f-b437-ae699eb2d9a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:f568'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 841407, 'tstamp': 841407}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305167, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.575 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdea8f6-7ae1-427b-9b5d-e64691ee6121]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4223a8cc-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:f5:68'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841407, 'reachable_time': 26594, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305168, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.611 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[435dba41-5cde-4706-ba5f-88ebe12c6087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.673 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81e7a7fd-0ddd-464a-9a43-862b8c2809c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.674 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.675 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4223a8cc-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:10 compute-1 NetworkManager[44960]: <info>  [1759410550.6781] manager: (tap4223a8cc-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Oct 02 13:09:10 compute-1 kernel: tap4223a8cc-f0: entered promiscuous mode
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.681 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4223a8cc-f0, col_values=(('external_ids', {'iface-id': '97eaefd1-ed23-4787-9782-741cd2cf7e3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 ovn_controller[129257]: 2025-10-02T13:09:10Z|00809|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:09:10 compute-1 nova_compute[230518]: 2025-10-02 13:09:10.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.711 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.712 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfa9969-ea0d-4af1-b5f6-0a7bcb86e740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.713 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/4223a8cc-f72a-428d-accb-3f4210096878.pid.haproxy
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 4223a8cc-f72a-428d-accb-3f4210096878
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:09:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:10.715 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'env', 'PROCESS_TAG=haproxy-4223a8cc-f72a-428d-accb-3f4210096878', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4223a8cc-f72a-428d-accb-3f4210096878.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:09:11 compute-1 podman[305200]: 2025-10-02 13:09:11.078834229 +0000 UTC m=+0.059804288 container create 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:09:11 compute-1 systemd[1]: Started libpod-conmon-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope.
Oct 02 13:09:11 compute-1 podman[305200]: 2025-10-02 13:09:11.039873666 +0000 UTC m=+0.020843725 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:09:11 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:09:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757739b03a8fcb122583c99c39ed2c8eb5489f21e1732e8148a0278838977fc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:09:11 compute-1 podman[305200]: 2025-10-02 13:09:11.279239962 +0000 UTC m=+0.260210051 container init 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:09:11 compute-1 podman[305200]: 2025-10-02 13:09:11.286394717 +0000 UTC m=+0.267364776 container start 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:09:11 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : New worker (305255) forked
Oct 02 13:09:11 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : Loading success.
Oct 02 13:09:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.869 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410551.8691044, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.870 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Started (Lifecycle Event)
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.902 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.907 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410551.870403, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.907 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Paused (Lifecycle Event)
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.930 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.934 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:11 compute-1 nova_compute[230518]: 2025-10-02 13:09:11.955 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:12 compute-1 ceph-mon[80926]: pgmap v2896: 305 pgs: 305 active+clean; 471 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Oct 02 13:09:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:12.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:13 compute-1 ceph-mon[80926]: pgmap v2897: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 7.3 MiB/s wr, 277 op/s
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.371 2 DEBUG nova.compute.manager [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.372 2 DEBUG oslo_concurrency.lockutils [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.373 2 DEBUG nova.compute.manager [req-28e8928b-bd37-4d0c-95f7-2accf9982c21 req-80eea822-b3d4-4ecc-b407-973c55e6194c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Processing event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.373 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.376 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410553.3766344, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.377 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Resumed (Lifecycle Event)
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.378 2 DEBUG nova.virt.libvirt.driver [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.382 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance spawned successfully.
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.406 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.410 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.437 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:13 compute-1 nova_compute[230518]: 2025-10-02 13:09:13.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:14.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 e364: 3 total, 3 up, 3 in
Oct 02 13:09:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.243 2 DEBUG nova.compute.manager [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.307 2 DEBUG oslo_concurrency.lockutils [None req-401433dd-3a60-4f3b-bd18-c85bc8e571f7 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:15 compute-1 ceph-mon[80926]: osdmap e364: 3 total, 3 up, 3 in
Oct 02 13:09:15 compute-1 ceph-mon[80926]: pgmap v2899: 305 pgs: 305 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 6.6 MiB/s wr, 274 op/s
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.536 2 DEBUG nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.537 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.538 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.538 2 DEBUG oslo_concurrency.lockutils [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.539 2 DEBUG nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.539 2 WARNING nova.compute.manager [req-70d3e3ce-2d65-4db3-b828-f8abb4821c6e req-9243aba3-fd3a-4ca5-9182-96070134511c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state active and task_state None.
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.735 2 DEBUG nova.compute.manager [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.735 2 DEBUG nova.compute.manager [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:15 compute-1 nova_compute[230518]: 2025-10-02 13:09:15.736 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:16 compute-1 ovn_controller[129257]: 2025-10-02T13:09:16Z|00810|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:09:16 compute-1 ovn_controller[129257]: 2025-10-02T13:09:16Z|00811|binding|INFO|Releasing lport 97eaefd1-ed23-4787-9782-741cd2cf7e3b from this chassis (sb_readonly=0)
Oct 02 13:09:16 compute-1 nova_compute[230518]: 2025-10-02 13:09:16.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:17 compute-1 nova_compute[230518]: 2025-10-02 13:09:17.304 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:17 compute-1 nova_compute[230518]: 2025-10-02 13:09:17.305 2 DEBUG nova.network.neutron [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:17 compute-1 nova_compute[230518]: 2025-10-02 13:09:17.344 2 DEBUG oslo_concurrency.lockutils [req-0acc5bf2-c1fb-4782-94e4-39bd514e93ba req-b9e08f89-c1ce-49f0-ae03-0951f0afbeaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:17 compute-1 ceph-mon[80926]: pgmap v2900: 305 pgs: 305 active+clean; 439 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 294 op/s
Oct 02 13:09:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:18.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267789736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:18 compute-1 nova_compute[230518]: 2025-10-02 13:09:18.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3267789736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:19 compute-1 ceph-mon[80926]: pgmap v2901: 305 pgs: 305 active+clean; 405 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 4.7 MiB/s wr, 268 op/s
Oct 02 13:09:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:19.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:19 compute-1 nova_compute[230518]: 2025-10-02 13:09:19.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:20.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:20 compute-1 ovn_controller[129257]: 2025-10-02T13:09:20Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:47:21 10.100.0.3
Oct 02 13:09:20 compute-1 ovn_controller[129257]: 2025-10-02T13:09:20Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:47:21 10.100.0.3
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.191 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.191 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.209 2 DEBUG nova.objects.instance [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.270 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 e365: 3 total, 3 up, 3 in
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.453 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.453 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.454 2 INFO nova.compute.manager [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 to /dev/vdb
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.621 2 DEBUG os_brick.utils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.622 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.631 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.632 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[532d4b15-43a4-4993-85b4-3d8640363a57]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.633 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.639 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.639 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[91c6fb38-0679-4170-ab13-d0fbd21382f1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.640 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.647 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.648 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f14c8433-3ab7-45be-8c0e-7c3e2dbf8cf7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.649 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0be3e92d-c162-4107-a11c-bed66dd9a5ac]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.650 2 DEBUG oslo_concurrency.processutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.681 2 DEBUG oslo_concurrency.processutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.684 2 DEBUG os_brick.initiator.connectors.lightos [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.685 2 DEBUG os_brick.utils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:09:20 compute-1 nova_compute[230518]: 2025-10-02 13:09:20.685 2 DEBUG nova.virt.block_device [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating existing volume attachment record: f6d5886f-7bf8-455c-85a2-e2c058fd585c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:09:21 compute-1 ceph-mon[80926]: pgmap v2902: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.1 MiB/s wr, 227 op/s
Oct 02 13:09:21 compute-1 ceph-mon[80926]: osdmap e365: 3 total, 3 up, 3 in
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.539 2 DEBUG nova.objects.instance [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.562 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to attach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.564 2 DEBUG nova.virt.libvirt.guest [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct 02 13:09:21 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   </source>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 13:09:21 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   </auth>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct 02 13:09:21 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 13:09:21 compute-1 nova_compute[230518]: </disk>
Oct 02 13:09:21 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.681 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.682 2 DEBUG nova.virt.libvirt.driver [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:e2:47:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:21 compute-1 nova_compute[230518]: 2025-10-02 13:09:21.880 2 DEBUG oslo_concurrency.lockutils [None req-4a9272d9-b21a-4c6b-b2dc-2ee96c9c99e0 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:22.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1024354763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:23 compute-1 ceph-mon[80926]: pgmap v2904: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 245 op/s
Oct 02 13:09:23 compute-1 nova_compute[230518]: 2025-10-02 13:09:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:23 compute-1 podman[305300]: 2025-10-02 13:09:23.794330921 +0000 UTC m=+0.049682171 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:09:23 compute-1 podman[305299]: 2025-10-02 13:09:23.827002237 +0000 UTC m=+0.082353557 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:09:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:23.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:24.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.082 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1465649545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.624 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.693 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.694 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.698 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.699 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.852 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3811MB free_disk=20.851551055908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.853 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.947 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance b4640b6e-b1e0-4168-9970-c5d05a0e1621 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:09:24 compute-1 nova_compute[230518]: 2025-10-02 13:09:24.948 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.024 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.347 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.348 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.359 2 INFO nova.compute.manager [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Detaching volume 2341c515-f8fa-4cdf-87e9-1faa534d8307
Oct 02 13:09:25 compute-1 ceph-mon[80926]: pgmap v2905: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Oct 02 13:09:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1465649545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/533345457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3051948380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.449 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.454 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.466 2 INFO nova.virt.block_device [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Attempting to driver detach volume 2341c515-f8fa-4cdf-87e9-1faa534d8307 from mountpoint /dev/vdb
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.474 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.479 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Attempting to detach device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.480 2 DEBUG nova.virt.libvirt.guest [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   </source>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]: </disk>
Oct 02 13:09:25 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.486 2 INFO nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the persistent domain config.
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.487 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.487 2 DEBUG nova.virt.libvirt.guest [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-2341c515-f8fa-4cdf-87e9-1faa534d8307">
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   </source>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <serial>2341c515-f8fa-4cdf-87e9-1faa534d8307</serial>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 13:09:25 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:09:25 compute-1 nova_compute[230518]: </disk>
Oct 02 13:09:25 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.502 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.503 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.608 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759410565.6075559, 658821a7-5b97-43ad-8fe2-46e5303cf56c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.609 2 DEBUG nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 658821a7-5b97-43ad-8fe2-46e5303cf56c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.611 2 INFO nova.virt.libvirt.driver [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully detached device vdb from instance 658821a7-5b97-43ad-8fe2-46e5303cf56c from the live domain config.
Oct 02 13:09:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:25.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.902 2 DEBUG nova.objects.instance [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:25 compute-1 nova_compute[230518]: 2025-10-02 13:09:25.938 2 DEBUG oslo_concurrency.lockutils [None req-09baf50a-5443-4078-9874-5163e5eee92f 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.964 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:25 compute-1 ovn_controller[129257]: 2025-10-02T13:09:25Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:20:d8:5a 10.100.0.6
Oct 02 13:09:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:26.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3051948380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2098822834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:27 compute-1 ceph-mon[80926]: pgmap v2906: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 159 op/s
Oct 02 13:09:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:28.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:28 compute-1 nova_compute[230518]: 2025-10-02 13:09:28.502 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:28 compute-1 nova_compute[230518]: 2025-10-02 13:09:28.503 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:28 compute-1 nova_compute[230518]: 2025-10-02 13:09:28.503 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:09:28 compute-1 nova_compute[230518]: 2025-10-02 13:09:28.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:29 compute-1 nova_compute[230518]: 2025-10-02 13:09:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:29 compute-1 nova_compute[230518]: 2025-10-02 13:09:29.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:29 compute-1 ceph-mon[80926]: pgmap v2907: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 862 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Oct 02 13:09:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/191147255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2885161977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:30 compute-1 nova_compute[230518]: 2025-10-02 13:09:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:30.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:30 compute-1 nova_compute[230518]: 2025-10-02 13:09:30.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:31 compute-1 ceph-mon[80926]: pgmap v2908: 305 pgs: 305 active+clean; 438 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 805 KiB/s rd, 933 KiB/s wr, 101 op/s
Oct 02 13:09:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:31.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:32 compute-1 nova_compute[230518]: 2025-10-02 13:09:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:32.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3476324504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1164258632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.226 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.227 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:33 compute-1 ceph-mon[80926]: pgmap v2909: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 740 KiB/s rd, 802 KiB/s wr, 90 op/s
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:33 compute-1 nova_compute[230518]: 2025-10-02 13:09:33.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:33.701 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:33 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:33.704 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:09:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:33.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.079 2 DEBUG nova.compute.manager [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.079 2 DEBUG nova.compute.manager [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing instance network info cache due to event network-changed-3994280c-c2c8-4fa7-bc48-f7b048d43015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.080 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Refreshing network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:34.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.144 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.145 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.145 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.146 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.146 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.147 2 INFO nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Terminating instance
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.148 2 DEBUG nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:09:34 compute-1 kernel: tap3994280c-c2 (unregistering): left promiscuous mode
Oct 02 13:09:34 compute-1 NetworkManager[44960]: <info>  [1759410574.2166] device (tap3994280c-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:09:34 compute-1 ovn_controller[129257]: 2025-10-02T13:09:34Z|00812|binding|INFO|Releasing lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 from this chassis (sb_readonly=0)
Oct 02 13:09:34 compute-1 ovn_controller[129257]: 2025-10-02T13:09:34Z|00813|binding|INFO|Setting lport 3994280c-c2c8-4fa7-bc48-f7b048d43015 down in Southbound
Oct 02 13:09:34 compute-1 ovn_controller[129257]: 2025-10-02T13:09:34Z|00814|binding|INFO|Removing iface tap3994280c-c2 ovn-installed in OVS
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.245 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:d8:5a 10.100.0.6'], port_security=['fa:16:3e:20:d8:5a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4640b6e-b1e0-4168-9970-c5d05a0e1621', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4223a8cc-f72a-428d-accb-3f4210096878', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '954946ff6b204fba90f767ec67210620', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'b142f2e0-15cd-46cd-bd2d-c2af8d42e97a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8308a587-4cdc-4eb3-9fc6-aab7267ec23f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=3994280c-c2c8-4fa7-bc48-f7b048d43015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.246 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 3994280c-c2c8-4fa7-bc48-f7b048d43015 in datapath 4223a8cc-f72a-428d-accb-3f4210096878 unbound from our chassis
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.248 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4223a8cc-f72a-428d-accb-3f4210096878, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.251 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f8fd35-08cc-49eb-bf01-75b12f44b0cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.255 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 namespace which is not needed anymore
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Oct 02 13:09:34 compute-1 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000c1.scope: Consumed 14.504s CPU time.
Oct 02 13:09:34 compute-1 systemd-machined[188247]: Machine qemu-92-instance-000000c1 terminated.
Oct 02 13:09:34 compute-1 podman[305392]: 2025-10-02 13:09:34.35437066 +0000 UTC m=+0.101239680 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:34 compute-1 podman[305395]: 2025-10-02 13:09:34.356512097 +0000 UTC m=+0.070697960 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.392 2 INFO nova.virt.libvirt.driver [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Instance destroyed successfully.
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.393 2 DEBUG nova.objects.instance [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lazy-loading 'resources' on Instance uuid b4640b6e-b1e0-4168-9970-c5d05a0e1621 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.408 2 DEBUG nova.virt.libvirt.vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-02T13:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1341299623',display_name='tempest-TestShelveInstance-server-1341299623',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1341299623',id=193,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAv/W6nS9dywKRKPfI/I8VC3fYq9NYpWuLkQShDmIF9/8AaTGucbYIXWWTw6soOxBIduh/FVz2D47Pgv7ES8PO/armLbuwNtkOQG1B1V9kNoQMvYjaLsOHDz5UTKAiXYBA==',key_name='tempest-TestShelveInstance-573161256',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:15Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='954946ff6b204fba90f767ec67210620',ramdisk_id='',reservation_id='r-qh5aopcq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-228669170',owner_user_name='tempest-TestShelveInstance-228669170-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:15Z,user_data=None,user_id='62f4c4b5cc194bd59ca9cc9f1da78a79',uuid=b4640b6e-b1e0-4168-9970-c5d05a0e1621,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.408 2 DEBUG nova.network.os_vif_util [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converting VIF {"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.409 2 DEBUG nova.network.os_vif_util [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.409 2 DEBUG os_vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : haproxy version is 2.8.14-c23fe91
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.411 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3994280c-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:34 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [NOTICE]   (305244) : path to executable is /usr/sbin/haproxy
Oct 02 13:09:34 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [WARNING]  (305244) : Exiting Master process...
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [ALERT]    (305244) : Current worker (305255) exited with code 143 (Terminated)
Oct 02 13:09:34 compute-1 neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878[305215]: [WARNING]  (305244) : All workers exited. Exiting... (0)
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.418 2 INFO os_vif [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:d8:5a,bridge_name='br-int',has_traffic_filtering=True,id=3994280c-c2c8-4fa7-bc48-f7b048d43015,network=Network(4223a8cc-f72a-428d-accb-3f4210096878),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3994280c-c2')
Oct 02 13:09:34 compute-1 systemd[1]: libpod-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope: Deactivated successfully.
Oct 02 13:09:34 compute-1 podman[305454]: 2025-10-02 13:09:34.424943866 +0000 UTC m=+0.053254074 container died 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:09:34 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd-userdata-shm.mount: Deactivated successfully.
Oct 02 13:09:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-757739b03a8fcb122583c99c39ed2c8eb5489f21e1732e8148a0278838977fc9-merged.mount: Deactivated successfully.
Oct 02 13:09:34 compute-1 podman[305454]: 2025-10-02 13:09:34.4760149 +0000 UTC m=+0.104325118 container cleanup 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:09:34 compute-1 systemd[1]: libpod-conmon-85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd.scope: Deactivated successfully.
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.538 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG oslo_concurrency.lockutils [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.539 2 DEBUG nova.compute.manager [req-6755fa74-c44b-4fde-bd17-c41958e9cdc0 req-d1923353-4b33-4de6-b608-834a263c403b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-unplugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:09:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:34 compute-1 podman[305516]: 2025-10-02 13:09:34.562374872 +0000 UTC m=+0.055743322 container remove 85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.569 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8a74e7ca-f835-448e-a24b-411d7f07aaff]: (4, ('Thu Oct  2 01:09:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd)\n85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd\nThu Oct  2 01:09:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 (85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd)\n85cee7d8fa584c67fec4a9f25f7f9de1428f340633b092133c602e1d9cc2e7dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.572 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a843af5f-3c69-4390-824f-fc3e77cd3028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.573 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4223a8cc-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 kernel: tap4223a8cc-f0: left promiscuous mode
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.600 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e96f9594-a76d-43c7-bf08-d3bcbf3e996b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.636 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac072570-66ff-4fb0-b6d2-6f84d0521c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.639 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[019f4f40-d33d-4113-8caf-5fdcff7aa4b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.668 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[421985fe-2255-4522-b72c-05482f743332]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 841398, 'reachable_time': 20838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305531, 'error': None, 'target': 'ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.673 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4223a8cc-f72a-428d-accb-3f4210096878 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:09:34 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:34.673 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2d4dd0-4d6d-4501-9600-03b478127433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:34 compute-1 systemd[1]: run-netns-ovnmeta\x2d4223a8cc\x2df72a\x2d428d\x2daccb\x2d3f4210096878.mount: Deactivated successfully.
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.929 2 INFO nova.virt.libvirt.driver [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deleting instance files /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.931 2 INFO nova.virt.libvirt.driver [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deletion of /var/lib/nova/instances/b4640b6e-b1e0-4168-9970-c5d05a0e1621_del complete
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.989 2 INFO nova.compute.manager [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 0.84 seconds to destroy the instance on the hypervisor.
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG oslo.service.loopingcall [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:09:34 compute-1 nova_compute[230518]: 2025-10-02 13:09:34.990 2 DEBUG nova.network.neutron [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.282 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.308 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.308 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.347412) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575347471, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 749, "num_deletes": 258, "total_data_size": 1245937, "memory_usage": 1264608, "flush_reason": "Manual Compaction"}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575354789, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 821686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69644, "largest_seqno": 70388, "table_properties": {"data_size": 818091, "index_size": 1374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8425, "raw_average_key_size": 19, "raw_value_size": 810724, "raw_average_value_size": 1850, "num_data_blocks": 61, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410531, "oldest_key_time": 1759410531, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 7422 microseconds, and 4060 cpu microseconds.
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.354841) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 821686 bytes OK
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.354864) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356476) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356489) EVENT_LOG_v1 {"time_micros": 1759410575356484, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.356511) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1241900, prev total WAL file size 1241900, number of live WAL files 2.
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.357111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353136' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(802KB)], [141(10174KB)]
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575357150, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 11239967, "oldest_snapshot_seqno": -1}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 9081 keys, 11112653 bytes, temperature: kUnknown
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575438190, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 11112653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11055223, "index_size": 33637, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 239918, "raw_average_key_size": 26, "raw_value_size": 10897216, "raw_average_value_size": 1200, "num_data_blocks": 1279, "num_entries": 9081, "num_filter_entries": 9081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.438764) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11112653 bytes
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.440584) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.3 rd, 136.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(27.2) write-amplify(13.5) OK, records in: 9611, records dropped: 530 output_compression: NoCompression
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.440619) EVENT_LOG_v1 {"time_micros": 1759410575440601, "job": 90, "event": "compaction_finished", "compaction_time_micros": 81247, "compaction_time_cpu_micros": 27910, "output_level": 6, "num_output_files": 1, "total_output_size": 11112653, "num_input_records": 9611, "num_output_records": 9081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575441146, "job": 90, "event": "table_file_deletion", "file_number": 143}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410575445101, "job": 90, "event": "table_file_deletion", "file_number": 141}
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.357045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:09:35.445193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:09:35 compute-1 ceph-mon[80926]: pgmap v2910: 305 pgs: 305 active+clean; 440 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 39 KiB/s wr, 49 op/s
Oct 02 13:09:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:35.706 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.730 2 DEBUG nova.network.neutron [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.748 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Took 0.76 seconds to deallocate network for instance.
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.821 2 DEBUG nova.compute.manager [req-958775d6-0470-4a49-ac1c-e1984520b1be req-767d7e55-b515-49e0-8fad-47e623662540 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-deleted-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.883 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.883 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:35.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:35 compute-1 nova_compute[230518]: 2025-10-02 13:09:35.959 2 DEBUG oslo_concurrency.processutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.033 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updated VIF entry in instance network info cache for port 3994280c-c2c8-4fa7-bc48-f7b048d43015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.034 2 DEBUG nova.network.neutron [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Updating instance_info_cache with network_info: [{"id": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "address": "fa:16:3e:20:d8:5a", "network": {"id": "4223a8cc-f72a-428d-accb-3f4210096878", "bridge": "br-int", "label": "tempest-TestShelveInstance-1799934733-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "954946ff6b204fba90f767ec67210620", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3994280c-c2", "ovs_interfaceid": "3994280c-c2c8-4fa7-bc48-f7b048d43015", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.059 2 DEBUG oslo_concurrency.lockutils [req-f243112b-1e88-4eb8-813a-a1e99429699f req-3813594a-58ed-465f-b2fc-63c78d590c2e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-b4640b6e-b1e0-4168-9970-c5d05a0e1621" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1801784853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.453 2 DEBUG oslo_concurrency.processutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.461 2 DEBUG nova.compute.provider_tree [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1801784853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.525 2 DEBUG nova.scheduler.client.report [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.666 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.667 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.667 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.668 2 DEBUG oslo_concurrency.lockutils [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.668 2 DEBUG nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] No waiting events found dispatching network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.669 2 WARNING nova.compute.manager [req-73928d21-84ad-4fc0-b098-a9d932382462 req-201a8d70-7b03-4707-a2b3-7b695bdf32e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Received unexpected event network-vif-plugged-3994280c-c2c8-4fa7-bc48-f7b048d43015 for instance with vm_state deleted and task_state None.
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.710 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:36 compute-1 nova_compute[230518]: 2025-10-02 13:09:36.873 2 INFO nova.scheduler.client.report [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Deleted allocations for instance b4640b6e-b1e0-4168-9970-c5d05a0e1621
Oct 02 13:09:37 compute-1 nova_compute[230518]: 2025-10-02 13:09:37.289 2 DEBUG oslo_concurrency.lockutils [None req-3f2a301c-55df-41e8-a0f8-bbad8fb0fb9c 62f4c4b5cc194bd59ca9cc9f1da78a79 954946ff6b204fba90f767ec67210620 - - default default] Lock "b4640b6e-b1e0-4168-9970-c5d05a0e1621" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:37 compute-1 ceph-mon[80926]: pgmap v2911: 305 pgs: 305 active+clean; 437 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Oct 02 13:09:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/329906593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2451403253' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:38.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4010470436' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1224616448' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:39 compute-1 nova_compute[230518]: 2025-10-02 13:09:39.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:39 compute-1 ceph-mon[80926]: pgmap v2912: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Oct 02 13:09:39 compute-1 sudo[305556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:39 compute-1 sudo[305556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 sudo[305556]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-1 sudo[305581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:40 compute-1 sudo[305581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 sudo[305581]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:40.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:40 compute-1 sudo[305606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-1 sudo[305606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 sudo[305606]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-1 sudo[305631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:09:40 compute-1 sudo[305631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 nova_compute[230518]: 2025-10-02 13:09:40.369 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:40 compute-1 sudo[305631]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-1 sudo[305677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:40 compute-1 sudo[305677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 sudo[305677]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:40 compute-1 sudo[305702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:09:40 compute-1 sudo[305702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:40 compute-1 sudo[305702]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:41 compute-1 sudo[305727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:41 compute-1 sudo[305727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:41 compute-1 sudo[305727]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:41 compute-1 sudo[305752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:09:41 compute-1 sudo[305752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:41 compute-1 ceph-mon[80926]: pgmap v2913: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct 02 13:09:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:41 compute-1 sudo[305752]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:42.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:09:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:09:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:44 compute-1 ceph-mon[80926]: pgmap v2914: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 612 KiB/s rd, 3.6 MiB/s wr, 121 op/s
Oct 02 13:09:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:44 compute-1 nova_compute[230518]: 2025-10-02 13:09:44.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG nova.compute.manager [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG nova.compute.manager [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing instance network info cache due to event network-changed-15cb070c-0f52-464f-a2b4-8597c15212e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.059 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.060 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Refreshing network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:45 compute-1 nova_compute[230518]: 2025-10-02 13:09:45.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:46 compute-1 ceph-mon[80926]: pgmap v2915: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 544 KiB/s rd, 3.6 MiB/s wr, 115 op/s
Oct 02 13:09:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:09:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:46.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:09:47 compute-1 nova_compute[230518]: 2025-10-02 13:09:47.533 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated VIF entry in instance network info cache for port 15cb070c-0f52-464f-a2b4-8597c15212e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:47 compute-1 nova_compute[230518]: 2025-10-02 13:09:47.534 2 DEBUG nova.network.neutron [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:47 compute-1 nova_compute[230518]: 2025-10-02 13:09:47.583 2 DEBUG oslo_concurrency.lockutils [req-8c4ae982-b588-47a1-9f52-11202d77bbd8 req-f889f9fd-955a-4230-83fe-11b47c9bdf78 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:47.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:48 compute-1 ceph-mon[80926]: pgmap v2916: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 213 op/s
Oct 02 13:09:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:48.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.549 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.550 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.565 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.647 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.648 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.655 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.656 2 INFO nova.compute.claims [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:09:48 compute-1 nova_compute[230518]: 2025-10-02 13:09:48.779 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:49 compute-1 sudo[305828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:09:49 compute-1 sudo[305828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:49 compute-1 sudo[305828]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:49 compute-1 sudo[305853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:09:49 compute-1 sudo[305853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:09:49 compute-1 sudo[305853]: pam_unix(sudo:session): session closed for user root
Oct 02 13:09:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:09:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/977708829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.214 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.220 2 DEBUG nova.compute.provider_tree [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.245 2 DEBUG nova.scheduler.client.report [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.274 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.276 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.336 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.337 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.390 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410574.388958, b4640b6e-b1e0-4168-9970-c5d05a0e1621 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.391 2 INFO nova.compute.manager [-] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] VM Stopped (Lifecycle Event)
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.393 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.425 2 DEBUG nova.compute.manager [None req-50d4713b-70b7-4692-9594-0693ab21d92e - - - - - -] [instance: b4640b6e-b1e0-4168-9970-c5d05a0e1621] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.426 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.503 2 DEBUG nova.policy [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '156cc6022c70402ab6d194a340b076d5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.529 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.530 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.530 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating image(s)
Oct 02 13:09:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.559 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.590 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.616 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.620 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.692 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.693 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.694 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.694 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.717 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:49 compute-1 nova_compute[230518]: 2025-10-02 13:09:49.720 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de995ad8-07bb-4097-899b-5c79d62a1f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:49 compute-1 ceph-mon[80926]: pgmap v2917: 305 pgs: 305 active+clean; 451 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.1 MiB/s wr, 194 op/s
Oct 02 13:09:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:09:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/977708829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:50.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.167 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 de995ad8-07bb-4097-899b-5c79d62a1f4c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.251 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] resizing rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.295 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Successfully created port: 513c3d66-613d-4626-8ab0-58520113de32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.375 2 DEBUG nova.objects.instance [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.394 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.394 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Ensure instance console log exists: /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.395 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.395 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:50 compute-1 nova_compute[230518]: 2025-10-02 13:09:50.396 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.188 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Successfully updated port: 513c3d66-613d-4626-8ab0-58520113de32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.202 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.203 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.203 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.374 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.523 2 DEBUG nova.compute.manager [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.523 2 DEBUG nova.compute.manager [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:09:51 compute-1 nova_compute[230518]: 2025-10-02 13:09:51.524 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:09:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:52 compute-1 ceph-mon[80926]: pgmap v2918: 305 pgs: 305 active+clean; 475 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.1 MiB/s wr, 172 op/s
Oct 02 13:09:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:52.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.305 2 DEBUG nova.network.neutron [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.326 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.326 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance network_info: |[{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.327 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.327 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.331 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Start _get_guest_xml network_info=[{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.336 2 WARNING nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.342 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.342 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.347 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.347 2 DEBUG nova.virt.libvirt.host [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.349 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.349 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.350 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.350 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.351 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.352 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.353 2 DEBUG nova.virt.hardware [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.356 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3761765269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.829 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.858 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:52 compute-1 nova_compute[230518]: 2025-10-02 13:09:52.862 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:53 compute-1 ceph-mon[80926]: pgmap v2919: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.8 MiB/s wr, 203 op/s
Oct 02 13:09:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3761765269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:09:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2259096278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.326 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.327 2 DEBUG nova.virt.libvirt.vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.328 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.329 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.330 2 DEBUG nova.objects.instance [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'pci_devices' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.360 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <uuid>de995ad8-07bb-4097-899b-5c79d62a1f4c</uuid>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <name>instance-000000c8</name>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:name>multiattach-server-1</nova:name>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:09:52</nova:creationTime>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:user uuid="156cc6022c70402ab6d194a340b076d5">tempest-AttachVolumeMultiAttachTest-2011266702-project-member</nova:user>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:project uuid="9f85b8f387b146d29eabe946c4fbdee8">tempest-AttachVolumeMultiAttachTest-2011266702</nova:project>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <nova:port uuid="513c3d66-613d-4626-8ab0-58520113de32">
Oct 02 13:09:53 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <system>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="serial">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="uuid">de995ad8-07bb-4097-899b-5c79d62a1f4c</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </system>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <os>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </os>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <features>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </features>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk">
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config">
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </source>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:09:53 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:9a:bc:4e"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <target dev="tap513c3d66-61"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/console.log" append="off"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <video>
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </video>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:09:53 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:09:53 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:09:53 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:09:53 compute-1 nova_compute[230518]: </domain>
Oct 02 13:09:53 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.362 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Preparing to wait for external event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.363 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.364 2 DEBUG nova.virt.libvirt.vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:09:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.364 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.365 2 DEBUG nova.network.os_vif_util [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.366 2 DEBUG os_vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.368 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap513c3d66-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap513c3d66-61, col_values=(('external_ids', {'iface-id': '513c3d66-613d-4626-8ab0-58520113de32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:bc:4e', 'vm-uuid': 'de995ad8-07bb-4097-899b-5c79d62a1f4c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-1 NetworkManager[44960]: <info>  [1759410593.3747] manager: (tap513c3d66-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.381 2 INFO os_vif [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.452 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.452 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.453 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9a:bc:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.453 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Using config drive
Oct 02 13:09:53 compute-1 nova_compute[230518]: 2025-10-02 13:09:53.491 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:53.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/527186031' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2259096278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.294 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Creating config drive at /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.301 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurrrzzg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.339 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.340 2 DEBUG nova.network.neutron [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.359 2 DEBUG oslo_concurrency.lockutils [req-4b12e57a-b837-4399-8362-7710d2042208 req-3ceb7911-3ce1-4d1c-8e79-bfec6b45ea17 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.450 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplurrrzzg" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.476 2 DEBUG nova.storage.rbd_utils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] rbd image de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.480 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:09:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.786 2 DEBUG oslo_concurrency.processutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config de995ad8-07bb-4097-899b-5c79d62a1f4c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.787 2 INFO nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Deleting local config drive /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/disk.config because it was imported into RBD.
Oct 02 13:09:54 compute-1 podman[306170]: 2025-10-02 13:09:54.840177489 +0000 UTC m=+0.075518572 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:09:54 compute-1 kernel: tap513c3d66-61: entered promiscuous mode
Oct 02 13:09:54 compute-1 NetworkManager[44960]: <info>  [1759410594.8503] manager: (tap513c3d66-61): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Oct 02 13:09:54 compute-1 ovn_controller[129257]: 2025-10-02T13:09:54Z|00815|binding|INFO|Claiming lport 513c3d66-613d-4626-8ab0-58520113de32 for this chassis.
Oct 02 13:09:54 compute-1 ovn_controller[129257]: 2025-10-02T13:09:54Z|00816|binding|INFO|513c3d66-613d-4626-8ab0-58520113de32: Claiming fa:16:3e:9a:bc:4e 10.100.0.4
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.862 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.863 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc bound to our chassis
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.865 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:09:54 compute-1 ovn_controller[129257]: 2025-10-02T13:09:54Z|00817|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 ovn-installed in OVS
Oct 02 13:09:54 compute-1 ovn_controller[129257]: 2025-10-02T13:09:54Z|00818|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 up in Southbound
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:54 compute-1 nova_compute[230518]: 2025-10-02 13:09:54.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:54 compute-1 podman[306169]: 2025-10-02 13:09:54.884236883 +0000 UTC m=+0.116986786 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.885 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2d26ff21-0945-4f3f-9b8f-a8e87e0d7f9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 systemd-udevd[306223]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:09:54 compute-1 systemd-machined[188247]: New machine qemu-93-instance-000000c8.
Oct 02 13:09:54 compute-1 NetworkManager[44960]: <info>  [1759410594.9053] device (tap513c3d66-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:09:54 compute-1 NetworkManager[44960]: <info>  [1759410594.9059] device (tap513c3d66-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:09:54 compute-1 systemd[1]: Started Virtual Machine qemu-93-instance-000000c8.
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.922 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7f72bbd4-b001-43a8-bd86-8e39d3e0ec14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.925 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[20a05a50-73c0-401c-af9e-659ea9b30229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.953 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fe364483-98fd-4c67-b98e-0f14c0f46d88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.969 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[5933e86e-3117-438b-a757-4512527d2073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 19345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306236, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.986 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de5f0608-01f7-4250-bfa0-a9c5e9524d16]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840868, 'tstamp': 840868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306237, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840870, 'tstamp': 840870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306237, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:09:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:54.988 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.023 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:09:55 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:09:55.024 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:09:55 compute-1 ceph-mon[80926]: pgmap v2920: 305 pgs: 305 active+clean; 524 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.8 MiB/s wr, 179 op/s
Oct 02 13:09:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2287145047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.425 2 DEBUG nova.compute.manager [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.426 2 DEBUG oslo_concurrency.lockutils [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:55 compute-1 nova_compute[230518]: 2025-10-02 13:09:55.427 2 DEBUG nova.compute.manager [req-ac51c553-2bf2-4617-87f2-accb534ce7dc req-53251163-ac70-4d28-931e-559a6de52164 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Processing event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:09:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.030 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0298114, de995ad8-07bb-4097-899b-5c79d62a1f4c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.030 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Started (Lifecycle Event)
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.032 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.035 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.037 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance spawned successfully.
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.037 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.058 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.062 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.062 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.063 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.064 2 DEBUG nova.virt.libvirt.driver [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.067 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.097 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.098 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0308108, de995ad8-07bb-4097-899b-5c79d62a1f4c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.098 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Paused (Lifecycle Event)
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.119 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.122 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410596.0343573, de995ad8-07bb-4097-899b-5c79d62a1f4c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.122 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Resumed (Lifecycle Event)
Oct 02 13:09:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:56.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.130 2 INFO nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 6.60 seconds to spawn the instance on the hypervisor.
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.131 2 DEBUG nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.139 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.142 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.162 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.189 2 INFO nova.compute.manager [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Took 7.57 seconds to build instance.
Oct 02 13:09:56 compute-1 nova_compute[230518]: 2025-10-02 13:09:56.203 2 DEBUG oslo_concurrency.lockutils [None req-3ed4c197-1dd8-426b-a89c-fef3087f3735 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:57 compute-1 ceph-mon[80926]: pgmap v2921: 305 pgs: 305 active+clean; 581 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.7 MiB/s wr, 255 op/s
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.549 2 DEBUG nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.549 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG oslo_concurrency.lockutils [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.550 2 DEBUG nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:09:57 compute-1 nova_compute[230518]: 2025-10-02 13:09:57.550 2 WARNING nova.compute.manager [req-ab7b91c4-ef75-45c2-bbb9-20dffb5790c3 req-2579eb34-e4fc-4fa3-b9ff-499a574d6897 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state None.
Oct 02 13:09:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:57.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:09:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:09:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:09:58 compute-1 nova_compute[230518]: 2025-10-02 13:09:58.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:09:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3080379018' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:09:59 compute-1 ceph-mon[80926]: pgmap v2922: 305 pgs: 305 active+clean; 603 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.8 MiB/s wr, 230 op/s
Oct 02 13:09:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:09:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:09:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:09:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:09:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:00 compute-1 nova_compute[230518]: 2025-10-02 13:10:00.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 13:10:01 compute-1 nova_compute[230518]: 2025-10-02 13:10:01.466 2 DEBUG nova.compute.manager [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:01 compute-1 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG nova.compute.manager [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:01 compute-1 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:01 compute-1 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:01 compute-1 nova_compute[230518]: 2025-10-02 13:10:01.467 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:01 compute-1 ceph-mon[80926]: pgmap v2923: 305 pgs: 305 active+clean; 608 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 250 op/s
Oct 02 13:10:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:10:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3483817046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:03 compute-1 nova_compute[230518]: 2025-10-02 13:10:03.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:03 compute-1 ceph-mon[80926]: pgmap v2924: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.8 MiB/s wr, 266 op/s
Oct 02 13:10:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3483817046' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:03 compute-1 nova_compute[230518]: 2025-10-02 13:10:03.937 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:03 compute-1 nova_compute[230518]: 2025-10-02 13:10:03.938 2 DEBUG nova.network.neutron [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:03 compute-1 nova_compute[230518]: 2025-10-02 13:10:03.969 2 DEBUG oslo_concurrency.lockutils [req-c271188b-2b58-4c73-a3d8-47abfcea0920 req-3eb32e5a-df7a-4afc-aa11-d1380ac707af 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:04 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct 02 13:10:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:04 compute-1 nova_compute[230518]: 2025-10-02 13:10:04.656 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:04 compute-1 nova_compute[230518]: 2025-10-02 13:10:04.656 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:04 compute-1 nova_compute[230518]: 2025-10-02 13:10:04.676 2 DEBUG nova.objects.instance [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:04 compute-1 nova_compute[230518]: 2025-10-02 13:10:04.764 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:04 compute-1 podman[306281]: 2025-10-02 13:10:04.820708489 +0000 UTC m=+0.068121199 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:10:04 compute-1 podman[306282]: 2025-10-02 13:10:04.821747002 +0000 UTC m=+0.068897374 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.000 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.001 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.001 2 INFO nova.compute.manager [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attaching volume 8347daf9-f32f-4c50-b89e-df9e913044db to /dev/vdb
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.290 2 DEBUG os_brick.utils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.293 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.309 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.309 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[501b2869-3eb8-4229-b4e0-a129f79ded04]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.313 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.323 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.325 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f4dae2-f967-4671-971e-cb003e5ae290]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.329 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.340 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.340 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b44999-9a3e-4051-a3ed-81e8c697e302]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.343 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[68c67cca-bb6d-4855-9326-357b12507bbb]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.344 2 DEBUG oslo_concurrency.processutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.401 2 DEBUG oslo_concurrency.processutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "nvme version" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.404 2 DEBUG os_brick.initiator.connectors.lightos [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.405 2 DEBUG os_brick.utils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:10:05 compute-1 nova_compute[230518]: 2025-10-02 13:10:05.405 2 DEBUG nova.virt.block_device [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating existing volume attachment record: 090235a9-9281-4043-bb90-ab5bad31a26e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:10:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:05.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:06 compute-1 ceph-mon[80926]: pgmap v2925: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.0 MiB/s wr, 232 op/s
Oct 02 13:10:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:10:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1490262338' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:10:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.280 2 DEBUG nova.objects.instance [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'flavor' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.304 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Attempting to attach volume 8347daf9-f32f-4c50-b89e-df9e913044db with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.307 2 DEBUG nova.virt.libvirt.guest [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-8347daf9-f32f-4c50-b89e-df9e913044db">
Oct 02 13:10:06 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   </source>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 13:10:06 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   </auth>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <serial>8347daf9-f32f-4c50-b89e-df9e913044db</serial>
Oct 02 13:10:06 compute-1 nova_compute[230518]:   <shareable/>
Oct 02 13:10:06 compute-1 nova_compute[230518]: </disk>
Oct 02 13:10:06 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.476 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.477 2 DEBUG nova.virt.libvirt.driver [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] No VIF found with MAC fa:16:3e:9a:bc:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:10:06 compute-1 nova_compute[230518]: 2025-10-02 13:10:06.673 2 DEBUG oslo_concurrency.lockutils [None req-6144fc57-a549-49d9-89f3-aa6a3fc3ceb5 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1997719586' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:07.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:08 compute-1 ceph-mon[80926]: pgmap v2926: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.0 MiB/s wr, 296 op/s
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.591 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.591 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.609 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.690 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.691 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.702 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.703 2 INFO nova.compute.claims [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:10:08 compute-1 nova_compute[230518]: 2025-10-02 13:10:08.987 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/918586300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.401 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.409 2 DEBUG nova.compute.provider_tree [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.429 2 DEBUG nova.scheduler.client.report [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:09 compute-1 ceph-mon[80926]: pgmap v2927: 305 pgs: 305 active+clean; 610 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.2 MiB/s wr, 221 op/s
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.459 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.460 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.501 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.501 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.523 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.552 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:10:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.650 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.651 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.651 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating image(s)
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.672 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.701 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.725 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.728 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.796 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.796 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.797 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.797 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.824 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.828 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:09 compute-1 nova_compute[230518]: 2025-10-02 13:10:09.919 2 DEBUG nova.policy [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:10:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:09.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:10 compute-1 ovn_controller[129257]: 2025-10-02T13:10:10Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9a:bc:4e 10.100.0.4
Oct 02 13:10:10 compute-1 ovn_controller[129257]: 2025-10-02T13:10:10Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9a:bc:4e 10.100.0.4
Oct 02 13:10:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.160 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.246 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.372 2 DEBUG nova.objects.instance [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.387 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.387 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Ensure instance console log exists: /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.388 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.388 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:10 compute-1 nova_compute[230518]: 2025-10-02 13:10:10.389 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/918586300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:11 compute-1 nova_compute[230518]: 2025-10-02 13:10:11.306 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Successfully created port: faebc160-66b6-4ba2-ab02-2b0098eef804 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:10:11 compute-1 ceph-mon[80926]: pgmap v2928: 305 pgs: 305 active+clean; 620 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 712 KiB/s wr, 176 op/s
Oct 02 13:10:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1018353601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.253 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Successfully updated port: faebc160-66b6-4ba2-ab02-2b0098eef804 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.266 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.266 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.267 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.366 2 DEBUG nova.compute.manager [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.367 2 DEBUG nova.compute.manager [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.367 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:12 compute-1 nova_compute[230518]: 2025-10-02 13:10:12.432 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.589 2 DEBUG nova.network.neutron [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:13 compute-1 ceph-mon[80926]: pgmap v2929: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance network_info: |[{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.609 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.610 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.615 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start _get_guest_xml network_info=[{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.621 2 WARNING nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.630 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.631 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.635 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.635 2 DEBUG nova.virt.libvirt.host [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.637 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.638 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.639 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.640 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.641 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.642 2 DEBUG nova.virt.hardware [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:10:13 compute-1 nova_compute[230518]: 2025-10-02 13:10:13.646 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:13.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:10:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1576777115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.080 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.104 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.107 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:10:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2097443980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.554 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.555 2 DEBUG nova.virt.libvirt.vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:09Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.556 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.556 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.557 2 DEBUG nova.objects.instance [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.575 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <uuid>28cee0c6-0008-45f8-af11-48abbbbcb22c</uuid>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <name>instance-000000ca</name>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808</nova:name>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:10:13</nova:creationTime>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <nova:port uuid="faebc160-66b6-4ba2-ab02-2b0098eef804">
Oct 02 13:10:14 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <system>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="serial">28cee0c6-0008-45f8-af11-48abbbbcb22c</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="uuid">28cee0c6-0008-45f8-af11-48abbbbcb22c</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </system>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <os>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </os>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <features>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </features>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/28cee0c6-0008-45f8-af11-48abbbbcb22c_disk">
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </source>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config">
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </source>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:10:14 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:4f:46:ff"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <target dev="tapfaebc160-66"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/console.log" append="off"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <video>
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </video>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:10:14 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:10:14 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:10:14 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:10:14 compute-1 nova_compute[230518]: </domain>
Oct 02 13:10:14 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.577 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Preparing to wait for external event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.577 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.578 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.578 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.579 2 DEBUG nova.virt.libvirt.vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:10:09Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.579 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.580 2 DEBUG nova.network.os_vif_util [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.581 2 DEBUG os_vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.582 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaebc160-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.586 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaebc160-66, col_values=(('external_ids', {'iface-id': 'faebc160-66b6-4ba2-ab02-2b0098eef804', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:46:ff', 'vm-uuid': '28cee0c6-0008-45f8-af11-48abbbbcb22c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:14 compute-1 NetworkManager[44960]: <info>  [1759410614.5887] manager: (tapfaebc160-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.599 2 INFO os_vif [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66')
Oct 02 13:10:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1576777115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2097443980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.675 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.677 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.677 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:4f:46:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.678 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Using config drive
Oct 02 13:10:14 compute-1 nova_compute[230518]: 2025-10-02 13:10:14.704 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.271 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.272 2 DEBUG nova.network.neutron [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.291 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Creating config drive at /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.297 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzdq4t_x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.341 2 DEBUG oslo_concurrency.lockutils [req-0e47e726-765b-4dcf-b992-ff434624593f req-682ea374-4489-4c9c-a341-61f1a182413f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.447 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzdq4t_x" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.479 2 DEBUG nova.storage.rbd_utils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.485 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:15 compute-1 ceph-mon[80926]: pgmap v2930: 305 pgs: 305 active+clean; 675 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 162 op/s
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.711 2 DEBUG oslo_concurrency.processutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config 28cee0c6-0008-45f8-af11-48abbbbcb22c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.226s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.714 2 INFO nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deleting local config drive /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c/disk.config because it was imported into RBD.
Oct 02 13:10:15 compute-1 kernel: tapfaebc160-66: entered promiscuous mode
Oct 02 13:10:15 compute-1 NetworkManager[44960]: <info>  [1759410615.7750] manager: (tapfaebc160-66): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Oct 02 13:10:15 compute-1 systemd-udevd[306666]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:10:15 compute-1 ovn_controller[129257]: 2025-10-02T13:10:15Z|00819|binding|INFO|Claiming lport faebc160-66b6-4ba2-ab02-2b0098eef804 for this chassis.
Oct 02 13:10:15 compute-1 ovn_controller[129257]: 2025-10-02T13:10:15Z|00820|binding|INFO|faebc160-66b6-4ba2-ab02-2b0098eef804: Claiming fa:16:3e:4f:46:ff 10.100.0.14
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.845 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:46:ff 10.100.0.14'], port_security=['fa:16:3e:4f:46:ff 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28cee0c6-0008-45f8-af11-48abbbbcb22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e682aa5-16aa-4884-9ae4-6ca813b9baae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=faebc160-66b6-4ba2-ab02-2b0098eef804) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.846 138374 INFO neutron.agent.ovn.metadata.agent [-] Port faebc160-66b6-4ba2-ab02-2b0098eef804 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 bound to our chassis
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.847 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:10:15 compute-1 ovn_controller[129257]: 2025-10-02T13:10:15Z|00821|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 up in Southbound
Oct 02 13:10:15 compute-1 ovn_controller[129257]: 2025-10-02T13:10:15Z|00822|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 ovn-installed in OVS
Oct 02 13:10:15 compute-1 NetworkManager[44960]: <info>  [1759410615.8523] device (tapfaebc160-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:10:15 compute-1 nova_compute[230518]: 2025-10-02 13:10:15.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:15 compute-1 NetworkManager[44960]: <info>  [1759410615.8542] device (tapfaebc160-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.863 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[de6e7974-b7a2-4437-af20-3200e1a36792]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.864 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54a08602-f1 in ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.866 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54a08602-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c0da7f6b-5b00-49be-993a-71cdf2475869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.867 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a13918e7-b8e3-4edf-b04c-0ea36e32f92c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 systemd-machined[188247]: New machine qemu-94-instance-000000ca.
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.885 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ef137b37-d5ff-4b13-bf4d-515770914e85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 systemd[1]: Started Virtual Machine qemu-94-instance-000000ca.
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.912 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9f07b4e2-143c-4dcc-9dbd-2e79c286ca09]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.951 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[faca5c18-5261-49db-9b38-1a0850a90b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:15.962 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1ce307-0894-4714-b39a-7ad5606298d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:15 compute-1 NetworkManager[44960]: <info>  [1759410615.9639] manager: (tap54a08602-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/382)
Oct 02 13:10:15 compute-1 systemd-udevd[306671]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.007 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[217aa86f-1e9b-499d-bf5b-53b66903b658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.011 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7019cfc1-7f9d-4a45-9171-a1ea1a174f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 NetworkManager[44960]: <info>  [1759410616.0416] device (tap54a08602-f0): carrier: link connected
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.053 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7956671f-da9b-422c-9f75-d00a1aa19ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.081 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0b3d71-9f5d-4d65-871f-46adcfa39c06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847959, 'reachable_time': 39519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306704, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.103 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b22375a8-a14a-49aa-ac58-54426b9477bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:cf34'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 847959, 'tstamp': 847959}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306705, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.126 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[999a5f7a-556b-4da0-a188-75bc1f37b8c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54a08602-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:cf:34'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 248], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847959, 'reachable_time': 39519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306706, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.168 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[93700384-757f-4616-8692-816bb2841b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.252 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c0c2fd-c4ef-4efe-8e1d-7dfbe4d73020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.254 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.254 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.255 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54a08602-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:16 compute-1 nova_compute[230518]: 2025-10-02 13:10:16.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-1 NetworkManager[44960]: <info>  [1759410616.2587] manager: (tap54a08602-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct 02 13:10:16 compute-1 kernel: tap54a08602-f0: entered promiscuous mode
Oct 02 13:10:16 compute-1 nova_compute[230518]: 2025-10-02 13:10:16.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.268 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54a08602-f0, col_values=(('external_ids', {'iface-id': '0e7b3164-07ea-4170-8a8f-05633e14550f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:16 compute-1 nova_compute[230518]: 2025-10-02 13:10:16.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-1 ovn_controller[129257]: 2025-10-02T13:10:16Z|00823|binding|INFO|Releasing lport 0e7b3164-07ea-4170-8a8f-05633e14550f from this chassis (sb_readonly=0)
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.273 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.275 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[beaf4a4b-6db3-4f3f-8019-aaf022bac041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.276 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/54a08602-f5b6-41e1-816c-2c122542a2b7.pid.haproxy
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 54a08602-f5b6-41e1-816c-2c122542a2b7
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:10:16 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:16.277 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'env', 'PROCESS_TAG=haproxy-54a08602-f5b6-41e1-816c-2c122542a2b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54a08602-f5b6-41e1-816c-2c122542a2b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:10:16 compute-1 nova_compute[230518]: 2025-10-02 13:10:16.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:16 compute-1 podman[306780]: 2025-10-02 13:10:16.716304575 +0000 UTC m=+0.066323183 container create 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:10:16 compute-1 podman[306780]: 2025-10-02 13:10:16.67665222 +0000 UTC m=+0.026670848 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:10:16 compute-1 systemd[1]: Started libpod-conmon-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope.
Oct 02 13:10:16 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:10:16 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5b4910d155fef2dcdcf0c757ef1bff4d71c46ecff04442d7d49e1a585de238/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:10:16 compute-1 podman[306780]: 2025-10-02 13:10:16.851522661 +0000 UTC m=+0.201541279 container init 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:16 compute-1 podman[306780]: 2025-10-02 13:10:16.86134716 +0000 UTC m=+0.211365788 container start 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 02 13:10:16 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : New worker (306801) forked
Oct 02 13:10:16 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : Loading success.
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.055 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410617.0543778, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.055 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Started (Lifecycle Event)
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.079 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.083 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410617.05478, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.084 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Paused (Lifecycle Event)
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.103 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.109 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:10:17 compute-1 nova_compute[230518]: 2025-10-02 13:10:17.128 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:10:17 compute-1 ceph-mon[80926]: pgmap v2931: 305 pgs: 305 active+clean; 715 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 6.1 MiB/s wr, 222 op/s
Oct 02 13:10:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.475 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.475 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.476 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.476 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Processing event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.477 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG oslo_concurrency.lockutils [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.478 2 DEBUG nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.479 2 WARNING nova.compute.manager [req-1a2e83cb-a3af-43b6-8aea-2522c919b32f req-dc515e62-4070-41f0-906a-cdd8918f2a7e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received unexpected event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with vm_state building and task_state spawning.
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.480 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.485 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.487 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410618.4853578, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.487 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Resumed (Lifecycle Event)
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.492 2 INFO nova.virt.libvirt.driver [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance spawned successfully.
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.493 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.521 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.527 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.532 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.533 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.533 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.534 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.534 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.535 2 DEBUG nova.virt.libvirt.driver [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.558 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.604 2 INFO nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 8.95 seconds to spawn the instance on the hypervisor.
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.604 2 DEBUG nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.678 2 INFO nova.compute.manager [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 10.02 seconds to build instance.
Oct 02 13:10:18 compute-1 nova_compute[230518]: 2025-10-02 13:10:18.700 2 DEBUG oslo_concurrency.lockutils [None req-4cb5dcb0-ab16-406e-930c-c5be87cba961 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:19 compute-1 nova_compute[230518]: 2025-10-02 13:10:19.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:19 compute-1 ceph-mon[80926]: pgmap v2932: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 6.1 MiB/s wr, 163 op/s
Oct 02 13:10:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:10:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:10:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:20.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:20 compute-1 nova_compute[230518]: 2025-10-02 13:10:20.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:21 compute-1 ceph-mon[80926]: pgmap v2933: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 198 op/s
Oct 02 13:10:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1814106570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 e366: 3 total, 3 up, 3 in
Oct 02 13:10:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:22.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:22 compute-1 ceph-mon[80926]: osdmap e366: 3 total, 3 up, 3 in
Oct 02 13:10:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/153590829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:23.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:24 compute-1 ceph-mon[80926]: pgmap v2935: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1563581413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.079 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.080 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.619 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.819 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.821 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.828 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.828 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.829 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.832 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:24 compute-1 nova_compute[230518]: 2025-10-02 13:10:24.832 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.035 2 DEBUG nova.compute.manager [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.035 2 DEBUG nova.compute.manager [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.036 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3536817018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.084 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3666MB free_disk=20.739215850830078GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance de995ad8-07bb-4097-899b-5c79d62a1f4c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.247 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 28cee0c6-0008-45f8-af11-48abbbbcb22c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.248 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.343 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2201128665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.814 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.824 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.847 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:25 compute-1 podman[306853]: 2025-10-02 13:10:25.852774412 +0000 UTC m=+0.095156349 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:10:25 compute-1 podman[306854]: 2025-10-02 13:10:25.861086722 +0000 UTC m=+0.091033779 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.877 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:10:25 compute-1 nova_compute[230518]: 2025-10-02 13:10:25.879 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.965 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.966 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:25.967 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:25.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:26 compute-1 ceph-mon[80926]: pgmap v2936: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.3 MiB/s wr, 173 op/s
Oct 02 13:10:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2201128665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:27 compute-1 nova_compute[230518]: 2025-10-02 13:10:27.333 2 DEBUG nova.compute.manager [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:27 compute-1 nova_compute[230518]: 2025-10-02 13:10:27.334 2 DEBUG nova.compute.manager [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing instance network info cache due to event network-changed-faebc160-66b6-4ba2-ab02-2b0098eef804. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:27 compute-1 nova_compute[230518]: 2025-10-02 13:10:27.334 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:27.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:28 compute-1 ceph-mon[80926]: pgmap v2937: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 97 KiB/s wr, 136 op/s
Oct 02 13:10:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/293687028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.107 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.108 2 DEBUG nova.network.neutron [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.142 2 DEBUG oslo_concurrency.lockutils [req-8f0a9117-9d5d-4e50-99fc-e17eb734ba82 req-c6c7bd9c-b965-417f-991d-0e2eb01634d1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.143 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.143 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Refreshing network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1239067814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2272454097' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.396256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629396316, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 886, "num_deletes": 251, "total_data_size": 1611903, "memory_usage": 1637544, "flush_reason": "Manual Compaction"}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629488642, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1052493, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70393, "largest_seqno": 71274, "table_properties": {"data_size": 1048384, "index_size": 1760, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9722, "raw_average_key_size": 19, "raw_value_size": 1039964, "raw_average_value_size": 2135, "num_data_blocks": 77, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410575, "oldest_key_time": 1759410575, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 92434 microseconds, and 3347 cpu microseconds.
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.488689) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1052493 bytes OK
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.488710) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.534992) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535040) EVENT_LOG_v1 {"time_micros": 1759410629535030, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1607322, prev total WAL file size 1607322, number of live WAL files 2.
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535947) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1027KB)], [144(10MB)]
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629536012, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 12165146, "oldest_snapshot_seqno": -1}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:29 compute-1 nova_compute[230518]: 2025-10-02 13:10:29.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 9048 keys, 10295783 bytes, temperature: kUnknown
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629631540, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 10295783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10239298, "index_size": 32756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239986, "raw_average_key_size": 26, "raw_value_size": 10082534, "raw_average_value_size": 1114, "num_data_blocks": 1237, "num_entries": 9048, "num_filter_entries": 9048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410629, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.631759) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10295783 bytes
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.636383) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.3 rd, 107.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(21.3) write-amplify(9.8) OK, records in: 9568, records dropped: 520 output_compression: NoCompression
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.636426) EVENT_LOG_v1 {"time_micros": 1759410629636411, "job": 92, "event": "compaction_finished", "compaction_time_micros": 95595, "compaction_time_cpu_micros": 23989, "output_level": 6, "num_output_files": 1, "total_output_size": 10295783, "num_input_records": 9568, "num_output_records": 9048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629636828, "job": 92, "event": "table_file_deletion", "file_number": 146}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410629639093, "job": 92, "event": "table_file_deletion", "file_number": 144}
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.535807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:10:29.639322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:10:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:29.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:30 compute-1 ceph-mon[80926]: pgmap v2938: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 79 KiB/s wr, 156 op/s
Oct 02 13:10:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1514685641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.875 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:30 compute-1 nova_compute[230518]: 2025-10-02 13:10:30.876 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:10:31 compute-1 nova_compute[230518]: 2025-10-02 13:10:31.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:31 compute-1 nova_compute[230518]: 2025-10-02 13:10:31.486 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updated VIF entry in instance network info cache for port faebc160-66b6-4ba2-ab02-2b0098eef804. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:31 compute-1 nova_compute[230518]: 2025-10-02 13:10:31.487 2 DEBUG nova.network.neutron [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [{"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:31 compute-1 nova_compute[230518]: 2025-10-02 13:10:31.532 2 DEBUG oslo_concurrency.lockutils [req-b27eb3c0-1a8a-4687-83d1-24cd8b9e01c6 req-2909b793-42d9-4e95-87b5-11fea90b9752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-28cee0c6-0008-45f8-af11-48abbbbcb22c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:31.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:32 compute-1 ceph-mon[80926]: pgmap v2939: 305 pgs: 305 active+clean; 722 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 54 KiB/s wr, 154 op/s
Oct 02 13:10:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:32.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 e367: 3 total, 3 up, 3 in
Oct 02 13:10:33 compute-1 ceph-mon[80926]: pgmap v2940: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 683 KiB/s wr, 151 op/s
Oct 02 13:10:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:10:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:33.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:34.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:34 compute-1 ovn_controller[129257]: 2025-10-02T13:10:34Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:46:ff 10.100.0.14
Oct 02 13:10:34 compute-1 ovn_controller[129257]: 2025-10-02T13:10:34Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:46:ff 10.100.0.14
Oct 02 13:10:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:34 compute-1 ceph-mon[80926]: osdmap e367: 3 total, 3 up, 3 in
Oct 02 13:10:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3208984774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:34 compute-1 nova_compute[230518]: 2025-10-02 13:10:34.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:35 compute-1 nova_compute[230518]: 2025-10-02 13:10:35.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:35 compute-1 ceph-mon[80926]: pgmap v2942: 305 pgs: 305 active+clean; 725 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 713 KiB/s wr, 130 op/s
Oct 02 13:10:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1219523401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:35 compute-1 podman[306900]: 2025-10-02 13:10:35.803100032 +0000 UTC m=+0.057435294 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:35 compute-1 podman[306901]: 2025-10-02 13:10:35.807498981 +0000 UTC m=+0.059053195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:35.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:36.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:37 compute-1 ceph-mon[80926]: pgmap v2943: 305 pgs: 305 active+clean; 743 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Oct 02 13:10:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:38.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1461054822' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.127 2 DEBUG nova.compute.manager [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.129 2 DEBUG nova.compute.manager [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.129 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.130 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.130 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:39 compute-1 nova_compute[230518]: 2025-10-02 13:10:39.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:39 compute-1 ceph-mon[80926]: pgmap v2944: 305 pgs: 305 active+clean; 752 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Oct 02 13:10:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1953460216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.091 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.093 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.094 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.095 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.095 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.098 2 INFO nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Terminating instance
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.100 2 DEBUG nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:10:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:10:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:10:40 compute-1 kernel: tapfaebc160-66 (unregistering): left promiscuous mode
Oct 02 13:10:40 compute-1 NetworkManager[44960]: <info>  [1759410640.2125] device (tapfaebc160-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:10:40 compute-1 ovn_controller[129257]: 2025-10-02T13:10:40Z|00824|binding|INFO|Releasing lport faebc160-66b6-4ba2-ab02-2b0098eef804 from this chassis (sb_readonly=0)
Oct 02 13:10:40 compute-1 ovn_controller[129257]: 2025-10-02T13:10:40Z|00825|binding|INFO|Setting lport faebc160-66b6-4ba2-ab02-2b0098eef804 down in Southbound
Oct 02 13:10:40 compute-1 ovn_controller[129257]: 2025-10-02T13:10:40Z|00826|binding|INFO|Removing iface tapfaebc160-66 ovn-installed in OVS
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.272 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:46:ff 10.100.0.14', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '28cee0c6-0008-45f8-af11-48abbbbcb22c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54a08602-f5b6-41e1-816c-2c122542a2b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1398b7fe-9cb8-4053-9d9c-0523007b5e96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=faebc160-66b6-4ba2-ab02-2b0098eef804) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.273 138374 INFO neutron.agent.ovn.metadata.agent [-] Port faebc160-66b6-4ba2-ab02-2b0098eef804 in datapath 54a08602-f5b6-41e1-816c-2c122542a2b7 unbound from our chassis
Oct 02 13:10:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.275 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54a08602-f5b6-41e1-816c-2c122542a2b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.277 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26aa3fc2-1912-4e01-9ff7-842b72c7ed67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:40 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:40.278 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 namespace which is not needed anymore
Oct 02 13:10:40 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Oct 02 13:10:40 compute-1 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ca.scope: Consumed 15.170s CPU time.
Oct 02 13:10:40 compute-1 systemd-machined[188247]: Machine qemu-94-instance-000000ca terminated.
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.364 2 INFO nova.virt.libvirt.driver [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Instance destroyed successfully.
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.366 2 DEBUG nova.objects.instance [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 28cee0c6-0008-45f8-af11-48abbbbcb22c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.388 2 DEBUG nova.virt.libvirt.vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-gen-1-16464808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ge',id=202,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH1HinzjUD7Q3tXKDfndQMl/GmvwtyywSMW01vuBjE0ArFGpxG7DyhVBxJNWD31t8BOgD0+NBlvzrAymSVFz2iPnx4lrKVlC4HjLQHFgeB7PDLQzvsLeeffGrKOfE8BLQ==',key_name='tempest-TestSecurityGroupsBasicOps-874748035',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:10:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-x8b2athb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:18Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=28cee0c6-0008-45f8-af11-48abbbbcb22c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.389 2 DEBUG nova.network.os_vif_util [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "faebc160-66b6-4ba2-ab02-2b0098eef804", "address": "fa:16:3e:4f:46:ff", "network": {"id": "54a08602-f5b6-41e1-816c-2c122542a2b7", "bridge": "br-int", "label": "tempest-network-smoke--1208477536", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaebc160-66", "ovs_interfaceid": "faebc160-66b6-4ba2-ab02-2b0098eef804", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.390 2 DEBUG nova.network.os_vif_util [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.391 2 DEBUG os_vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.395 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaebc160-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.408 2 INFO os_vif [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:46:ff,bridge_name='br-int',has_traffic_filtering=True,id=faebc160-66b6-4ba2-ab02-2b0098eef804,network=Network(54a08602-f5b6-41e1-816c-2c122542a2b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaebc160-66')
Oct 02 13:10:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 e368: 3 total, 3 up, 3 in
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.496 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.496 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG oslo_concurrency.lockutils [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.497 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.498 2 DEBUG nova.compute.manager [req-407b5d29-4ef2-4c5e-b2ab-91ae4d80e2b4 req-fde597ff-b7ff-4811-9a10-5f71aa09c2d3 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-unplugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:10:40 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : haproxy version is 2.8.14-c23fe91
Oct 02 13:10:40 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [NOTICE]   (306799) : path to executable is /usr/sbin/haproxy
Oct 02 13:10:40 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [ALERT]    (306799) : Current worker (306801) exited with code 143 (Terminated)
Oct 02 13:10:40 compute-1 neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7[306795]: [WARNING]  (306799) : All workers exited. Exiting... (0)
Oct 02 13:10:40 compute-1 systemd[1]: libpod-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope: Deactivated successfully.
Oct 02 13:10:40 compute-1 podman[306972]: 2025-10-02 13:10:40.543522808 +0000 UTC m=+0.111479961 container died 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.585 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.586 2 DEBUG nova.network.neutron [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:40 compute-1 nova_compute[230518]: 2025-10-02 13:10:40.612 2 DEBUG oslo_concurrency.lockutils [req-74abd09d-bb1a-4e47-8fed-56a3a0f06cf1 req-dd87abc5-b197-417b-95ef-aeca9287ff06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:40 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410-userdata-shm.mount: Deactivated successfully.
Oct 02 13:10:40 compute-1 systemd[1]: var-lib-containers-storage-overlay-8e5b4910d155fef2dcdcf0c757ef1bff4d71c46ecff04442d7d49e1a585de238-merged.mount: Deactivated successfully.
Oct 02 13:10:41 compute-1 podman[306972]: 2025-10-02 13:10:41.022070016 +0000 UTC m=+0.590027139 container cleanup 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:10:41 compute-1 systemd[1]: libpod-conmon-13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410.scope: Deactivated successfully.
Oct 02 13:10:41 compute-1 nova_compute[230518]: 2025-10-02 13:10:41.082 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:10:41 compute-1 podman[307021]: 2025-10-02 13:10:41.151312913 +0000 UTC m=+0.095457558 container remove 13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.165 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[945224bc-f37e-4db3-aa38-8bee7f326e3f]: (4, ('Thu Oct  2 01:10:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410)\n13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410\nThu Oct  2 01:10:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 (13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410)\n13e6635d489e94db27241bfca0f2f23ed8f740be7afbb9d13dceff8847c34410\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.169 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc974a55-e174-4258-8024-0cdd076d3824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.170 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54a08602-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:41 compute-1 nova_compute[230518]: 2025-10-02 13:10:41.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:41 compute-1 kernel: tap54a08602-f0: left promiscuous mode
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.179 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6d778515-1b0e-4802-a240-57550252120b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 nova_compute[230518]: 2025-10-02 13:10:41.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.211 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4066344c-9c3f-433a-9122-7e15bbba0f85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.213 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[53eb02d5-e615-44d8-b36b-dbdbc6490dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.231 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[99b7fb87-2042-452c-be30-b9443f4a4e20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 847950, 'reachable_time': 40637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307037, 'error': None, 'target': 'ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 systemd[1]: run-netns-ovnmeta\x2d54a08602\x2df5b6\x2d41e1\x2d816c\x2d2c122542a2b7.mount: Deactivated successfully.
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.236 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54a08602-f5b6-41e1-816c-2c122542a2b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:10:41 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:41.236 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[ba17e448-94be-4c3c-8912-1548599711f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:41 compute-1 ceph-mon[80926]: pgmap v2945: 305 pgs: 305 active+clean; 755 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1018 KiB/s rd, 2.6 MiB/s wr, 146 op/s
Oct 02 13:10:41 compute-1 ceph-mon[80926]: osdmap e368: 3 total, 3 up, 3 in
Oct 02 13:10:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:41.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.576 2 DEBUG nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.577 2 DEBUG oslo_concurrency.lockutils [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.578 2 DEBUG nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] No waiting events found dispatching network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:42 compute-1 nova_compute[230518]: 2025-10-02 13:10:42.578 2 WARNING nova.compute.manager [req-5c4251bb-4293-4847-b3ca-c0406fe8008e req-b8c25db7-0a85-440b-bffe-b7fab653da06 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received unexpected event network-vif-plugged-faebc160-66b6-4ba2-ab02-2b0098eef804 for instance with vm_state active and task_state deleting.
Oct 02 13:10:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/613403924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:10:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:44.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:44 compute-1 ceph-mon[80926]: pgmap v2947: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 160 op/s
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.133 2 INFO nova.virt.libvirt.driver [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deleting instance files /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c_del
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.134 2 INFO nova.virt.libvirt.driver [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deletion of /var/lib/nova/instances/28cee0c6-0008-45f8-af11-48abbbbcb22c_del complete
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.188 2 INFO nova.compute.manager [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 4.09 seconds to destroy the instance on the hypervisor.
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG oslo.service.loopingcall [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.189 2 DEBUG nova.network.neutron [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:10:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:44.752 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:44 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:44.754 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.777 2 DEBUG nova.network.neutron [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.796 2 INFO nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Took 0.61 seconds to deallocate network for instance.
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.852 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.853 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.893 2 DEBUG nova.compute.manager [req-91d0db88-f5b5-4ef2-8614-a1c3dae66d0a req-2768e154-3325-4792-b6dd-9da6db284ca0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Received event network-vif-deleted-faebc160-66b6-4ba2-ab02-2b0098eef804 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:44 compute-1 nova_compute[230518]: 2025-10-02 13:10:44.935 2 DEBUG oslo_concurrency.processutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:10:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1341714694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.422 2 DEBUG oslo_concurrency.processutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.430 2 DEBUG nova.compute.provider_tree [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.446 2 DEBUG nova.scheduler.client.report [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.474 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.522 2 INFO nova.scheduler.client.report [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 28cee0c6-0008-45f8-af11-48abbbbcb22c
Oct 02 13:10:45 compute-1 ceph-mon[80926]: pgmap v2948: 305 pgs: 305 active+clean; 726 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 146 op/s
Oct 02 13:10:45 compute-1 nova_compute[230518]: 2025-10-02 13:10:45.628 2 DEBUG oslo_concurrency.lockutils [None req-06f7a2eb-5da2-4d01-b541-7bbc0982ea9c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "28cee0c6-0008-45f8-af11-48abbbbcb22c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1341714694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:47 compute-1 ceph-mon[80926]: pgmap v2949: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 908 KiB/s rd, 395 KiB/s wr, 139 op/s
Oct 02 13:10:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:49 compute-1 sudo[307061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:49 compute-1 sudo[307061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-1 sudo[307061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-1 sudo[307086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:10:49 compute-1 sudo[307086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-1 sudo[307086]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-1 sudo[307111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:49 compute-1 sudo[307111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-1 sudo[307111]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:49 compute-1 sudo[307136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:10:49 compute-1 sudo[307136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:49 compute-1 nova_compute[230518]: 2025-10-02 13:10:49.713 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:49 compute-1 nova_compute[230518]: 2025-10-02 13:10:49.714 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:49 compute-1 nova_compute[230518]: 2025-10-02 13:10:49.714 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:10:49 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:49.756 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:50 compute-1 ceph-mon[80926]: pgmap v2950: 305 pgs: 305 active+clean; 678 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 140 KiB/s wr, 83 op/s
Oct 02 13:10:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2847886454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:50 compute-1 sudo[307136]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.790 2 DEBUG nova.network.neutron [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.818 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.925 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.926 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating file /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp on remote host 192.168.122.100 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Oct 02 13:10:50 compute-1 nova_compute[230518]: 2025-10-02 13:10:50.927 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:10:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1324039356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.379 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp" returned: 1 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.381 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] 'ssh -o BatchMode=yes 192.168.122.100 touch /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c/3985543ce32c4737bb5d5a8c7849ea66.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.382 2 DEBUG nova.virt.libvirt.volume.remotefs [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Creating directory /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c on remote host 192.168.122.100 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.383 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.634 2 DEBUG oslo_concurrency.processutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.100 mkdir -p /var/lib/nova/instances/de995ad8-07bb-4097-899b-5c79d62a1f4c" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:10:51 compute-1 nova_compute[230518]: 2025-10-02 13:10:51.642 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct 02 13:10:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:52 compute-1 ceph-mon[80926]: pgmap v2951: 305 pgs: 305 active+clean; 643 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 57 KiB/s wr, 124 op/s
Oct 02 13:10:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:53 compute-1 ceph-mon[80926]: pgmap v2952: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 62 KiB/s wr, 152 op/s
Oct 02 13:10:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:10:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:10:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:54 compute-1 kernel: tap513c3d66-61 (unregistering): left promiscuous mode
Oct 02 13:10:54 compute-1 NetworkManager[44960]: <info>  [1759410654.2637] device (tap513c3d66-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:10:54 compute-1 ovn_controller[129257]: 2025-10-02T13:10:54Z|00827|binding|INFO|Releasing lport 513c3d66-613d-4626-8ab0-58520113de32 from this chassis (sb_readonly=0)
Oct 02 13:10:54 compute-1 ovn_controller[129257]: 2025-10-02T13:10:54Z|00828|binding|INFO|Setting lport 513c3d66-613d-4626-8ab0-58520113de32 down in Southbound
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 ovn_controller[129257]: 2025-10-02T13:10:54Z|00829|binding|INFO|Removing iface tap513c3d66-61 ovn-installed in OVS
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.285 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:bc:4e 10.100.0.4'], port_security=['fa:16:3e:9a:bc:4e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'de995ad8-07bb-4097-899b-5c79d62a1f4c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=513c3d66-613d-4626-8ab0-58520113de32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.286 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 513c3d66-613d-4626-8ab0-58520113de32 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.288 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9001b9c-bca6-4085-a954-1414269e31bc
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.305 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8ddeecc3-05c4-404e-ab79-98bc07c62bf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.343 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[276b5c33-7d9b-4f12-945d-df2123afbb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.345 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc1e8c0-cd2b-44cb-802f-22210ed46598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Oct 02 13:10:54 compute-1 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000c8.scope: Consumed 16.058s CPU time.
Oct 02 13:10:54 compute-1 systemd-machined[188247]: Machine qemu-93-instance-000000c8 terminated.
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.377 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0314a062-6536-445d-85e3-0529eaae0dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.398 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[31573ae3-e3d1-4b35-8520-54908d263a51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9001b9c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840857, 'reachable_time': 19345, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307206, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4c147a-cbb8-48bf-9046-fe06ff4873b7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840868, 'tstamp': 840868}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307207, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd9001b9c-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840870, 'tstamp': 840870}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307207, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.427 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.435 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9001b9c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.435 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.436 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9001b9c-b0, col_values=(('external_ids', {'iface-id': 'aa788301-8c47-4421-b693-3b37cb064ae2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:10:54.436 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.558 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.558 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG oslo_concurrency.lockutils [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.559 2 DEBUG nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.560 2 WARNING nova.compute.manager [req-db88738f-76b8-4d09-b9e6-e2bff0779f1e req-9d02cf48-8ce2-43d2-9418-5613ba8fcaf6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-unplugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrating.
Oct 02 13:10:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.662 2 INFO nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance shutdown successfully after 3 seconds.
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.668 2 INFO nova.virt.libvirt.driver [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Instance destroyed successfully.
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.669 2 DEBUG nova.virt.libvirt.vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:56Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_
input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:10:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.670 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "vif_mac": "fa:16:3e:9a:bc:4e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.671 2 DEBUG nova.network.os_vif_util [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.672 2 DEBUG os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513c3d66-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.685 2 INFO os_vif [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.948 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.948 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:54 compute-1 nova_compute[230518]: 2025-10-02 13:10:54.949 2 DEBUG nova.virt.libvirt.driver [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] skipping disk for instance-000000c8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.363 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410640.361377, 28cee0c6-0008-45f8-af11-48abbbbcb22c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.363 2 INFO nova.compute.manager [-] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] VM Stopped (Lifecycle Event)
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.392 2 DEBUG nova.compute.manager [None req-e633623c-6c61-4db5-99d8-6d93a2c30ef8 - - - - - -] [instance: 28cee0c6-0008-45f8-af11-48abbbbcb22c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:55 compute-1 ceph-mon[80926]: pgmap v2953: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 42 KiB/s wr, 128 op/s
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.831 2 DEBUG neutronclient.v2_0.client [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 513c3d66-613d-4626-8ab0-58520113de32 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.976 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.976 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:55 compute-1 nova_compute[230518]: 2025-10-02 13:10:55.977 2 DEBUG oslo_concurrency.lockutils [None req-2a9d6627-761a-4f94-bfb9-20c62b70607c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:10:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:10:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.711 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.712 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.713 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.714 2 DEBUG oslo_concurrency.lockutils [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.714 2 DEBUG nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:10:56 compute-1 nova_compute[230518]: 2025-10-02 13:10:56.714 2 WARNING nova.compute.manager [req-0a779f8e-08c6-49c3-8a67-d9e7c3538ef6 req-7fc280d9-c53b-4f3a-9721-538583307b39 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_migrated.
Oct 02 13:10:56 compute-1 podman[307221]: 2025-10-02 13:10:56.843161384 +0000 UTC m=+0.087005263 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:10:56 compute-1 podman[307220]: 2025-10-02 13:10:56.895533428 +0000 UTC m=+0.139659027 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:10:57 compute-1 sudo[307264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:10:57 compute-1 sudo[307264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:57 compute-1 sudo[307264]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:57 compute-1 sudo[307289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:10:57 compute-1 sudo[307289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:10:57 compute-1 sudo[307289]: pam_unix(sudo:session): session closed for user root
Oct 02 13:10:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:10:58.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:58 compute-1 ceph-mon[80926]: pgmap v2954: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 48 KiB/s wr, 130 op/s
Oct 02 13:10:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:58 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:10:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:10:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:10:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:10:58.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-changed-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.compute.manager [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing instance network info cache due to event network-changed-513c3d66-613d-4626-8ab0-58520113de32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.405 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Refreshing network info cache for port 513c3d66-613d-4626-8ab0-58520113de32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:10:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:10:59 compute-1 ovn_controller[129257]: 2025-10-02T13:10:59Z|00830|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:10:59 compute-1 nova_compute[230518]: 2025-10-02 13:10:59.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:00.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:00 compute-1 ceph-mon[80926]: pgmap v2955: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 101 op/s
Oct 02 13:11:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:00.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:00 compute-1 nova_compute[230518]: 2025-10-02 13:11:00.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:01 compute-1 ceph-mon[80926]: pgmap v2956: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 110 op/s
Oct 02 13:11:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:02.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:02 compute-1 nova_compute[230518]: 2025-10-02 13:11:02.300 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updated VIF entry in instance network info cache for port 513c3d66-613d-4626-8ab0-58520113de32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:11:02 compute-1 nova_compute[230518]: 2025-10-02 13:11:02.302 2 DEBUG nova.network.neutron [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:02 compute-1 nova_compute[230518]: 2025-10-02 13:11:02.329 2 DEBUG oslo_concurrency.lockutils [req-b5a768ca-45aa-43a0-a667-6134e0bb5594 req-5f404d9c-6f55-4e8b-8e06-6e0c1935b266 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043214721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 e369: 3 total, 3 up, 3 in
Oct 02 13:11:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:04.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:04 compute-1 ceph-mon[80926]: pgmap v2957: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 41 KiB/s wr, 94 op/s
Oct 02 13:11:04 compute-1 nova_compute[230518]: 2025-10-02 13:11:04.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1043214721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:04 compute-1 nova_compute[230518]: 2025-10-02 13:11:04.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:11:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:11:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:05 compute-1 nova_compute[230518]: 2025-10-02 13:11:05.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:05 compute-1 ceph-mon[80926]: osdmap e369: 3 total, 3 up, 3 in
Oct 02 13:11:05 compute-1 ceph-mon[80926]: pgmap v2959: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 643 KiB/s rd, 33 KiB/s wr, 56 op/s
Oct 02 13:11:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1601308669' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:06.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:06 compute-1 podman[307314]: 2025-10-02 13:11:06.867905631 +0000 UTC m=+0.098165074 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:11:06 compute-1 podman[307315]: 2025-10-02 13:11:06.887260078 +0000 UTC m=+0.116463498 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Oct 02 13:11:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1300418728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/767376829' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:08 compute-1 ceph-mon[80926]: pgmap v2960: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 37 KiB/s wr, 62 op/s
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.463 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.464 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.464 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.465 2 DEBUG oslo_concurrency.lockutils [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.465 2 DEBUG nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:08 compute-1 nova_compute[230518]: 2025-10-02 13:11:08.465 2 WARNING nova.compute.manager [req-23030715-0ff4-48a2-a697-bfa5e0088ae5 req-41c2e2ff-3ec2-4fb2-8eed-e008d6a95b4b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state active and task_state resize_finish.
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.501 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410654.500055, de995ad8-07bb-4097-899b-5c79d62a1f4c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.501 2 INFO nova.compute.manager [-] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] VM Stopped (Lifecycle Event)
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.542 2 DEBUG nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.550 2 DEBUG nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:11:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.598 2 INFO nova.compute.manager [None req-f49fe3bb-cf40-4167-b18e-f96ae61c2ee3 - - - - - -] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-1.ctlplane.example.com
Oct 02 13:11:09 compute-1 nova_compute[230518]: 2025-10-02 13:11:09.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:10 compute-1 ceph-mon[80926]: pgmap v2961: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 630 KiB/s rd, 35 KiB/s wr, 60 op/s
Oct 02 13:11:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:10.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.636 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.637 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.637 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.638 2 DEBUG oslo_concurrency.lockutils [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.638 2 DEBUG nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] No waiting events found dispatching network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:10 compute-1 nova_compute[230518]: 2025-10-02 13:11:10.638 2 WARNING nova.compute.manager [req-eec82824-051b-42a0-9391-3ae2948f0c7f req-e0748017-2616-4c3c-b109-efcf44941e4f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Received unexpected event network-vif-plugged-513c3d66-613d-4626-8ab0-58520113de32 for instance with vm_state resized and task_state None.
Oct 02 13:11:11 compute-1 ceph-mon[80926]: pgmap v2962: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 967 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct 02 13:11:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:12.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:12 compute-1 nova_compute[230518]: 2025-10-02 13:11:12.834 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:12 compute-1 nova_compute[230518]: 2025-10-02 13:11:12.834 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:12 compute-1 nova_compute[230518]: 2025-10-02 13:11:12.835 2 DEBUG nova.compute.manager [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Going to confirm migration 24 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Oct 02 13:11:13 compute-1 nova_compute[230518]: 2025-10-02 13:11:13.524 2 DEBUG neutronclient.v2_0.client [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 513c3d66-613d-4626-8ab0-58520113de32 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Oct 02 13:11:13 compute-1 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:13 compute-1 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquired lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:13 compute-1 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG nova.network.neutron [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:11:13 compute-1 nova_compute[230518]: 2025-10-02 13:11:13.525 2 DEBUG nova.objects.instance [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'info_cache' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:14 compute-1 ceph-mon[80926]: pgmap v2963: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 97 op/s
Oct 02 13:11:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:14.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:14 compute-1 nova_compute[230518]: 2025-10-02 13:11:14.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:15 compute-1 nova_compute[230518]: 2025-10-02 13:11:15.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:15 compute-1 ceph-mon[80926]: pgmap v2964: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 95 op/s
Oct 02 13:11:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:16 compute-1 nova_compute[230518]: 2025-10-02 13:11:16.311 2 DEBUG nova.network.neutron [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: de995ad8-07bb-4097-899b-5c79d62a1f4c] Updating instance_info_cache with network_info: [{"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:16 compute-1 nova_compute[230518]: 2025-10-02 13:11:16.440 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Releasing lock "refresh_cache-de995ad8-07bb-4097-899b-5c79d62a1f4c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:16 compute-1 nova_compute[230518]: 2025-10-02 13:11:16.441 2 DEBUG nova.objects.instance [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'migration_context' on Instance uuid de995ad8-07bb-4097-899b-5c79d62a1f4c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:16 compute-1 nova_compute[230518]: 2025-10-02 13:11:16.641 2 DEBUG nova.storage.rbd_utils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] removing snapshot(nova-resize) on rbd image(de995ad8-07bb-4097-899b-5c79d62a1f4c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct 02 13:11:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:18 compute-1 ceph-mon[80926]: pgmap v2965: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct 02 13:11:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:18.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 e370: 3 total, 3 up, 3 in
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.040 2 DEBUG nova.virt.libvirt.vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:09:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=200,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:09Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-5qy16z2b',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=de995ad8-07bb-4097-899b-5c79d62a1f4c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.041 2 DEBUG nova.network.os_vif_util [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "513c3d66-613d-4626-8ab0-58520113de32", "address": "fa:16:3e:9a:bc:4e", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap513c3d66-61", "ovs_interfaceid": "513c3d66-613d-4626-8ab0-58520113de32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.043 2 DEBUG nova.network.os_vif_util [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.046 2 DEBUG os_vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.049 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap513c3d66-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.050 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.053 2 INFO os_vif [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:bc:4e,bridge_name='br-int',has_traffic_filtering=True,id=513c3d66-613d-4626-8ab0-58520113de32,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap513c3d66-61')
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.054 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.055 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.222 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.229 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.229 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.263 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.264 2 DEBUG nova.compute.provider_tree [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.274 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.293 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.331 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.426 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.508 2 DEBUG oslo_concurrency.processutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:19 compute-1 nova_compute[230518]: 2025-10-02 13:11:19.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1781545665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:20 compute-1 nova_compute[230518]: 2025-10-02 13:11:20.035 2 DEBUG oslo_concurrency.processutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:20 compute-1 nova_compute[230518]: 2025-10-02 13:11:20.041 2 DEBUG nova.compute.provider_tree [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:20.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:20 compute-1 nova_compute[230518]: 2025-10-02 13:11:20.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:20 compute-1 ceph-mon[80926]: pgmap v2966: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 85 op/s
Oct 02 13:11:20 compute-1 ceph-mon[80926]: osdmap e370: 3 total, 3 up, 3 in
Oct 02 13:11:21 compute-1 nova_compute[230518]: 2025-10-02 13:11:21.709 2 DEBUG nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1781545665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:21 compute-1 ceph-mon[80926]: pgmap v2968: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 8.6 KiB/s wr, 81 op/s
Oct 02 13:11:21 compute-1 nova_compute[230518]: 2025-10-02 13:11:21.935 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:21 compute-1 nova_compute[230518]: 2025-10-02 13:11:21.939 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:21 compute-1 nova_compute[230518]: 2025-10-02 13:11:21.969 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:11:21 compute-1 nova_compute[230518]: 2025-10-02 13:11:21.969 2 INFO nova.compute.claims [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:11:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.141 2 INFO nova.scheduler.client.report [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocation for migration 3176e491-7bce-457b-bb5e-0b1a709da887
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.219 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.254 2 DEBUG oslo_concurrency.lockutils [None req-87584829-1957-401b-843e-c19647839d9d 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "de995ad8-07bb-4097-899b-5c79d62a1f4c" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:22.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2051422908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.681 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.690 2 DEBUG nova.compute.provider_tree [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.711 2 DEBUG nova.scheduler.client.report [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.749 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.749 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.851 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.851 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.892 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:11:22 compute-1 nova_compute[230518]: 2025-10-02 13:11:22.920 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.063 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.065 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.065 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating image(s)
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.178 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.230 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.281 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.288 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2051422908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.364 2 DEBUG nova.policy [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '362b536431b64b15b67740060af57e9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e911de934ec043d1bd942c8aed562d04', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.402 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.403 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.404 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.404 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.439 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:23 compute-1 nova_compute[230518]: 2025-10-02 13:11:23.446 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:24.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.084 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:24.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:24 compute-1 ceph-mon[80926]: pgmap v2969: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2097975945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.699 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.815 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:11:24 compute-1 nova_compute[230518]: 2025-10-02 13:11:24.816 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.091 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.093 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3951MB free_disk=20.805503845214844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 5ecce258-097c-4a5a-9c44-087e8129ceaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.231 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.232 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.236 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.789s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.293 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Successfully created port: 9fd04bf3-73c2-4224-81ff-32ef5640604b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.365 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] resizing rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:25 compute-1 nova_compute[230518]: 2025-10-02 13:11:25.733 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:25 compute-1 ceph-mon[80926]: pgmap v2970: 305 pgs: 305 active+clean; 600 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 841 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:11:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2097975945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.967 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:26.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:26.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 e371: 3 total, 3 up, 3 in
Oct 02 13:11:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:11:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2413210923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:26 compute-1 nova_compute[230518]: 2025-10-02 13:11:26.559 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.826s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:26 compute-1 nova_compute[230518]: 2025-10-02 13:11:26.565 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:11:26 compute-1 nova_compute[230518]: 2025-10-02 13:11:26.579 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Successfully updated port: 9fd04bf3-73c2-4224-81ff-32ef5640604b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:11:27 compute-1 ceph-mon[80926]: osdmap e371: 3 total, 3 up, 3 in
Oct 02 13:11:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2413210923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:27 compute-1 nova_compute[230518]: 2025-10-02 13:11:27.634 2 DEBUG nova.objects.instance [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:27 compute-1 podman[307646]: 2025-10-02 13:11:27.833223253 +0000 UTC m=+0.081232802 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:11:27 compute-1 podman[307645]: 2025-10-02 13:11:27.895751616 +0000 UTC m=+0.136018051 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:11:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:28.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:28.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:28 compute-1 ceph-mon[80926]: pgmap v2972: 305 pgs: 305 active+clean; 632 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 123 op/s
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.515 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.695 2 DEBUG nova.compute.manager [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.696 2 DEBUG nova.compute.manager [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.697 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.771 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.776 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.776 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Ensure instance console log exists: /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.777 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.777 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.778 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.803 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.803 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:28 compute-1 nova_compute[230518]: 2025-10-02 13:11:28.804 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.150 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:11:29 compute-1 ceph-mon[80926]: pgmap v2973: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 964 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Oct 02 13:11:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/351148488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.734 2 DEBUG nova.network.neutron [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.758 2 DEBUG oslo_concurrency.lockutils [req-b5da7863-db15-49e5-a0ef-2110a223644c req-f10d05dc-8a7e-4cc7-a8c4-ad897dee30d0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.759 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:29 compute-1 nova_compute[230518]: 2025-10-02 13:11:29.759 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:11:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:30.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:30 compute-1 nova_compute[230518]: 2025-10-02 13:11:30.375 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:11:30 compute-1 nova_compute[230518]: 2025-10-02 13:11:30.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/727687055' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2587128955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.684 2 DEBUG nova.network.neutron [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:31 compute-1 ceph-mon[80926]: pgmap v2974: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 839 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct 02 13:11:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4083973286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.906 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.906 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance network_info: |[{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.908 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start _get_guest_xml network_info=[{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.913 2 WARNING nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.918 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.918 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.922 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.923 2 DEBUG nova.virt.libvirt.host [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.924 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.925 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.926 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.927 2 DEBUG nova.virt.hardware [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:11:31 compute-1 nova_compute[230518]: 2025-10-02 13:11:31.930 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:32.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2188314948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-1 nova_compute[230518]: 2025-10-02 13:11:32.410 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:32 compute-1 nova_compute[230518]: 2025-10-02 13:11:32.440 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:32 compute-1 nova_compute[230518]: 2025-10-02 13:11:32.445 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1116925894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2188314948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:11:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2511029647' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:11:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:11:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1694941547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.131 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.133 2 DEBUG nova.virt.libvirt.vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:22Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.134 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.135 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.136 2 DEBUG nova.objects.instance [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.155 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <uuid>5ecce258-097c-4a5a-9c44-087e8129ceaf</uuid>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <name>instance-000000cb</name>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242</nova:name>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:11:31</nova:creationTime>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:user uuid="362b536431b64b15b67740060af57e9c">tempest-TestSecurityGroupsBasicOps-2067500093-project-member</nova:user>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:project uuid="e911de934ec043d1bd942c8aed562d04">tempest-TestSecurityGroupsBasicOps-2067500093</nova:project>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <nova:port uuid="9fd04bf3-73c2-4224-81ff-32ef5640604b">
Oct 02 13:11:33 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <system>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="serial">5ecce258-097c-4a5a-9c44-087e8129ceaf</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="uuid">5ecce258-097c-4a5a-9c44-087e8129ceaf</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </system>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <os>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </os>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <features>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </features>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/5ecce258-097c-4a5a-9c44-087e8129ceaf_disk">
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </source>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config">
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </source>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:11:33 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:ef:26:39"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <target dev="tap9fd04bf3-73"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/console.log" append="off"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <video>
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </video>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:11:33 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:11:33 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:11:33 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:11:33 compute-1 nova_compute[230518]: </domain>
Oct 02 13:11:33 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.156 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Preparing to wait for external event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.157 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.157 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.158 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.158 2 DEBUG nova.virt.libvirt.vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:11:22Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.159 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.159 2 DEBUG nova.network.os_vif_util [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.160 2 DEBUG os_vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.161 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.161 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fd04bf3-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.164 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fd04bf3-73, col_values=(('external_ids', {'iface-id': '9fd04bf3-73c2-4224-81ff-32ef5640604b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:26:39', 'vm-uuid': '5ecce258-097c-4a5a-9c44-087e8129ceaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:33 compute-1 NetworkManager[44960]: <info>  [1759410693.1666] manager: (tap9fd04bf3-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.172 2 INFO os_vif [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73')
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.272 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.273 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.273 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] No VIF found with MAC fa:16:3e:ef:26:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.274 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Using config drive
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.304 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.833 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.833 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.834 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:33 compute-1 nova_compute[230518]: 2025-10-02 13:11:33.835 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:11:33 compute-1 ceph-mon[80926]: pgmap v2975: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1694941547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:11:34 compute-1 nova_compute[230518]: 2025-10-02 13:11:34.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:34 compute-1 nova_compute[230518]: 2025-10-02 13:11:34.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:11:34 compute-1 nova_compute[230518]: 2025-10-02 13:11:34.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:11:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:34.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.276 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.334 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Creating config drive at /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.339 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpov1ldt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.454 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.455 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.456 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.456 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.481 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwpov1ldt" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.516 2 DEBUG nova.storage.rbd_utils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] rbd image 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.521 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.901 2 DEBUG oslo_concurrency.processutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config 5ecce258-097c-4a5a-9c44-087e8129ceaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.902 2 INFO nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deleting local config drive /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf/disk.config because it was imported into RBD.
Oct 02 13:11:35 compute-1 kernel: tap9fd04bf3-73: entered promiscuous mode
Oct 02 13:11:35 compute-1 ovn_controller[129257]: 2025-10-02T13:11:35Z|00831|binding|INFO|Claiming lport 9fd04bf3-73c2-4224-81ff-32ef5640604b for this chassis.
Oct 02 13:11:35 compute-1 ovn_controller[129257]: 2025-10-02T13:11:35Z|00832|binding|INFO|9fd04bf3-73c2-4224-81ff-32ef5640604b: Claiming fa:16:3e:ef:26:39 10.100.0.12
Oct 02 13:11:35 compute-1 NetworkManager[44960]: <info>  [1759410695.9712] manager: (tap9fd04bf3-73): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.980 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:26:39 10.100.0.12'], port_security=['fa:16:3e:ef:26:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ecce258-097c-4a5a-9c44-087e8129ceaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'caab64a4-2f87-4e39-a0ac-b96f95aae4c5 e9085353-0bf0-4de8-ac60-c4d68c9ff284', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9fd04bf3-73c2-4224-81ff-32ef5640604b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.981 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd04bf3-73c2-4224-81ff-32ef5640604b in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a bound to our chassis
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.982 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct 02 13:11:35 compute-1 ovn_controller[129257]: 2025-10-02T13:11:35Z|00833|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b ovn-installed in OVS
Oct 02 13:11:35 compute-1 ovn_controller[129257]: 2025-10-02T13:11:35Z|00834|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b up in Southbound
Oct 02 13:11:35 compute-1 nova_compute[230518]: 2025-10-02 13:11:35.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.995 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7b8f52-6b59-4504-b990-750e26f15467]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.996 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdac20349-41 in ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.998 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdac20349-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.998 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[33691f87-be80-4bf9-b676-a0999284364b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:35 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:35.999 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c769055-6560-4b6f-8614-063297389f17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 systemd-udevd[307823]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.012 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[0ac45995-9c1c-4cf5-ac52-4adb65f731b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 NetworkManager[44960]: <info>  [1759410696.0215] device (tap9fd04bf3-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:11:36 compute-1 NetworkManager[44960]: <info>  [1759410696.0228] device (tap9fd04bf3-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:11:36 compute-1 systemd-machined[188247]: New machine qemu-95-instance-000000cb.
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.036 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[413c7946-846f-4364-af68-394106f8efaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ceph-mon[80926]: pgmap v2976: 305 pgs: 305 active+clean; 648 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 422 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct 02 13:11:36 compute-1 systemd[1]: Started Virtual Machine qemu-95-instance-000000cb.
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.078 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[16166b4e-f932-4be1-8710-8d7d87efcca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:36.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.084 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[89d74c4f-3f1a-42c0-94c5-56108b7596d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 NetworkManager[44960]: <info>  [1759410696.0860] manager: (tapdac20349-40): new Veth device (/org/freedesktop/NetworkManager/Devices/386)
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.137 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[76b432e0-3488-45e9-af53-5886c2955ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.141 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[34d45441-317e-421b-833e-4a08d88e6ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 NetworkManager[44960]: <info>  [1759410696.1756] device (tapdac20349-40): carrier: link connected
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.188 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[f4560c9c-7c7a-48f4-8f9c-b4dae3d9353c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.208 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7aab7-98c1-463d-a6df-e19b7c3640eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855973, 'reachable_time': 44947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307858, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.230 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c44d39-8b75-4083-b1c1-9f0a5c1b6a4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe32:d8a6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855973, 'tstamp': 855973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307859, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.247 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ece3819a-60cd-4af8-bb11-4477ba30bcba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdac20349-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:32:d8:a6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855973, 'reachable_time': 44947, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307860, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.291 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d294d7-89a9-4f26-af83-92e8e94b460c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.368 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca3d03e-4d3b-47c1-8940-eb615dba8dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.370 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.371 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdac20349-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:36 compute-1 NetworkManager[44960]: <info>  [1759410696.3735] manager: (tapdac20349-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct 02 13:11:36 compute-1 kernel: tapdac20349-40: entered promiscuous mode
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.376 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdac20349-40, col_values=(('external_ids', {'iface-id': '71ea06ee-2e8d-4617-a491-cbc5589b4465'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:36 compute-1 ovn_controller[129257]: 2025-10-02T13:11:36Z|00835|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.379 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.380 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9b831838-321c-4e24-bf33-b28bc83afee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.381 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/dac20349-4f21-4aeb-a4a7-d775590cb44a.pid.haproxy
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID dac20349-4f21-4aeb-a4a7-d775590cb44a
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:11:36 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:36.383 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'env', 'PROCESS_TAG=haproxy-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dac20349-4f21-4aeb-a4a7-d775590cb44a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.563 2 DEBUG nova.compute.manager [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.564 2 DEBUG oslo_concurrency.lockutils [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:36 compute-1 nova_compute[230518]: 2025-10-02 13:11:36.565 2 DEBUG nova.compute.manager [req-d1a37f1c-0258-49c9-8566-5dbb55877cfc req-45a425f3-9fc6-40bf-9234-f499606397a4 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Processing event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:11:36 compute-1 podman[307892]: 2025-10-02 13:11:36.795258492 +0000 UTC m=+0.026851565 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:11:37 compute-1 podman[307892]: 2025-10-02 13:11:37.192518596 +0000 UTC m=+0.424111649 container create 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:11:37 compute-1 systemd[1]: Started libpod-conmon-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope.
Oct 02 13:11:37 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:11:37 compute-1 podman[307923]: 2025-10-02 13:11:37.327030139 +0000 UTC m=+0.102183799 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:11:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39bcf2783cfa7733fec5ee8c9f0cd2433751285a50f27407623ad86aee7b446/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:11:37 compute-1 podman[307892]: 2025-10-02 13:11:37.369767841 +0000 UTC m=+0.601360904 container init 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:11:37 compute-1 podman[307924]: 2025-10-02 13:11:37.373871741 +0000 UTC m=+0.133197595 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:11:37 compute-1 podman[307892]: 2025-10-02 13:11:37.375666897 +0000 UTC m=+0.607259960 container start 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:11:37 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : New worker (307997) forked
Oct 02 13:11:37 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : Loading success.
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.766 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.839 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.8389783, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.840 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Started (Lifecycle Event)
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.843 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.848 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.852 2 INFO nova.virt.libvirt.driver [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance spawned successfully.
Oct 02 13:11:37 compute-1 nova_compute[230518]: 2025-10-02 13:11:37.852 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:11:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:38.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:38 compute-1 ceph-mon[80926]: pgmap v2977: 305 pgs: 305 active+clean; 586 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.2 MiB/s wr, 43 op/s
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:11:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:38.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.617 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.618 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.620 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.621 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.622 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.622 2 DEBUG nova.virt.libvirt.driver [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.624 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.625 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.625 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.628 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.780 2 DEBUG nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.781 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG oslo_concurrency.lockutils [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.782 2 DEBUG nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.783 2 WARNING nova.compute.manager [req-174b3298-8f0c-45fe-ae20-b30632bb9953 req-5c196d99-c99b-4a41-be91-f2727d1bf83b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received unexpected event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with vm_state building and task_state spawning.
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.825 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.826 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.8398361, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.826 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Paused (Lifecycle Event)
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.902 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.906 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410697.847628, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.906 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Resumed (Lifecycle Event)
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.944 2 INFO nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 15.88 seconds to spawn the instance on the hypervisor.
Oct 02 13:11:38 compute-1 nova_compute[230518]: 2025-10-02 13:11:38.945 2 DEBUG nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:39 compute-1 nova_compute[230518]: 2025-10-02 13:11:39.005 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:11:39 compute-1 nova_compute[230518]: 2025-10-02 13:11:39.009 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:11:39 compute-1 nova_compute[230518]: 2025-10-02 13:11:39.047 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:11:39 compute-1 nova_compute[230518]: 2025-10-02 13:11:39.082 2 INFO nova.compute.manager [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 19.69 seconds to build instance.
Oct 02 13:11:39 compute-1 nova_compute[230518]: 2025-10-02 13:11:39.174 2 DEBUG oslo_concurrency.lockutils [None req-756ae28a-6a84-4012-8c90-3798faeb9e5c 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:11:39 compute-1 ceph-mon[80926]: pgmap v2978: 305 pgs: 305 active+clean; 567 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.1 MiB/s wr, 38 op/s
Oct 02 13:11:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:40.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:40.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:40 compute-1 nova_compute[230518]: 2025-10-02 13:11:40.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:40 compute-1 ovn_controller[129257]: 2025-10-02T13:11:40Z|00836|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:11:40 compute-1 ovn_controller[129257]: 2025-10-02T13:11:40Z|00837|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct 02 13:11:40 compute-1 nova_compute[230518]: 2025-10-02 13:11:40.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:41 compute-1 ceph-mon[80926]: pgmap v2979: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 27 KiB/s wr, 48 op/s
Oct 02 13:11:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:42.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:42.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:43 compute-1 nova_compute[230518]: 2025-10-02 13:11:43.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:43 compute-1 ceph-mon[80926]: pgmap v2980: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 22 KiB/s wr, 101 op/s
Oct 02 13:11:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:11:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:44.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:11:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:45 compute-1 nova_compute[230518]: 2025-10-02 13:11:45.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:45 compute-1 ceph-mon[80926]: pgmap v2981: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 97 op/s
Oct 02 13:11:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:11:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:11:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:46.336 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:11:46 compute-1 nova_compute[230518]: 2025-10-02 13:11:46.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:46 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:46.340 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:11:47 compute-1 ceph-mon[80926]: pgmap v2982: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 26 KiB/s wr, 98 op/s
Oct 02 13:11:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:11:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:48.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:48.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.715 2 DEBUG nova.compute.manager [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.716 2 DEBUG nova.compute.manager [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.717 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.718 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:11:48 compute-1 nova_compute[230518]: 2025-10-02 13:11:48.719 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:11:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:50.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:50 compute-1 ceph-mon[80926]: pgmap v2983: 305 pgs: 305 active+clean; 568 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 23 KiB/s wr, 91 op/s
Oct 02 13:11:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:50.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:50 compute-1 nova_compute[230518]: 2025-10-02 13:11:50.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:51 compute-1 ceph-mon[80926]: pgmap v2984: 305 pgs: 305 active+clean; 572 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 739 KiB/s wr, 102 op/s
Oct 02 13:11:51 compute-1 ovn_controller[129257]: 2025-10-02T13:11:51Z|00838|binding|INFO|Releasing lport aa788301-8c47-4421-b693-3b37cb064ae2 from this chassis (sb_readonly=0)
Oct 02 13:11:51 compute-1 ovn_controller[129257]: 2025-10-02T13:11:51Z|00839|binding|INFO|Releasing lport 71ea06ee-2e8d-4617-a491-cbc5589b4465 from this chassis (sb_readonly=0)
Oct 02 13:11:51 compute-1 nova_compute[230518]: 2025-10-02 13:11:51.698 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updated VIF entry in instance network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:11:51 compute-1 nova_compute[230518]: 2025-10-02 13:11:51.699 2 DEBUG nova.network.neutron [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:11:51 compute-1 nova_compute[230518]: 2025-10-02 13:11:51.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:51 compute-1 nova_compute[230518]: 2025-10-02 13:11:51.766 2 DEBUG oslo_concurrency.lockutils [req-4cd22bfb-3cd0-4d49-8602-3e21f161ebb0 req-206d4085-005c-4414-a4cb-347158625713 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:11:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:52.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:52.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:53 compute-1 nova_compute[230518]: 2025-10-02 13:11:53.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:11:53.343 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:11:53 compute-1 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:11:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:54.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:54 compute-1 ceph-mon[80926]: pgmap v2985: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 89 op/s
Oct 02 13:11:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:11:54 compute-1 ovn_controller[129257]: 2025-10-02T13:11:54Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:26:39 10.100.0.12
Oct 02 13:11:54 compute-1 ovn_controller[129257]: 2025-10-02T13:11:54Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:26:39 10.100.0.12
Oct 02 13:11:55 compute-1 nova_compute[230518]: 2025-10-02 13:11:55.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:56.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:56 compute-1 ceph-mon[80926]: pgmap v2986: 305 pgs: 305 active+clean; 579 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Oct 02 13:11:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:11:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:11:57 compute-1 ceph-mon[80926]: pgmap v2987: 305 pgs: 305 active+clean; 638 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 3.8 MiB/s wr, 78 op/s
Oct 02 13:11:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:11:58.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:58 compute-1 nova_compute[230518]: 2025-10-02 13:11:58.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:11:58 compute-1 sudo[308007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:58 compute-1 sudo[308007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-1 sudo[308007]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-1 sudo[308044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:11:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:11:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:11:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:11:58.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:11:58 compute-1 sudo[308044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-1 sudo[308044]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-1 podman[308032]: 2025-10-02 13:11:58.37370906 +0000 UTC m=+0.104620006 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:11:58 compute-1 podman[308031]: 2025-10-02 13:11:58.399153919 +0000 UTC m=+0.125267655 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:11:58 compute-1 sudo[308097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:11:58 compute-1 sudo[308097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:58 compute-1 sudo[308097]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:58 compute-1 sudo[308127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:11:58 compute-1 sudo[308127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:11:59 compute-1 sudo[308127]: pam_unix(sudo:session): session closed for user root
Oct 02 13:11:59 compute-1 ceph-mon[80926]: pgmap v2988: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 315 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:11:59 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:11:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/238194769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:00 compute-1 nova_compute[230518]: 2025-10-02 13:12:00.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:01 compute-1 ceph-mon[80926]: pgmap v2989: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 02 13:12:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:02.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2794547524' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:03 compute-1 nova_compute[230518]: 2025-10-02 13:12:03.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:03 compute-1 ceph-mon[80926]: pgmap v2990: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 291 KiB/s rd, 3.2 MiB/s wr, 80 op/s
Oct 02 13:12:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:05 compute-1 sudo[308182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:12:05 compute-1 sudo[308182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:05 compute-1 sudo[308182]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:05 compute-1 nova_compute[230518]: 2025-10-02 13:12:05.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:05 compute-1 sudo[308207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:12:05 compute-1 sudo[308207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:12:05 compute-1 sudo[308207]: pam_unix(sudo:session): session closed for user root
Oct 02 13:12:06 compute-1 ceph-mon[80926]: pgmap v2991: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Oct 02 13:12:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2639919700' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:12:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1040018228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:07 compute-1 podman[308233]: 2025-10-02 13:12:07.830594269 +0000 UTC m=+0.071562848 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:12:07 compute-1 podman[308232]: 2025-10-02 13:12:07.837565028 +0000 UTC m=+0.080051225 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, tcib_managed=true)
Oct 02 13:12:08 compute-1 ceph-mon[80926]: pgmap v2992: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Oct 02 13:12:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:08 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:12:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:08.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:08 compute-1 nova_compute[230518]: 2025-10-02 13:12:08.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:10 compute-1 ceph-mon[80926]: pgmap v2993: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 122 KiB/s wr, 24 op/s
Oct 02 13:12:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:10.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:10 compute-1 nova_compute[230518]: 2025-10-02 13:12:10.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:11 compute-1 nova_compute[230518]: 2025-10-02 13:12:11.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:12 compute-1 ceph-mon[80926]: pgmap v2994: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 396 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:12:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e372 e372: 3 total, 3 up, 3 in
Oct 02 13:12:13 compute-1 nova_compute[230518]: 2025-10-02 13:12:13.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:13 compute-1 ceph-mon[80926]: osdmap e372: 3 total, 3 up, 3 in
Oct 02 13:12:13 compute-1 ceph-mon[80926]: pgmap v2996: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 89 op/s
Oct 02 13:12:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3655608364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 e373: 3 total, 3 up, 3 in
Oct 02 13:12:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:14.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:14 compute-1 ceph-mon[80926]: osdmap e373: 3 total, 3 up, 3 in
Oct 02 13:12:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:15 compute-1 nova_compute[230518]: 2025-10-02 13:12:15.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:15 compute-1 ceph-mon[80926]: pgmap v2998: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 111 op/s
Oct 02 13:12:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:16.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:16.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:17 compute-1 ceph-mon[80926]: pgmap v2999: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Oct 02 13:12:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:18.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:18 compute-1 nova_compute[230518]: 2025-10-02 13:12:18.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:18.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:18 compute-1 nova_compute[230518]: 2025-10-02 13:12:18.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:19 compute-1 nova_compute[230518]: 2025-10-02 13:12:19.260 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:19 compute-1 nova_compute[230518]: 2025-10-02 13:12:19.260 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:12:19 compute-1 nova_compute[230518]: 2025-10-02 13:12:19.278 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:12:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3057999980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/8473852' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:20.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:20 compute-1 ceph-mon[80926]: pgmap v3000: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 149 op/s
Oct 02 13:12:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/979295640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:20 compute-1 nova_compute[230518]: 2025-10-02 13:12:20.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:21 compute-1 nova_compute[230518]: 2025-10-02 13:12:21.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:21 compute-1 ceph-mon[80926]: pgmap v3001: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.6 MiB/s wr, 78 op/s
Oct 02 13:12:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e374 e374: 3 total, 3 up, 3 in
Oct 02 13:12:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:22.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:22 compute-1 ceph-mon[80926]: osdmap e374: 3 total, 3 up, 3 in
Oct 02 13:12:23 compute-1 nova_compute[230518]: 2025-10-02 13:12:23.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:23 compute-1 ceph-mon[80926]: pgmap v3003: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct 02 13:12:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1560297776' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 e375: 3 total, 3 up, 3 in
Oct 02 13:12:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:24.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:24.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:24 compute-1 ceph-mon[80926]: osdmap e375: 3 total, 3 up, 3 in
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.103 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.104 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.104 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2888907482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.617 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.719 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.720 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.725 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.726 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.967 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.968 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3792MB free_disk=20.739166259765625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.968 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:25 compute-1 nova_compute[230518]: 2025-10-02 13:12:25.969 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:12:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.150 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 658821a7-5b97-43ad-8fe2-46e5303cf56c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 5ecce258-097c-4a5a-9c44-087e8129ceaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.151 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.152 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:12:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.206 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:12:26 compute-1 ceph-mon[80926]: pgmap v3005: 305 pgs: 305 active+clean; 693 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 461 KiB/s wr, 53 op/s
Oct 02 13:12:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2888907482' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:26.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:12:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1570152893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:26 compute-1 nova_compute[230518]: 2025-10-02 13:12:26.998 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.792s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:12:27 compute-1 nova_compute[230518]: 2025-10-02 13:12:27.006 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:12:27 compute-1 nova_compute[230518]: 2025-10-02 13:12:27.038 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:12:27 compute-1 nova_compute[230518]: 2025-10-02 13:12:27.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:12:27 compute-1 nova_compute[230518]: 2025-10-02 13:12:27.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:12:27 compute-1 ceph-mon[80926]: pgmap v3006: 305 pgs: 305 active+clean; 652 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 21 KiB/s wr, 58 op/s
Oct 02 13:12:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1570152893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:28.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:28 compute-1 nova_compute[230518]: 2025-10-02 13:12:28.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:28.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:28 compute-1 podman[308320]: 2025-10-02 13:12:28.863312891 +0000 UTC m=+0.092432964 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:12:28 compute-1 podman[308319]: 2025-10-02 13:12:28.906177057 +0000 UTC m=+0.138321535 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:12:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:30 compute-1 ceph-mon[80926]: pgmap v3007: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 23 KiB/s wr, 76 op/s
Oct 02 13:12:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/328279734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:30 compute-1 nova_compute[230518]: 2025-10-02 13:12:30.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:32 compute-1 nova_compute[230518]: 2025-10-02 13:12:32.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:12:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 e376: 3 total, 3 up, 3 in
Oct 02 13:12:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:32.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3156646001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:33 compute-1 nova_compute[230518]: 2025-10-02 13:12:33.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:34.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:12:35 compute-1 ceph-mon[80926]: pgmap v3008: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 23 KiB/s wr, 117 op/s
Oct 02 13:12:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2512212279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:35 compute-1 ceph-mon[80926]: osdmap e376: 3 total, 3 up, 3 in
Oct 02 13:12:35 compute-1 ceph-mon[80926]: pgmap v3010: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.9 KiB/s wr, 170 op/s
Oct 02 13:12:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2121352586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.311 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.312 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.312 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.313 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:12:35 compute-1 nova_compute[230518]: 2025-10-02 13:12:35.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:36.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:37 compute-1 nova_compute[230518]: 2025-10-02 13:12:37.510 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [{"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:12:37 compute-1 nova_compute[230518]: 2025-10-02 13:12:37.539 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-658821a7-5b97-43ad-8fe2-46e5303cf56c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:12:37 compute-1 nova_compute[230518]: 2025-10-02 13:12:37.540 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:12:37 compute-1 nova_compute[230518]: 2025-10-02 13:12:37.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:37 compute-1 nova_compute[230518]: 2025-10-02 13:12:37.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:38.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:38 compute-1 nova_compute[230518]: 2025-10-02 13:12:38.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:38.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:38 compute-1 ceph-mon[80926]: pgmap v3011: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.9 KiB/s wr, 145 op/s
Oct 02 13:12:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2156979982' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:38 compute-1 podman[308366]: 2025-10-02 13:12:38.8240321 +0000 UTC m=+0.074060097 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 13:12:38 compute-1 podman[308367]: 2025-10-02 13:12:38.883167546 +0000 UTC m=+0.120456623 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd)
Oct 02 13:12:40 compute-1 ceph-mon[80926]: pgmap v3012: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 KiB/s wr, 114 op/s
Oct 02 13:12:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:40.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:40 compute-1 nova_compute[230518]: 2025-10-02 13:12:40.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:43 compute-1 nova_compute[230518]: 2025-10-02 13:12:43.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:43 compute-1 ceph-mon[80926]: pgmap v3013: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 89 op/s
Oct 02 13:12:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:12:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1589505672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:12:43 compute-1 nova_compute[230518]: 2025-10-02 13:12:43.534 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:12:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:44 compute-1 ceph-mon[80926]: pgmap v3014: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.5 KiB/s wr, 50 op/s
Oct 02 13:12:44 compute-1 ceph-mon[80926]: pgmap v3015: 305 pgs: 305 active+clean; 647 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 3.4 KiB/s wr, 11 op/s
Oct 02 13:12:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:44.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:45 compute-1 ceph-mon[80926]: pgmap v3016: 305 pgs: 305 active+clean; 650 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 13 KiB/s wr, 7 op/s
Oct 02 13:12:45 compute-1 nova_compute[230518]: 2025-10-02 13:12:45.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:46.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:46.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/28757893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:47 compute-1 ceph-mon[80926]: pgmap v3017: 305 pgs: 305 active+clean; 686 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 2.4 MiB/s wr, 46 op/s
Oct 02 13:12:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3907190954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:12:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:48 compute-1 nova_compute[230518]: 2025-10-02 13:12:48.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:48.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:50.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:50 compute-1 ceph-mon[80926]: pgmap v3018: 305 pgs: 305 active+clean; 694 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 179 KiB/s rd, 3.8 MiB/s wr, 85 op/s
Oct 02 13:12:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4046206427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:12:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:50.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:50 compute-1 nova_compute[230518]: 2025-10-02 13:12:50.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:51 compute-1 ceph-mon[80926]: pgmap v3019: 305 pgs: 305 active+clean; 666 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Oct 02 13:12:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:51.650 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:12:51 compute-1 nova_compute[230518]: 2025-10-02 13:12:51.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:51.651 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:12:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:52.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:52.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:53 compute-1 nova_compute[230518]: 2025-10-02 13:12:53.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:53 compute-1 ceph-mon[80926]: pgmap v3020: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Oct 02 13:12:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:12:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:54.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:12:55 compute-1 nova_compute[230518]: 2025-10-02 13:12:55.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:56 compute-1 ceph-mon[80926]: pgmap v3021: 305 pgs: 305 active+clean; 645 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 953 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Oct 02 13:12:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:12:56.653 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:12:57 compute-1 ceph-mon[80926]: pgmap v3022: 305 pgs: 305 active+clean; 596 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Oct 02 13:12:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:12:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:12:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:58 compute-1 nova_compute[230518]: 2025-10-02 13:12:58.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:12:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:12:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:12:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:12:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:12:59 compute-1 ceph-mon[80926]: pgmap v3023: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 178 op/s
Oct 02 13:12:59 compute-1 podman[308412]: 2025-10-02 13:12:59.855749524 +0000 UTC m=+0.078542686 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:12:59 compute-1 podman[308411]: 2025-10-02 13:12:59.887690628 +0000 UTC m=+0.119477013 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:13:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:00.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:00 compute-1 nova_compute[230518]: 2025-10-02 13:13:00.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3468991839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:02 compute-1 ceph-mon[80926]: pgmap v3024: 305 pgs: 305 active+clean; 533 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 124 KiB/s wr, 151 op/s
Oct 02 13:13:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:02.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:03 compute-1 nova_compute[230518]: 2025-10-02 13:13:03.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:03 compute-1 ceph-mon[80926]: pgmap v3025: 305 pgs: 305 active+clean; 504 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 66 KiB/s wr, 142 op/s
Oct 02 13:13:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:04.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3318170492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:05 compute-1 nova_compute[230518]: 2025-10-02 13:13:05.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:13:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:05 compute-1 ceph-mon[80926]: pgmap v3026: 305 pgs: 305 active+clean; 485 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 170 KiB/s wr, 129 op/s
Oct 02 13:13:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2316275095' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:05 compute-1 sudo[308455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:05 compute-1 sudo[308455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:05 compute-1 sudo[308455]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-1 sudo[308480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:06 compute-1 sudo[308480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-1 sudo[308480]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:13:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:06 compute-1 sudo[308505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:06 compute-1 sudo[308505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-1 sudo[308505]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:06 compute-1 sudo[308530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:13:06 compute-1 sudo[308530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.compute.manager [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.compute.manager [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing instance network info cache due to event network-changed-9fd04bf3-73c2-4224-81ff-32ef5640604b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.210 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Refreshing network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:13:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:06.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.261 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.263 2 INFO nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Terminating instance
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.264 2 DEBUG nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.336 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.337 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.338 2 INFO nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Terminating instance
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.339 2 DEBUG nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:13:06 compute-1 kernel: tap15cb070c-0f (unregistering): left promiscuous mode
Oct 02 13:13:06 compute-1 NetworkManager[44960]: <info>  [1759410786.3980] device (tap15cb070c-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00840|binding|INFO|Releasing lport 15cb070c-0f52-464f-a2b4-8597c15212e9 from this chassis (sb_readonly=0)
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00841|binding|INFO|Setting lport 15cb070c-0f52-464f-a2b4-8597c15212e9 down in Southbound
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00842|binding|INFO|Removing iface tap15cb070c-0f ovn-installed in OVS
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.416 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:47:21 10.100.0.3'], port_security=['fa:16:3e:e2:47:21 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '658821a7-5b97-43ad-8fe2-46e5303cf56c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9001b9c-bca6-4085-a954-1414269e31bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9f85b8f387b146d29eabe946c4fbdee8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c95f312a-09a8-4e2c-af55-3ef0a0e41bfc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=57ece03e-f90b-4cd6-ae02-c9a908c888ae, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=15cb070c-0f52-464f-a2b4-8597c15212e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.417 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 15cb070c-0f52-464f-a2b4-8597c15212e9 in datapath d9001b9c-bca6-4085-a954-1414269e31bc unbound from our chassis
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.419 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9001b9c-bca6-4085-a954-1414269e31bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[763800aa-a1a4-481c-95e2-5485ff6be373]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.425 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc namespace which is not needed anymore
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:06.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:06 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Oct 02 13:13:06 compute-1 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000c5.scope: Consumed 23.553s CPU time.
Oct 02 13:13:06 compute-1 kernel: tap9fd04bf3-73 (unregistering): left promiscuous mode
Oct 02 13:13:06 compute-1 systemd-machined[188247]: Machine qemu-91-instance-000000c5 terminated.
Oct 02 13:13:06 compute-1 NetworkManager[44960]: <info>  [1759410786.4861] device (tap9fd04bf3-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00843|binding|INFO|Releasing lport 9fd04bf3-73c2-4224-81ff-32ef5640604b from this chassis (sb_readonly=0)
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00844|binding|INFO|Setting lport 9fd04bf3-73c2-4224-81ff-32ef5640604b down in Southbound
Oct 02 13:13:06 compute-1 ovn_controller[129257]: 2025-10-02T13:13:06Z|00845|binding|INFO|Removing iface tap9fd04bf3-73 ovn-installed in OVS
Oct 02 13:13:06 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:06.500 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:26:39 10.100.0.12'], port_security=['fa:16:3e:ef:26:39 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5ecce258-097c-4a5a-9c44-087e8129ceaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e911de934ec043d1bd942c8aed562d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'caab64a4-2f87-4e39-a0ac-b96f95aae4c5 e9085353-0bf0-4de8-ac60-c4d68c9ff284', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94f92e0-9e2a-42b5-8a3e-79ddfa458897, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=9fd04bf3-73c2-4224-81ff-32ef5640604b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Oct 02 13:13:06 compute-1 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000cb.scope: Consumed 18.636s CPU time.
Oct 02 13:13:06 compute-1 systemd-machined[188247]: Machine qemu-95-instance-000000cb terminated.
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.705 2 INFO nova.virt.libvirt.driver [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Instance destroyed successfully.
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.706 2 DEBUG nova.objects.instance [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lazy-loading 'resources' on Instance uuid 658821a7-5b97-43ad-8fe2-46e5303cf56c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.723 2 DEBUG nova.virt.libvirt.vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:08:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='multiattach-server-1',id=197,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGLRY7MYmIa6+oLUh+Qg+B8a5i2XXFFyzSdgxs13sBRV1pAy/AOUY7U032oAYrVoY3TX/q037Gu8fuAeVLEbydGt9ytZ7oOiP2uoiKS3ZsON6mJ6KSvHrVdqmkzPhkxnA==',key_name='tempest-keypair-841361442',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:09:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9f85b8f387b146d29eabe946c4fbdee8',ramdisk_id='',reservation_id='r-51wwuied',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-2011266702',owner_user_name='tempest-AttachVolumeMultiAttachTest-2011266702-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:09:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='156cc6022c70402ab6d194a340b076d5',uuid=658821a7-5b97-43ad-8fe2-46e5303cf56c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.723 2 DEBUG nova.network.os_vif_util [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converting VIF {"id": "15cb070c-0f52-464f-a2b4-8597c15212e9", "address": "fa:16:3e:e2:47:21", "network": {"id": "d9001b9c-bca6-4085-a954-1414269e31bc", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1075503939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9f85b8f387b146d29eabe946c4fbdee8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap15cb070c-0f", "ovs_interfaceid": "15cb070c-0f52-464f-a2b4-8597c15212e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.724 2 DEBUG nova.network.os_vif_util [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.725 2 DEBUG os_vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap15cb070c-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.740 2 INFO os_vif [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:47:21,bridge_name='br-int',has_traffic_filtering=True,id=15cb070c-0f52-464f-a2b4-8597c15212e9,network=Network(d9001b9c-bca6-4085-a954-1414269e31bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap15cb070c-0f')
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.780 2 INFO nova.virt.libvirt.driver [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Instance destroyed successfully.
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.781 2 DEBUG nova.objects.instance [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lazy-loading 'resources' on Instance uuid 5ecce258-097c-4a5a-9c44-087e8129ceaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:06 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : haproxy version is 2.8.14-c23fe91
Oct 02 13:13:06 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [NOTICE]   (304773) : path to executable is /usr/sbin/haproxy
Oct 02 13:13:06 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [WARNING]  (304773) : Exiting Master process...
Oct 02 13:13:06 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [ALERT]    (304773) : Current worker (304775) exited with code 143 (Terminated)
Oct 02 13:13:06 compute-1 neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc[304769]: [WARNING]  (304773) : All workers exited. Exiting... (0)
Oct 02 13:13:06 compute-1 systemd[1]: libpod-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope: Deactivated successfully.
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.815 2 DEBUG nova.virt.libvirt.vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-2067500093-access_point-1967034242',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-2067500093-ac',id=203,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARJDtWwEIa73jOfITh0S3Xi3OzT9rBF+alfsyLXRbxk2puonzRxucJBK2BRHoshC3dw4yzXZgc14rNHWEy9MW96gMF19bT8yeo1M4v5Bwum2wxMyyCXx0KGJeRmwnd5wQ==',key_name='tempest-TestSecurityGroupsBasicOps-133947143',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:11:38Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e911de934ec043d1bd942c8aed562d04',ramdisk_id='',reservation_id='r-i85j97wy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-2067500093',owner_user_name='tempest-TestSecurityGroupsBasicOps-2067500093-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:11:39Z,user_data=None,user_id='362b536431b64b15b67740060af57e9c',uuid=5ecce258-097c-4a5a-9c44-087e8129ceaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.815 2 DEBUG nova.network.os_vif_util [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converting VIF {"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:13:06 compute-1 podman[308623]: 2025-10-02 13:13:06.817241032 +0000 UTC m=+0.236908670 container died c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.817 2 DEBUG nova.network.os_vif_util [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.819 2 DEBUG os_vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fd04bf3-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:06 compute-1 nova_compute[230518]: 2025-10-02 13:13:06.829 2 INFO os_vif [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:26:39,bridge_name='br-int',has_traffic_filtering=True,id=9fd04bf3-73c2-4224-81ff-32ef5640604b,network=Network(dac20349-4f21-4aeb-a4a7-d775590cb44a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fd04bf3-73')
Oct 02 13:13:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/106990781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003-userdata-shm.mount: Deactivated successfully.
Oct 02 13:13:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-410f773b118732b26b6feca850b0977a2c84aeb1020cb4d6bcef409aa2a24707-merged.mount: Deactivated successfully.
Oct 02 13:13:07 compute-1 podman[308623]: 2025-10-02 13:13:07.518526193 +0000 UTC m=+0.938193831 container cleanup c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:13:07 compute-1 systemd[1]: libpod-conmon-c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003.scope: Deactivated successfully.
Oct 02 13:13:07 compute-1 podman[308740]: 2025-10-02 13:13:07.8057014 +0000 UTC m=+0.463191265 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:13:08 compute-1 ceph-mon[80926]: pgmap v3027: 305 pgs: 305 active+clean; 501 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Oct 02 13:13:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:08.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.419 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.421 2 DEBUG oslo_concurrency.lockutils [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.422 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.422 2 DEBUG nova.compute.manager [req-80508610-5f35-466b-b904-89e83fc01c3b req-c6e88c82-60ad-4d40-8620-fb12e41ce312 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-unplugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.453 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.453 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG oslo_concurrency.lockutils [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.454 2 DEBUG nova.compute.manager [req-af6ea8cf-b318-4852-8a12-27668fd47ee5 req-b026406f-9461-4394-b0c7-94611befc89f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-unplugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:13:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:08.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:08 compute-1 podman[308753]: 2025-10-02 13:13:08.470587668 +0000 UTC m=+0.902455268 container remove c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:13:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.480 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[92faccf8-cca9-43e6-8e4b-7b7e2d921426]: (4, ('Thu Oct  2 01:13:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003)\nc27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003\nThu Oct  2 01:13:07 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc (c27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003)\nc27054ff2842037e6cc54aa3b6a9d5fecd0fae1165591163b925e55b64e86003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.484 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[019a1d4c-5645-4803-baa1-487aea6a0fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 kernel: tapd9001b9c-b0: left promiscuous mode
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.486 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9001b9c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-1 nova_compute[230518]: 2025-10-02 13:13:08.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.515 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8eee8018-2779-4b6f-96be-da0d71127d78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 podman[308740]: 2025-10-02 13:13:08.538602084 +0000 UTC m=+1.196091869 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.560 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d3566eff-0cdd-4844-a4dd-caa49b3b4ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.561 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3330ae15-8fa3-4240-9796-857feccc6295]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.590 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ca78f58c-6969-4a8b-97da-adde91e41533]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840851, 'reachable_time': 31347, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308790, 'error': None, 'target': 'ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 systemd[1]: run-netns-ovnmeta\x2dd9001b9c\x2dbca6\x2d4085\x2da954\x2d1414269e31bc.mount: Deactivated successfully.
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.601 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9001b9c-bca6-4085-a954-1414269e31bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.602 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b444c8-27a5-4d69-a84d-19b4a0a6f8e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.604 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 9fd04bf3-73c2-4224-81ff-32ef5640604b in datapath dac20349-4f21-4aeb-a4a7-d775590cb44a unbound from our chassis
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.605 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dac20349-4f21-4aeb-a4a7-d775590cb44a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.606 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[088ac60a-4959-4e61-aa5f-0ce69ea625e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:08.607 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a namespace which is not needed anymore
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : haproxy version is 2.8.14-c23fe91
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [NOTICE]   (307994) : path to executable is /usr/sbin/haproxy
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : Exiting Master process...
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : Exiting Master process...
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [ALERT]    (307994) : Current worker (307997) exited with code 143 (Terminated)
Oct 02 13:13:09 compute-1 neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a[307970]: [WARNING]  (307994) : All workers exited. Exiting... (0)
Oct 02 13:13:09 compute-1 podman[308808]: 2025-10-02 13:13:09.164161367 +0000 UTC m=+0.176554095 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:13:09 compute-1 systemd[1]: libpod-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope: Deactivated successfully.
Oct 02 13:13:09 compute-1 podman[308811]: 2025-10-02 13:13:09.169386871 +0000 UTC m=+0.170558226 container died 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:13:09 compute-1 ceph-mon[80926]: pgmap v3028: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 103 op/s
Oct 02 13:13:09 compute-1 podman[308809]: 2025-10-02 13:13:09.406311421 +0000 UTC m=+0.406254748 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:13:09 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8-userdata-shm.mount: Deactivated successfully.
Oct 02 13:13:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-f39bcf2783cfa7733fec5ee8c9f0cd2433751285a50f27407623ad86aee7b446-merged.mount: Deactivated successfully.
Oct 02 13:13:09 compute-1 podman[308811]: 2025-10-02 13:13:09.703539744 +0000 UTC m=+0.704711099 container cleanup 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:13:09 compute-1 systemd[1]: libpod-conmon-1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8.scope: Deactivated successfully.
Oct 02 13:13:09 compute-1 nova_compute[230518]: 2025-10-02 13:13:09.933 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updated VIF entry in instance network info cache for port 9fd04bf3-73c2-4224-81ff-32ef5640604b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:13:09 compute-1 nova_compute[230518]: 2025-10-02 13:13:09.934 2 DEBUG nova.network.neutron [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [{"id": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "address": "fa:16:3e:ef:26:39", "network": {"id": "dac20349-4f21-4aeb-a4a7-d775590cb44a", "bridge": "br-int", "label": "tempest-network-smoke--1297227184", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e911de934ec043d1bd942c8aed562d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fd04bf3-73", "ovs_interfaceid": "9fd04bf3-73c2-4224-81ff-32ef5640604b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:09 compute-1 nova_compute[230518]: 2025-10-02 13:13:09.950 2 DEBUG oslo_concurrency.lockutils [req-42316382-e4e6-48d6-a418-22f65e49d84c req-4ce1ac0a-cd47-4542-9b9b-2d76e5feb490 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-5ecce258-097c-4a5a-9c44-087e8129ceaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:13:10 compute-1 podman[308909]: 2025-10-02 13:13:10.037935503 +0000 UTC m=+0.296630504 container remove 1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.046 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45cd89d0-6156-4c3f-9443-d8becf252ed3]: (4, ('Thu Oct  2 01:13:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8)\n1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8\nThu Oct  2 01:13:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a (1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8)\n1b51d7f529ca9309b852c8d9d84d92761961af0af6c283bc554124db9c4cccc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.048 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37d6603f-f501-48ff-b15b-f525021d943d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.049 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdac20349-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:10 compute-1 kernel: tapdac20349-40: left promiscuous mode
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.061 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[81da0f3a-c823-445a-aa74-d71c77b5e5fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.091 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19a24845-ba57-41ae-86a9-c9ebc03d3c07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.094 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[cd369f6e-e270-464b-945a-235f16112cde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.121 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc76cf1-8e3c-4d54-ade2-e628ad51ef73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855962, 'reachable_time': 43297, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308946, 'error': None, 'target': 'ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.125 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dac20349-4f21-4aeb-a4a7-d775590cb44a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:13:10 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:10.125 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[37f3cba8-e23c-4d37-85be-81a7ceba34c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:13:10 compute-1 systemd[1]: run-netns-ovnmeta\x2ddac20349\x2d4f21\x2d4aeb\x2da4a7\x2dd775590cb44a.mount: Deactivated successfully.
Oct 02 13:13:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:10.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:10 compute-1 sudo[308530]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.501 2 DEBUG nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG oslo_concurrency.lockutils [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.502 2 DEBUG nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] No waiting events found dispatching network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.503 2 WARNING nova.compute.manager [req-c4c3aad2-e767-4360-ac48-3959cee8334f req-60e2918f-19b1-4bcc-94e2-8b46f8b75917 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received unexpected event network-vif-plugged-9fd04bf3-73c2-4224-81ff-32ef5640604b for instance with vm_state active and task_state deleting.
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.552 2 DEBUG nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.553 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.555 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.555 2 DEBUG oslo_concurrency.lockutils [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.556 2 DEBUG nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] No waiting events found dispatching network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.556 2 WARNING nova.compute.manager [req-b65fdcf1-a7ac-4cc4-9168-eaed190a7882 req-ad11f538-6150-484e-ac6c-a04ef67a09dd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received unexpected event network-vif-plugged-15cb070c-0f52-464f-a2b4-8597c15212e9 for instance with vm_state active and task_state deleting.
Oct 02 13:13:10 compute-1 sudo[308986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:10 compute-1 sudo[308986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:10 compute-1 sudo[308986]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:10 compute-1 sudo[309011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:13:10 compute-1 sudo[309011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:10 compute-1 sudo[309011]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:10 compute-1 sudo[309036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:10 compute-1 sudo[309036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:10 compute-1 sudo[309036]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:10 compute-1 nova_compute[230518]: 2025-10-02 13:13:10.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:10 compute-1 sudo[309061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:13:10 compute-1 sudo[309061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.405 2 INFO nova.virt.libvirt.driver [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deleting instance files /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c_del
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.407 2 INFO nova.virt.libvirt.driver [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deletion of /var/lib/nova/instances/658821a7-5b97-43ad-8fe2-46e5303cf56c_del complete
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.418 2 INFO nova.virt.libvirt.driver [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deleting instance files /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf_del
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.419 2 INFO nova.virt.libvirt.driver [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deletion of /var/lib/nova/instances/5ecce258-097c-4a5a-9c44-087e8129ceaf_del complete
Oct 02 13:13:11 compute-1 sudo[309061]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.529 2 INFO nova.compute.manager [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 5.19 seconds to destroy the instance on the hypervisor.
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.530 2 DEBUG oslo.service.loopingcall [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.531 2 DEBUG nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.531 2 DEBUG nova.network.neutron [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.538 2 INFO nova.compute.manager [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 5.27 seconds to destroy the instance on the hypervisor.
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.539 2 DEBUG oslo.service.loopingcall [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.539 2 DEBUG nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.540 2 DEBUG nova.network.neutron [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:13:11 compute-1 ceph-mon[80926]: pgmap v3029: 305 pgs: 305 active+clean; 438 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Oct 02 13:13:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:11 compute-1 nova_compute[230518]: 2025-10-02 13:13:11.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:12.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:12.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:13:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:13:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.711 2 DEBUG nova.network.neutron [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.737 2 DEBUG nova.network.neutron [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.741 2 INFO nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Took 2.21 seconds to deallocate network for instance.
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.774 2 INFO nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Took 2.23 seconds to deallocate network for instance.
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.805 2 DEBUG nova.compute.manager [req-0aef10e3-9bc8-4adf-8d29-f107013093bb req-42effe3e-aaf8-43aa-93ff-941047c81b93 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Received event network-vif-deleted-9fd04bf3-73c2-4224-81ff-32ef5640604b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.841 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.841 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.861 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:13 compute-1 nova_compute[230518]: 2025-10-02 13:13:13.900 2 DEBUG oslo_concurrency.processutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:14 compute-1 ceph-mon[80926]: pgmap v3030: 305 pgs: 305 active+clean; 404 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Oct 02 13:13:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1903692744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.443 2 DEBUG oslo_concurrency.processutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.452 2 DEBUG nova.compute.provider_tree [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.509 2 DEBUG nova.scheduler.client.report [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.541 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.546 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.580 2 INFO nova.scheduler.client.report [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Deleted allocations for instance 5ecce258-097c-4a5a-9c44-087e8129ceaf
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.597 2 DEBUG oslo_concurrency.processutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:14 compute-1 nova_compute[230518]: 2025-10-02 13:13:14.642 2 DEBUG oslo_concurrency.lockutils [None req-feb12e94-1c98-4690-b702-f641c70bf864 362b536431b64b15b67740060af57e9c e911de934ec043d1bd942c8aed562d04 - - default default] Lock "5ecce258-097c-4a5a-9c44-087e8129ceaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.072 2 DEBUG oslo_concurrency.processutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.083 2 DEBUG nova.compute.provider_tree [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.111 2 DEBUG nova.scheduler.client.report [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.133 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.158 2 INFO nova.scheduler.client.report [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Deleted allocations for instance 658821a7-5b97-43ad-8fe2-46e5303cf56c
Oct 02 13:13:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1903692744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:15 compute-1 ceph-mon[80926]: pgmap v3031: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Oct 02 13:13:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3948110413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.262 2 DEBUG oslo_concurrency.lockutils [None req-33bf0fbe-3265-4e4b-aa37-87d7595ee73c 156cc6022c70402ab6d194a340b076d5 9f85b8f387b146d29eabe946c4fbdee8 - - default default] Lock "658821a7-5b97-43ad-8fe2-46e5303cf56c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:15 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:16 compute-1 nova_compute[230518]: 2025-10-02 13:13:15.999 2 DEBUG nova.compute.manager [req-bd5cbfcb-e5d3-4cdf-a7c5-3f171604b3f7 req-75908910-649d-4d7c-a512-6858af7c2528 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Received event network-vif-deleted-15cb070c-0f52-464f-a2b4-8597c15212e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:16.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:16 compute-1 nova_compute[230518]: 2025-10-02 13:13:16.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:17 compute-1 ceph-mon[80926]: pgmap v3032: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.0 MiB/s wr, 123 op/s
Oct 02 13:13:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:18.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:18 compute-1 sudo[309161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:13:18 compute-1 sudo[309161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:18 compute-1 sudo[309161]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:18 compute-1 sudo[309186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:13:18 compute-1 sudo[309186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:13:18 compute-1 sudo[309186]: pam_unix(sudo:session): session closed for user root
Oct 02 13:13:19 compute-1 nova_compute[230518]: 2025-10-02 13:13:19.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:19 compute-1 nova_compute[230518]: 2025-10-02 13:13:19.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:19 compute-1 ceph-mon[80926]: pgmap v3033: 305 pgs: 305 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 KiB/s rd, 658 KiB/s wr, 97 op/s
Oct 02 13:13:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:13:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:20.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:20 compute-1 nova_compute[230518]: 2025-10-02 13:13:20.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.704 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410786.7021995, 658821a7-5b97-43ad-8fe2-46e5303cf56c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.705 2 INFO nova.compute.manager [-] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] VM Stopped (Lifecycle Event)
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.729 2 DEBUG nova.compute.manager [None req-fb7e082b-0002-4d10-9a78-76278d269e87 - - - - - -] [instance: 658821a7-5b97-43ad-8fe2-46e5303cf56c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.777 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410786.773232, 5ecce258-097c-4a5a-9c44-087e8129ceaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.778 2 INFO nova.compute.manager [-] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] VM Stopped (Lifecycle Event)
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.819 2 DEBUG nova.compute.manager [None req-3ac42c07-6608-477e-9149-f10f6ba9437e - - - - - -] [instance: 5ecce258-097c-4a5a-9c44-087e8129ceaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:13:21 compute-1 nova_compute[230518]: 2025-10-02 13:13:21.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:21 compute-1 ceph-mon[80926]: pgmap v3034: 305 pgs: 305 active+clean; 314 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 99 KiB/s wr, 72 op/s
Oct 02 13:13:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:22.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:23 compute-1 ceph-mon[80926]: pgmap v3035: 305 pgs: 305 active+clean; 299 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 98 KiB/s rd, 95 KiB/s wr, 63 op/s
Oct 02 13:13:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1530611564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:25 compute-1 nova_compute[230518]: 2025-10-02 13:13:25.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.968 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:26 compute-1 ceph-mon[80926]: pgmap v3036: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 15 KiB/s wr, 62 op/s
Oct 02 13:13:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:26 compute-1 nova_compute[230518]: 2025-10-02 13:13:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.081 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1372799750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:27 compute-1 ceph-mon[80926]: pgmap v3037: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 15 KiB/s wr, 49 op/s
Oct 02 13:13:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/57139510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.579 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.848 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4146MB free_disk=20.942596435546875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.850 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.851 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.941 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:13:27 compute-1 nova_compute[230518]: 2025-10-02 13:13:27.961 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3093970101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/57139510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1871557515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:28 compute-1 nova_compute[230518]: 2025-10-02 13:13:28.503 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:28 compute-1 nova_compute[230518]: 2025-10-02 13:13:28.515 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:28 compute-1 nova_compute[230518]: 2025-10-02 13:13:28.544 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:28 compute-1 nova_compute[230518]: 2025-10-02 13:13:28.586 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:13:28 compute-1 nova_compute[230518]: 2025-10-02 13:13:28.587 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2928997528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:29 compute-1 ceph-mon[80926]: pgmap v3038: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Oct 02 13:13:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1871557515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:30.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/532331378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:30 compute-1 podman[309258]: 2025-10-02 13:13:30.835754972 +0000 UTC m=+0.075394028 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:13:30 compute-1 podman[309257]: 2025-10-02 13:13:30.900763494 +0000 UTC m=+0.140637348 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 13:13:30 compute-1 nova_compute[230518]: 2025-10-02 13:13:30.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:31 compute-1 ceph-mon[80926]: pgmap v3039: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 27 op/s
Oct 02 13:13:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:13:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1278279996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:13:31 compute-1 nova_compute[230518]: 2025-10-02 13:13:31.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:32.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/306169587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:33 compute-1 nova_compute[230518]: 2025-10-02 13:13:33.582 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:33 compute-1 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:33 compute-1 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:33 compute-1 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:33 compute-1 nova_compute[230518]: 2025-10-02 13:13:33.583 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:13:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e377 e377: 3 total, 3 up, 3 in
Oct 02 13:13:33 compute-1 ceph-mon[80926]: pgmap v3040: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.4 KiB/s wr, 39 op/s
Oct 02 13:13:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/391005584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:34 compute-1 nova_compute[230518]: 2025-10-02 13:13:34.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:34 compute-1 ceph-mon[80926]: osdmap e377: 3 total, 3 up, 3 in
Oct 02 13:13:35 compute-1 nova_compute[230518]: 2025-10-02 13:13:35.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:36.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:36 compute-1 ceph-mon[80926]: pgmap v3042: 305 pgs: 305 active+clean; 237 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 210 KiB/s wr, 35 op/s
Oct 02 13:13:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:36 compute-1 nova_compute[230518]: 2025-10-02 13:13:36.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:37 compute-1 nova_compute[230518]: 2025-10-02 13:13:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:37 compute-1 nova_compute[230518]: 2025-10-02 13:13:37.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:13:37 compute-1 nova_compute[230518]: 2025-10-02 13:13:37.091 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:13:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e378 e378: 3 total, 3 up, 3 in
Oct 02 13:13:37 compute-1 ceph-mon[80926]: pgmap v3043: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 233 KiB/s wr, 42 op/s
Oct 02 13:13:38 compute-1 nova_compute[230518]: 2025-10-02 13:13:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:38.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:38 compute-1 ceph-mon[80926]: osdmap e378: 3 total, 3 up, 3 in
Oct 02 13:13:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:39 compute-1 nova_compute[230518]: 2025-10-02 13:13:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:13:39 compute-1 ceph-mon[80926]: pgmap v3045: 305 pgs: 305 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 291 KiB/s wr, 76 op/s
Oct 02 13:13:39 compute-1 podman[309301]: 2025-10-02 13:13:39.826855345 +0000 UTC m=+0.067883802 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:13:39 compute-1 podman[309302]: 2025-10-02 13:13:39.839953497 +0000 UTC m=+0.075222284 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:13:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e379 e379: 3 total, 3 up, 3 in
Oct 02 13:13:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:40.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:40 compute-1 nova_compute[230518]: 2025-10-02 13:13:40.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:40 compute-1 ceph-mon[80926]: osdmap e379: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 e380: 3 total, 3 up, 3 in
Oct 02 13:13:41 compute-1 nova_compute[230518]: 2025-10-02 13:13:41.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:42 compute-1 ceph-mon[80926]: pgmap v3047: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 64 op/s
Oct 02 13:13:42 compute-1 ceph-mon[80926]: osdmap e380: 3 total, 3 up, 3 in
Oct 02 13:13:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:42.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:42.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:44 compute-1 ceph-mon[80926]: pgmap v3049: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.1 MiB/s wr, 88 op/s
Oct 02 13:13:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:44.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:45 compute-1 ceph-mon[80926]: pgmap v3050: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.6 MiB/s wr, 144 op/s
Oct 02 13:13:45 compute-1 nova_compute[230518]: 2025-10-02 13:13:45.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:46.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:46 compute-1 nova_compute[230518]: 2025-10-02 13:13:46.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:47 compute-1 ceph-mon[80926]: pgmap v3051: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 120 op/s
Oct 02 13:13:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:48.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4100119741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.745411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828745451, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2443, "num_deletes": 255, "total_data_size": 5694904, "memory_usage": 5781024, "flush_reason": "Manual Compaction"}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828770483, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 3723786, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71279, "largest_seqno": 73717, "table_properties": {"data_size": 3713812, "index_size": 6339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21332, "raw_average_key_size": 20, "raw_value_size": 3693614, "raw_average_value_size": 3614, "num_data_blocks": 274, "num_entries": 1022, "num_filter_entries": 1022, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410630, "oldest_key_time": 1759410630, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 25174 microseconds, and 13544 cpu microseconds.
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.770568) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 3723786 bytes OK
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.770616) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772864) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772891) EVENT_LOG_v1 {"time_micros": 1759410828772882, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.772923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 5683980, prev total WAL file size 5683980, number of live WAL files 2.
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.776111) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(3636KB)], [147(10054KB)]
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828776192, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 14019569, "oldest_snapshot_seqno": -1}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 9540 keys, 12075399 bytes, temperature: kUnknown
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828876994, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 12075399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12014051, "index_size": 36403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 251203, "raw_average_key_size": 26, "raw_value_size": 11847196, "raw_average_value_size": 1241, "num_data_blocks": 1385, "num_entries": 9540, "num_filter_entries": 9540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410828, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.877435) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12075399 bytes
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.878812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.9 rd, 119.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 10070, records dropped: 530 output_compression: NoCompression
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.878833) EVENT_LOG_v1 {"time_micros": 1759410828878822, "job": 94, "event": "compaction_finished", "compaction_time_micros": 100935, "compaction_time_cpu_micros": 55521, "output_level": 6, "num_output_files": 1, "total_output_size": 12075399, "num_input_records": 10070, "num_output_records": 9540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828880087, "job": 94, "event": "table_file_deletion", "file_number": 149}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410828882643, "job": 94, "event": "table_file_deletion", "file_number": 147}
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.775928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:48 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:13:48.882798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:13:49 compute-1 ceph-mon[80926]: pgmap v3052: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 126 op/s
Oct 02 13:13:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:50.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:50 compute-1 nova_compute[230518]: 2025-10-02 13:13:50.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 e381: 3 total, 3 up, 3 in
Oct 02 13:13:51 compute-1 nova_compute[230518]: 2025-10-02 13:13:51.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:51 compute-1 ceph-mon[80926]: pgmap v3053: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct 02 13:13:51 compute-1 ceph-mon[80926]: osdmap e381: 3 total, 3 up, 3 in
Oct 02 13:13:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2946218505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:52.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2028186615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:13:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:54 compute-1 ceph-mon[80926]: pgmap v3055: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 107 op/s
Oct 02 13:13:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:54.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.727 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.728 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.743 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:13:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:54.771 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:13:54 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:54.772 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.857 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.857 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.864 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.864 2 INFO nova.compute.claims [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:13:54 compute-1 nova_compute[230518]: 2025-10-02 13:13:54.989 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:55 compute-1 ceph-mon[80926]: pgmap v3056: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct 02 13:13:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:13:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4156363450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.620 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.626 2 DEBUG nova.compute.provider_tree [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.673 2 DEBUG nova.scheduler.client.report [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.850 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.851 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.898 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.898 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.928 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:13:55 compute-1 nova_compute[230518]: 2025-10-02 13:13:55.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.007 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.111 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.114 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.114 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating image(s)
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.166 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.215 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.253 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.259 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:13:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.362 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.364 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.365 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.366 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.418 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.426 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.493 2 DEBUG nova.policy [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:13:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:56.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4156363450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:13:56 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:13:56.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:13:56 compute-1 nova_compute[230518]: 2025-10-02 13:13:56.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.376 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.950s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.490 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:13:57 compute-1 ceph-mon[80926]: pgmap v3057: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 KiB/s wr, 72 op/s
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.892 2 DEBUG nova.objects.instance [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.911 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.912 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Ensure instance console log exists: /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.913 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.913 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.914 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:13:57 compute-1 nova_compute[230518]: 2025-10-02 13:13:57.925 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Successfully created port: 27cf437c-6f1f-4511-8b2a-3d68dd116906 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:13:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:13:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:13:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:13:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:13:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:13:58.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.838 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Successfully updated port: 27cf437c-6f1f-4511-8b2a-3d68dd116906 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.855 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.935 2 DEBUG nova.compute.manager [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.936 2 DEBUG nova.compute.manager [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing instance network info cache due to event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:13:58 compute-1 nova_compute[230518]: 2025-10-02 13:13:58.936 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.022 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.906 2 DEBUG nova.network.neutron [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.921 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.921 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance network_info: |[{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.922 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.922 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.927 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start _get_guest_xml network_info=[{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:13:59 compute-1 ceph-mon[80926]: pgmap v3058: 305 pgs: 305 active+clean; 319 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.936 2 WARNING nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.948 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.949 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.956 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.957 2 DEBUG nova.virt.libvirt.host [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.959 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.959 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.960 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.961 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.961 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.962 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.962 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.963 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.963 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.964 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.964 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.965 2 DEBUG nova.virt.hardware [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:13:59 compute-1 nova_compute[230518]: 2025-10-02 13:13:59.970 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3992521573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3938846710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-1 nova_compute[230518]: 2025-10-02 13:14:00.471 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:00 compute-1 nova_compute[230518]: 2025-10-02 13:14:00.522 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:00 compute-1 nova_compute[230518]: 2025-10-02 13:14:00.528 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:00.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e382 e382: 3 total, 3 up, 3 in
Oct 02 13:14:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3992521573' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3938846710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:00 compute-1 nova_compute[230518]: 2025-10-02 13:14:00.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3755106332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.027 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.030 2 DEBUG nova.virt.libvirt.vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.031 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.032 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.035 2 DEBUG nova.objects.instance [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.065 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <uuid>3dbb48be-2da9-48eb-814a-94eac9968d0f</uuid>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <name>instance-000000d0</name>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:name>tempest-AttachVolumeNegativeTest-server-1054588037</nova:name>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:13:59</nova:creationTime>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <nova:port uuid="27cf437c-6f1f-4511-8b2a-3d68dd116906">
Oct 02 13:14:01 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <system>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="serial">3dbb48be-2da9-48eb-814a-94eac9968d0f</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="uuid">3dbb48be-2da9-48eb-814a-94eac9968d0f</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </system>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <os>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </os>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <features>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </features>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3dbb48be-2da9-48eb-814a-94eac9968d0f_disk">
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </source>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config">
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </source>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:14:01 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:38:cb:e4"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <target dev="tap27cf437c-6f"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/console.log" append="off"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <video>
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </video>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:14:01 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:14:01 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:14:01 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:14:01 compute-1 nova_compute[230518]: </domain>
Oct 02 13:14:01 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.068 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Preparing to wait for external event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.069 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.069 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.070 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.071 2 DEBUG nova.virt.libvirt.vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:13:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.072 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.073 2 DEBUG nova.network.os_vif_util [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.074 2 DEBUG os_vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.081 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27cf437c-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.082 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27cf437c-6f, col_values=(('external_ids', {'iface-id': '27cf437c-6f1f-4511-8b2a-3d68dd116906', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:cb:e4', 'vm-uuid': '3dbb48be-2da9-48eb-814a-94eac9968d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:01 compute-1 NetworkManager[44960]: <info>  [1759410841.0849] manager: (tap27cf437c-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.094 2 INFO os_vif [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f')
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.153 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.154 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.154 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:38:cb:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.155 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Using config drive
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.201 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.526 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Creating config drive at /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.533 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9yqxl8y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.692 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd9yqxl8y" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.736 2 DEBUG nova.storage.rbd_utils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.741 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.786 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated VIF entry in instance network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.787 2 DEBUG nova.network.neutron [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:01 compute-1 nova_compute[230518]: 2025-10-02 13:14:01.819 2 DEBUG oslo_concurrency.lockutils [req-02cb6fc8-8065-4e57-b71a-8220b2836c09 req-a2c9a283-5378-4f9e-af35-481a31009aed 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:01 compute-1 podman[309635]: 2025-10-02 13:14:01.821116032 +0000 UTC m=+0.058147958 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:14:01 compute-1 podman[309631]: 2025-10-02 13:14:01.852226148 +0000 UTC m=+0.090833103 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.015 2 DEBUG oslo_concurrency.processutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config 3dbb48be-2da9-48eb-814a-94eac9968d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.016 2 INFO nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deleting local config drive /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f/disk.config because it was imported into RBD.
Oct 02 13:14:02 compute-1 ceph-mon[80926]: pgmap v3059: 305 pgs: 305 active+clean; 344 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 167 op/s
Oct 02 13:14:02 compute-1 ceph-mon[80926]: osdmap e382: 3 total, 3 up, 3 in
Oct 02 13:14:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3755106332' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e383 e383: 3 total, 3 up, 3 in
Oct 02 13:14:02 compute-1 kernel: tap27cf437c-6f: entered promiscuous mode
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.0834] manager: (tap27cf437c-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Oct 02 13:14:02 compute-1 ovn_controller[129257]: 2025-10-02T13:14:02Z|00846|binding|INFO|Claiming lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 for this chassis.
Oct 02 13:14:02 compute-1 ovn_controller[129257]: 2025-10-02T13:14:02Z|00847|binding|INFO|27cf437c-6f1f-4511-8b2a-3d68dd116906: Claiming fa:16:3e:38:cb:e4 10.100.0.7
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.101 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:cb:e4 10.100.0.7'], port_security=['fa:16:3e:38:cb:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3dbb48be-2da9-48eb-814a-94eac9968d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5299c659-7804-482f-bd2a-becd049c9d51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=27cf437c-6f1f-4511-8b2a-3d68dd116906) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.103 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 27cf437c-6f1f-4511-8b2a-3d68dd116906 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.107 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct 02 13:14:02 compute-1 systemd-machined[188247]: New machine qemu-96-instance-000000d0.
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.120 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d21a3689-79bc-45ac-b209-d331ea63813f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.122 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc02aa54-d1 in ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.123 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc02aa54-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.124 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[894710bb-911c-473b-b248-341226110ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.125 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76088bb0-8a18-4203-a7e4-2bf2c9d776e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 systemd[1]: Started Virtual Machine qemu-96-instance-000000d0.
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.137 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[854e6019-f1c5-4300-bff3-85c1cbeed4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[11300918-4917-4365-ab2f-73eaf8bda82c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 systemd-udevd[309711]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:02 compute-1 ovn_controller[129257]: 2025-10-02T13:14:02Z|00848|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 ovn-installed in OVS
Oct 02 13:14:02 compute-1 ovn_controller[129257]: 2025-10-02T13:14:02Z|00849|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 up in Southbound
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.1837] device (tap27cf437c-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.1846] device (tap27cf437c-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.208 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4a5086be-af8a-4d15-b120-dea047c42a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.2152] manager: (tapbc02aa54-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/390)
Oct 02 13:14:02 compute-1 systemd-udevd[309717]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.216 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[778f30f5-57fd-4892-a419-55f4d90c5238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.276 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3d4188-6500-4668-9ef5-013835b9c125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.282 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e56404db-317e-4d9f-90b8-b66b246a479a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.3137] device (tapbc02aa54-d0): carrier: link connected
Oct 02 13:14:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.327 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[2a172e14-bbf2-45c8-a682-500ac7c62803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9461db0a-d8e5-4b42-9dc6-f330def8ba89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 23381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309740, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.371 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dea145e6-0f5b-4253-8564-cd8a18146197]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fc0a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870587, 'tstamp': 870587}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309741, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.397 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8d84685e-cf6a-4d1e-b10b-45a235d1a689]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 23381, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309742, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG nova.compute.manager [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.402 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.403 2 DEBUG oslo_concurrency.lockutils [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.403 2 DEBUG nova.compute.manager [req-2c0cd58d-ab25-438f-aae0-767fc4af9a7e req-7e8d0bb8-8473-4e6a-a77d-abb84cf46089 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Processing event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.446 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[611211f6-5510-46dc-bf19-1cc78f115c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.528 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[09666524-4a69-4d5b-a125-707c32eac871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.529 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.530 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:02 compute-1 kernel: tapbc02aa54-d0: entered promiscuous mode
Oct 02 13:14:02 compute-1 NetworkManager[44960]: <info>  [1759410842.5343] manager: (tapbc02aa54-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.539 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:02 compute-1 ovn_controller[129257]: 2025-10-02T13:14:02Z|00850|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:14:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:02 compute-1 nova_compute[230518]: 2025-10-02 13:14:02.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.566 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.567 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[940c54a0-ab8b-4483-8447-1469fb71e2e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.568 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/bc02aa54-d19f-4274-8d92-cbabe7917dd9.pid.haproxy
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:14:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:02.568 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'env', 'PROCESS_TAG=haproxy-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc02aa54-d19f-4274-8d92-cbabe7917dd9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:14:03 compute-1 podman[309816]: 2025-10-02 13:14:02.928272627 +0000 UTC m=+0.027542006 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:14:03 compute-1 podman[309816]: 2025-10-02 13:14:03.14971728 +0000 UTC m=+0.248986579 container create 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:14:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e384 e384: 3 total, 3 up, 3 in
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.235 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2347429, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.235 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Started (Lifecycle Event)
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.237 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.241 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:14:03 compute-1 ceph-mon[80926]: osdmap e383: 3 total, 3 up, 3 in
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.246 2 INFO nova.virt.libvirt.driver [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance spawned successfully.
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.246 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:14:03 compute-1 systemd[1]: Started libpod-conmon-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope.
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.253 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.257 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.267 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.268 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.269 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.269 2 DEBUG nova.virt.libvirt.driver [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.272 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2349257, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Paused (Lifecycle Event)
Oct 02 13:14:03 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:14:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/500922d7592b87d0de794352ea2980ab9aa55fc51cd3a81b2cc7670108d2967e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.301 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.306 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410843.2407436, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.306 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Resumed (Lifecycle Event)
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.322 2 INFO nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 7.21 seconds to spawn the instance on the hypervisor.
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.323 2 DEBUG nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.327 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.333 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:03 compute-1 podman[309816]: 2025-10-02 13:14:03.355041019 +0000 UTC m=+0.454310328 container init 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.364 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:03 compute-1 podman[309816]: 2025-10-02 13:14:03.366298672 +0000 UTC m=+0.465567961 container start 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.388 2 INFO nova.compute.manager [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 8.56 seconds to build instance.
Oct 02 13:14:03 compute-1 nova_compute[230518]: 2025-10-02 13:14:03.404 2 DEBUG oslo_concurrency.lockutils [None req-5599cd29-900f-4e31-8a5d-712a5bbd6497 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:03 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : New worker (309836) forked
Oct 02 13:14:03 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : Loading success.
Oct 02 13:14:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:04 compute-1 ceph-mon[80926]: pgmap v3062: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.3 MiB/s wr, 209 op/s
Oct 02 13:14:04 compute-1 ceph-mon[80926]: osdmap e384: 3 total, 3 up, 3 in
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.510 2 DEBUG nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG oslo_concurrency.lockutils [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.511 2 DEBUG nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:14:04 compute-1 nova_compute[230518]: 2025-10-02 13:14:04.511 2 WARNING nova.compute.manager [req-57edbd7a-1c5b-4dd4-83e2-a636ee265745 req-4efdfc06-7560-49fc-bb06-3387e56de957 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received unexpected event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with vm_state active and task_state None.
Oct 02 13:14:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e385 e385: 3 total, 3 up, 3 in
Oct 02 13:14:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:14:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:14:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-1 NetworkManager[44960]: <info>  [1759410845.4132] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Oct 02 13:14:05 compute-1 NetworkManager[44960]: <info>  [1759410845.4144] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-1 ceph-mon[80926]: pgmap v3064: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 165 op/s
Oct 02 13:14:05 compute-1 ceph-mon[80926]: osdmap e385: 3 total, 3 up, 3 in
Oct 02 13:14:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2667773413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-1 ovn_controller[129257]: 2025-10-02T13:14:05Z|00851|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.655 2 DEBUG nova.compute.manager [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.655 2 DEBUG nova.compute.manager [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing instance network info cache due to event network-changed-27cf437c-6f1f-4511-8b2a-3d68dd116906. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:05 compute-1 nova_compute[230518]: 2025-10-02 13:14:05.656 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Refreshing network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:06 compute-1 nova_compute[230518]: 2025-10-02 13:14:06.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:06 compute-1 nova_compute[230518]: 2025-10-02 13:14:06.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:06.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:06 compute-1 nova_compute[230518]: 2025-10-02 13:14:06.705 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated VIF entry in instance network info cache for port 27cf437c-6f1f-4511-8b2a-3d68dd116906. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:06 compute-1 nova_compute[230518]: 2025-10-02 13:14:06.705 2 DEBUG nova.network.neutron [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:06 compute-1 nova_compute[230518]: 2025-10-02 13:14:06.746 2 DEBUG oslo_concurrency.lockutils [req-38293338-6af7-4e8d-ada0-e1b29ff1c293 req-30e07499-2850-4f1b-b878-8a974114150a 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268729036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:07 compute-1 ceph-mon[80926]: pgmap v3066: 305 pgs: 305 active+clean; 410 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.0 MiB/s wr, 152 op/s
Oct 02 13:14:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:08.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4268729036' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e386 e386: 3 total, 3 up, 3 in
Oct 02 13:14:09 compute-1 ceph-mon[80926]: pgmap v3067: 305 pgs: 305 active+clean; 420 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.4 MiB/s wr, 203 op/s
Oct 02 13:14:09 compute-1 ceph-mon[80926]: osdmap e386: 3 total, 3 up, 3 in
Oct 02 13:14:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:10.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:10.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:10 compute-1 podman[309847]: 2025-10-02 13:14:10.843936239 +0000 UTC m=+0.088422098 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:14:10 compute-1 podman[309846]: 2025-10-02 13:14:10.879863547 +0000 UTC m=+0.124354186 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:14:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 e387: 3 total, 3 up, 3 in
Oct 02 13:14:11 compute-1 nova_compute[230518]: 2025-10-02 13:14:11.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:11 compute-1 nova_compute[230518]: 2025-10-02 13:14:11.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:11 compute-1 ceph-mon[80926]: pgmap v3069: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.1 MiB/s rd, 4.3 MiB/s wr, 219 op/s
Oct 02 13:14:11 compute-1 ceph-mon[80926]: osdmap e387: 3 total, 3 up, 3 in
Oct 02 13:14:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:12.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:14 compute-1 ceph-mon[80926]: pgmap v3071: 305 pgs: 305 active+clean; 456 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.0 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Oct 02 13:14:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:14.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:14.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:15 compute-1 ceph-mon[80926]: pgmap v3072: 305 pgs: 305 active+clean; 481 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Oct 02 13:14:16 compute-1 nova_compute[230518]: 2025-10-02 13:14:16.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:16 compute-1 nova_compute[230518]: 2025-10-02 13:14:16.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:16.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1549993522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:17 compute-1 ovn_controller[129257]: 2025-10-02T13:14:17Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:cb:e4 10.100.0.7
Oct 02 13:14:17 compute-1 ovn_controller[129257]: 2025-10-02T13:14:17Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:cb:e4 10.100.0.7
Oct 02 13:14:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:18.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:18 compute-1 ceph-mon[80926]: pgmap v3073: 305 pgs: 305 active+clean; 495 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.9 MiB/s wr, 149 op/s
Oct 02 13:14:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2092745978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:19 compute-1 sudo[309888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:19 compute-1 sudo[309888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-1 sudo[309888]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-1 sudo[309913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:14:19 compute-1 sudo[309913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-1 sudo[309913]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-1 sudo[309938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:19 compute-1 sudo[309938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-1 sudo[309938]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-1 sudo[309963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:14:19 compute-1 sudo[309963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:19 compute-1 sudo[309963]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:19 compute-1 ceph-mon[80926]: pgmap v3074: 305 pgs: 305 active+clean; 499 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.7 MiB/s wr, 152 op/s
Oct 02 13:14:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:20.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:21 compute-1 nova_compute[230518]: 2025-10-02 13:14:21.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:21 compute-1 nova_compute[230518]: 2025-10-02 13:14:21.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:14:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:14:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:22.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:22 compute-1 ceph-mon[80926]: pgmap v3075: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 116 op/s
Oct 02 13:14:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:24 compute-1 ceph-mon[80926]: pgmap v3076: 305 pgs: 305 active+clean; 510 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 926 KiB/s rd, 3.2 MiB/s wr, 83 op/s
Oct 02 13:14:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2760415651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:24.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.969 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:26 compute-1 nova_compute[230518]: 2025-10-02 13:14:26.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:26 compute-1 nova_compute[230518]: 2025-10-02 13:14:26.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:26.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:26 compute-1 ceph-mon[80926]: pgmap v3077: 305 pgs: 305 active+clean; 512 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 920 KiB/s rd, 3.1 MiB/s wr, 88 op/s
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/577872724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.784 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.887 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:27 compute-1 nova_compute[230518]: 2025-10-02 13:14:27.887 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.057 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.059 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3987MB free_disk=20.888782501220703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.059 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.060 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:28 compute-1 ceph-mon[80926]: pgmap v3078: 305 pgs: 305 active+clean; 517 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 571 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:14:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/577872724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.259 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3dbb48be-2da9-48eb-814a-94eac9968d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.259 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.260 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:14:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:28.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.475 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3626442609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.966 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:28 compute-1 nova_compute[230518]: 2025-10-02 13:14:28.974 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:29 compute-1 nova_compute[230518]: 2025-10-02 13:14:29.007 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:29 compute-1 nova_compute[230518]: 2025-10-02 13:14:29.048 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:14:29 compute-1 nova_compute[230518]: 2025-10-02 13:14:29.049 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3626442609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:30.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:30 compute-1 ceph-mon[80926]: pgmap v3079: 305 pgs: 305 active+clean; 520 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 521 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Oct 02 13:14:31 compute-1 nova_compute[230518]: 2025-10-02 13:14:31.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:31 compute-1 nova_compute[230518]: 2025-10-02 13:14:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:32.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:32.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:32 compute-1 ceph-mon[80926]: pgmap v3080: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 351 KiB/s rd, 936 KiB/s wr, 42 op/s
Oct 02 13:14:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/420625770' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:32 compute-1 podman[310064]: 2025-10-02 13:14:32.859234013 +0000 UTC m=+0.098512935 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 02 13:14:32 compute-1 podman[310063]: 2025-10-02 13:14:32.888378317 +0000 UTC m=+0.124214621 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:14:33 compute-1 ceph-mon[80926]: pgmap v3081: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 610 KiB/s rd, 435 KiB/s wr, 51 op/s
Oct 02 13:14:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/849229435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3782613241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:33 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:33 compute-1 sudo[310106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:14:33 compute-1 sudo[310106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-1 sudo[310106]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:33 compute-1 sudo[310131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:14:33 compute-1 sudo[310131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:14:33 compute-1 sudo[310131]: pam_unix(sudo:session): session closed for user root
Oct 02 13:14:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/725074288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:34.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:34.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:14:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3484772399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/725074288' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:36 compute-1 ceph-mon[80926]: pgmap v3082: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 305 KiB/s wr, 70 op/s
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.045 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.046 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.047 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:36 compute-1 nova_compute[230518]: 2025-10-02 13:14:36.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:36.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.388 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.388 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.389 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:14:37 compute-1 nova_compute[230518]: 2025-10-02 13:14:37.389 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:38 compute-1 ceph-mon[80926]: pgmap v3083: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 269 KiB/s wr, 87 op/s
Oct 02 13:14:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:14:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:38.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:14:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:38.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.131 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [{"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-3dbb48be-2da9-48eb-814a-94eac9968d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.163 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.164 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.740 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.741 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:39 compute-1 ceph-mon[80926]: pgmap v3084: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 215 KiB/s wr, 86 op/s
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.770 2 DEBUG nova.objects.instance [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:39 compute-1 nova_compute[230518]: 2025-10-02 13:14:39.860 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.207 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.207 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.208 2 INFO nova.compute.manager [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attaching volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 to /dev/vdb
Oct 02 13:14:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:40.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.468 2 DEBUG os_brick.utils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.471 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.493 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.493 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d43b14-ca14-46e7-a47c-5e785ef9c325]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.496 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.512 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.512 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5c672bc7-d902-4e93-a8c6-be2c1313e6e8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.515 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.530 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.530 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8eabb950-e642-4727-a73c-33a1cb130af5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.533 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6586c1-b205-41f1-bfca-1146e5829488]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.533 2 DEBUG oslo_concurrency.processutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.589 2 DEBUG oslo_concurrency.processutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "nvme version" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.593 2 DEBUG os_brick.initiator.connectors.lightos [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.594 2 DEBUG os_brick.utils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] <== get_connector_properties: return (125ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:14:40 compute-1 nova_compute[230518]: 2025-10-02 13:14:40.595 2 DEBUG nova.virt.block_device [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating existing volume attachment record: 63eeceae-5e4a-4e89-b426-2b83a872ab02 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:14:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:40.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573521355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.518 2 DEBUG nova.objects.instance [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.552 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to attach volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.557 2 DEBUG nova.virt.libvirt.guest [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:14:41 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct 02 13:14:41 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   </source>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 13:14:41 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   </auth>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:14:41 compute-1 nova_compute[230518]:   <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct 02 13:14:41 compute-1 nova_compute[230518]: </disk>
Oct 02 13:14:41 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:14:41 compute-1 podman[310186]: 2025-10-02 13:14:41.841360692 +0000 UTC m=+0.081471550 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Oct 02 13:14:41 compute-1 podman[310185]: 2025-10-02 13:14:41.850472417 +0000 UTC m=+0.094017753 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.962 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.963 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.964 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:41 compute-1 nova_compute[230518]: 2025-10-02 13:14:41.964 2 DEBUG nova.virt.libvirt.driver [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:38:cb:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:42 compute-1 ceph-mon[80926]: pgmap v3085: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 200 KiB/s wr, 88 op/s
Oct 02 13:14:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3573521355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:42 compute-1 nova_compute[230518]: 2025-10-02 13:14:42.273 2 DEBUG oslo_concurrency.lockutils [None req-fd8e95a6-99be-4993-a136-05d3da82eb97 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:42.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:42.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:44 compute-1 ceph-mon[80926]: pgmap v3086: 305 pgs: 305 active+clean; 523 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 176 KiB/s wr, 83 op/s
Oct 02 13:14:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:44.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:45 compute-1 ceph-mon[80926]: pgmap v3087: 305 pgs: 305 active+clean; 529 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 596 KiB/s wr, 90 op/s
Oct 02 13:14:46 compute-1 nova_compute[230518]: 2025-10-02 13:14:46.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-1 nova_compute[230518]: 2025-10-02 13:14:46.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:46.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:46.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:47 compute-1 ceph-mon[80926]: pgmap v3088: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 69 op/s
Oct 02 13:14:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:48.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:50 compute-1 ceph-mon[80926]: pgmap v3089: 305 pgs: 305 active+clean; 539 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct 02 13:14:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.567 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.567 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.589 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:14:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:50.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.666 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.667 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.677 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.677 2 INFO nova.compute.claims [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:14:50 compute-1 nova_compute[230518]: 2025-10-02 13:14:50.884 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:14:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1591108121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.494 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.501 2 DEBUG nova.compute.provider_tree [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.519 2 DEBUG nova.scheduler.client.report [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.555 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.555 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.670 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.670 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.688 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.710 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.851 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.852 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.853 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating image(s)
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.878 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.905 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.939 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.943 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:51 compute-1 nova_compute[230518]: 2025-10-02 13:14:51.981 2 DEBUG nova.policy [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '37083e5fd56c447cb409b86d6394dd43', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7f5376733aec4630998da8d11db76561', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.023 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.024 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.025 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.025 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "472c3cad2e339908bc4a8cea12fc22c04fcd93b6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.062 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:52 compute-1 ceph-mon[80926]: pgmap v3090: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct 02 13:14:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3137431195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1591108121' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.070 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66f3a080-c034-4465-9d17-ee4b4afe4592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:52.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.414 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 66f3a080-c034-4465-9d17-ee4b4afe4592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.531 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] resizing rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:14:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.687 2 DEBUG nova.objects.instance [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'migration_context' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.708 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.708 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Ensure instance console log exists: /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.709 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.710 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.710 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:52 compute-1 nova_compute[230518]: 2025-10-02 13:14:52.829 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Successfully created port: 6cd7cac1-06dc-4d61-8f7e-254639151526 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.834 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Successfully updated port: 6cd7cac1-06dc-4d61-8f7e-254639151526 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.871 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.871 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.872 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.938 2 DEBUG nova.compute.manager [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.939 2 DEBUG nova.compute.manager [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing instance network info cache due to event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:14:53 compute-1 nova_compute[230518]: 2025-10-02 13:14:53.939 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:14:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:54 compute-1 ceph-mon[80926]: pgmap v3091: 305 pgs: 305 active+clean; 532 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 375 KiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct 02 13:14:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:14:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1480746353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:14:54 compute-1 nova_compute[230518]: 2025-10-02 13:14:54.114 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:14:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:14:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:54.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:14:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.062 2 DEBUG nova.network.neutron [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.088 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.089 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance network_info: |[{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.090 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.091 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.097 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start _get_guest_xml network_info=[{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': '423b8b5f-aab8-418b-8fad-d82c90818bdd'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.105 2 WARNING nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.113 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.114 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.133 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.134 2 DEBUG nova.virt.libvirt.host [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.137 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.138 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T12:08:46Z,direct_url=<?>,disk_format='qcow2',id=423b8b5f-aab8-418b-8fad-d82c90818bdd,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c3a6b94d2b4945a487dafe07f533efd6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T12:08:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.138 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.139 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.140 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.141 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.141 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.142 2 DEBUG nova.virt.hardware [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.146 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e388 e388: 3 total, 3 up, 3 in
Oct 02 13:14:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4017726753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.660 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.689 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:55 compute-1 nova_compute[230518]: 2025-10-02 13:14:55.693 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:14:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2928502249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:56 compute-1 ceph-mon[80926]: pgmap v3092: 305 pgs: 305 active+clean; 549 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 365 KiB/s rd, 2.5 MiB/s wr, 123 op/s
Oct 02 13:14:56 compute-1 ceph-mon[80926]: osdmap e388: 3 total, 3 up, 3 in
Oct 02 13:14:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4017726753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:14:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.420 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.422 2 DEBUG nova.virt.libvirt.vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.424 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.425 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.427 2 DEBUG nova.objects.instance [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'pci_devices' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.446 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <uuid>66f3a080-c034-4465-9d17-ee4b4afe4592</uuid>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <name>instance-000000d2</name>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:name>tempest-AttachVolumeNegativeTest-server-767527456</nova:name>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:14:55</nova:creationTime>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:user uuid="37083e5fd56c447cb409b86d6394dd43">tempest-AttachVolumeNegativeTest-1084646737-project-member</nova:user>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:project uuid="7f5376733aec4630998da8d11db76561">tempest-AttachVolumeNegativeTest-1084646737</nova:project>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="423b8b5f-aab8-418b-8fad-d82c90818bdd"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <nova:port uuid="6cd7cac1-06dc-4d61-8f7e-254639151526">
Oct 02 13:14:56 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <system>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="serial">66f3a080-c034-4465-9d17-ee4b4afe4592</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="uuid">66f3a080-c034-4465-9d17-ee4b4afe4592</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </system>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <os>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </os>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <features>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </features>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/66f3a080-c034-4465-9d17-ee4b4afe4592_disk">
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </source>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config">
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </source>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:14:56 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:b5:31:6a"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <target dev="tap6cd7cac1-06"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/console.log" append="off"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <video>
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </video>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:14:56 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:14:56 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:14:56 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:14:56 compute-1 nova_compute[230518]: </domain>
Oct 02 13:14:56 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.447 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Preparing to wait for external event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.448 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.449 2 DEBUG nova.virt.libvirt.vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.449 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.450 2 DEBUG nova.network.os_vif_util [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.451 2 DEBUG os_vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.455 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6cd7cac1-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.456 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6cd7cac1-06, col_values=(('external_ids', {'iface-id': '6cd7cac1-06dc-4d61-8f7e-254639151526', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:31:6a', 'vm-uuid': '66f3a080-c034-4465-9d17-ee4b4afe4592'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 NetworkManager[44960]: <info>  [1759410896.4600] manager: (tap6cd7cac1-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.468 2 INFO os_vif [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06')
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.551 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.552 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.552 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] No VIF found with MAC fa:16:3e:b5:31:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.553 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Using config drive
Oct 02 13:14:56 compute-1 nova_compute[230518]: 2025-10-02 13:14:56.583 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.260 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Creating config drive at /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.268 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_r9k54y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2928502249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:14:57 compute-1 ceph-mon[80926]: pgmap v3094: 305 pgs: 305 active+clean; 588 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 434 KiB/s rd, 3.2 MiB/s wr, 148 op/s
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.436 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq_r9k54y" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.495 2 DEBUG nova.storage.rbd_utils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] rbd image 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.499 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.748 2 DEBUG oslo_concurrency.processutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config 66f3a080-c034-4465-9d17-ee4b4afe4592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.750 2 INFO nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deleting local config drive /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592/disk.config because it was imported into RBD.
Oct 02 13:14:57 compute-1 kernel: tap6cd7cac1-06: entered promiscuous mode
Oct 02 13:14:57 compute-1 NetworkManager[44960]: <info>  [1759410897.8379] manager: (tap6cd7cac1-06): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Oct 02 13:14:57 compute-1 ovn_controller[129257]: 2025-10-02T13:14:57Z|00852|binding|INFO|Claiming lport 6cd7cac1-06dc-4d61-8f7e-254639151526 for this chassis.
Oct 02 13:14:57 compute-1 ovn_controller[129257]: 2025-10-02T13:14:57Z|00853|binding|INFO|6cd7cac1-06dc-4d61-8f7e-254639151526: Claiming fa:16:3e:b5:31:6a 10.100.0.10
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.853 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:6a 10.100.0.10'], port_security=['fa:16:3e:b5:31:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66f3a080-c034-4465-9d17-ee4b4afe4592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ce33b16-0c5a-4529-af8d-ea1438ef3f9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6cd7cac1-06dc-4d61-8f7e-254639151526) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.855 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd7cac1-06dc-4d61-8f7e-254639151526 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 bound to our chassis
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.857 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct 02 13:14:57 compute-1 ovn_controller[129257]: 2025-10-02T13:14:57Z|00854|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 ovn-installed in OVS
Oct 02 13:14:57 compute-1 ovn_controller[129257]: 2025-10-02T13:14:57Z|00855|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 up in Southbound
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-1 nova_compute[230518]: 2025-10-02 13:14:57.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a0638c-1114-4bc1-8db4-d19b0e661b47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:57 compute-1 systemd-udevd[310550]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:14:57 compute-1 systemd-machined[188247]: New machine qemu-97-instance-000000d2.
Oct 02 13:14:57 compute-1 NetworkManager[44960]: <info>  [1759410897.9002] device (tap6cd7cac1-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:14:57 compute-1 NetworkManager[44960]: <info>  [1759410897.9014] device (tap6cd7cac1-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:14:57 compute-1 systemd[1]: Started Virtual Machine qemu-97-instance-000000d2.
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.914 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4e367f54-eaa4-48a0-be7e-403bd9b595f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.917 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[be897465-d336-4ed9-b751-ccdf5aa9e423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.954 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae0a565-8403-41c4-ac0b-c6287e46090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:57 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.982 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[faebce6c-97b1-4119-8ea5-f6ab7ea5bfe6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 32949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310561, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:57.999 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f2681fae-fcc7-44c7-8edf-161c77d8f246]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870603, 'tstamp': 870603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870607, 'tstamp': 870607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.001 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:58 compute-1 nova_compute[230518]: 2025-10-02 13:14:58.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:58 compute-1 nova_compute[230518]: 2025-10-02 13:14:58.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.030 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:14:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:14:58.031 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:14:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:14:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:58 compute-1 nova_compute[230518]: 2025-10-02 13:14:58.478 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updated VIF entry in instance network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:14:58 compute-1 nova_compute[230518]: 2025-10-02 13:14:58.479 2 DEBUG nova.network.neutron [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:14:58 compute-1 nova_compute[230518]: 2025-10-02 13:14:58.507 2 DEBUG oslo_concurrency.lockutils [req-2f953b71-5868-48cc-a72f-0b3fba2b77c3 req-98dafe4e-da27-4d51-9ad3-aecab6d5e5fd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:14:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:14:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:14:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:14:58.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:14:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e389 e389: 3 total, 3 up, 3 in
Oct 02 13:14:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.156 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.1556535, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.157 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Started (Lifecycle Event)
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.196 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.203 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.1601574, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.203 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Paused (Lifecycle Event)
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.233 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.240 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.272 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.448 2 DEBUG nova.compute.manager [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.448 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.449 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.449 2 DEBUG oslo_concurrency.lockutils [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.450 2 DEBUG nova.compute.manager [req-22f2f1a2-a4a0-415f-badb-5998ebf59a9b req-bcbb7f46-f809-421c-9f53-82378f33cf07 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Processing event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.451 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.454 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759410899.4545662, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.455 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Resumed (Lifecycle Event)
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.458 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.463 2 INFO nova.virt.libvirt.driver [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance spawned successfully.
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.463 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.561 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.568 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.568 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.569 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.569 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.570 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.570 2 DEBUG nova.virt.libvirt.driver [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.576 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.626 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.681 2 INFO nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 7.83 seconds to spawn the instance on the hypervisor.
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.682 2 DEBUG nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.749 2 INFO nova.compute.manager [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 9.11 seconds to build instance.
Oct 02 13:14:59 compute-1 nova_compute[230518]: 2025-10-02 13:14:59.768 2 DEBUG oslo_concurrency.lockutils [None req-b6252fe6-bcf8-4e6c-a6a8-6068f5d03d18 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:14:59 compute-1 ceph-mon[80926]: pgmap v3095: 305 pgs: 305 active+clean; 560 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 2.4 MiB/s wr, 143 op/s
Oct 02 13:14:59 compute-1 ceph-mon[80926]: osdmap e389: 3 total, 3 up, 3 in
Oct 02 13:15:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:00.405 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:00 compute-1 nova_compute[230518]: 2025-10-02 13:15:00.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:00.408 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:15:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:15:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:15:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e390 e390: 3 total, 3 up, 3 in
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.575 2 DEBUG nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.576 2 DEBUG oslo_concurrency.lockutils [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.577 2 DEBUG nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:01 compute-1 nova_compute[230518]: 2025-10-02 13:15:01.577 2 WARNING nova.compute.manager [req-d71df5c9-956f-40c7-b06d-bd3d269f7df1 req-9eddc718-ffe9-4d91-ae42-a0bd19a3dccd 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received unexpected event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with vm_state active and task_state None.
Oct 02 13:15:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:02 compute-1 ceph-mon[80926]: pgmap v3097: 305 pgs: 305 active+clean; 535 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 2.8 MiB/s wr, 159 op/s
Oct 02 13:15:02 compute-1 ceph-mon[80926]: osdmap e390: 3 total, 3 up, 3 in
Oct 02 13:15:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3749576690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:02.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:03 compute-1 ceph-mon[80926]: pgmap v3099: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 420 KiB/s rd, 846 KiB/s wr, 127 op/s
Oct 02 13:15:03 compute-1 nova_compute[230518]: 2025-10-02 13:15:03.757 2 DEBUG nova.compute.manager [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:03 compute-1 nova_compute[230518]: 2025-10-02 13:15:03.757 2 DEBUG nova.compute.manager [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing instance network info cache due to event network-changed-6cd7cac1-06dc-4d61-8f7e-254639151526. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:15:03 compute-1 nova_compute[230518]: 2025-10-02 13:15:03.758 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:15:03 compute-1 nova_compute[230518]: 2025-10-02 13:15:03.758 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:15:03 compute-1 nova_compute[230518]: 2025-10-02 13:15:03.759 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Refreshing network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:15:03 compute-1 podman[310607]: 2025-10-02 13:15:03.830758614 +0000 UTC m=+0.062159191 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:15:03 compute-1 podman[310606]: 2025-10-02 13:15:03.880915877 +0000 UTC m=+0.121535883 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:15:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:15:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:15:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e391 e391: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:15:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:15:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:05.412 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:05 compute-1 nova_compute[230518]: 2025-10-02 13:15:05.669 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updated VIF entry in instance network info cache for port 6cd7cac1-06dc-4d61-8f7e-254639151526. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:15:05 compute-1 nova_compute[230518]: 2025-10-02 13:15:05.669 2 DEBUG nova.network.neutron [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [{"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:05 compute-1 nova_compute[230518]: 2025-10-02 13:15:05.702 2 DEBUG oslo_concurrency.lockutils [req-8839ee43-cd1d-4a33-9699-7d723dccef45 req-5004d0a1-9e50-4b1d-8d93-c84713744a19 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-66f3a080-c034-4465-9d17-ee4b4afe4592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:15:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e392 e392: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-1 ceph-mon[80926]: pgmap v3100: 305 pgs: 305 active+clean; 509 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 45 KiB/s wr, 105 op/s
Oct 02 13:15:05 compute-1 ceph-mon[80926]: osdmap e391: 3 total, 3 up, 3 in
Oct 02 13:15:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3838565508' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:06 compute-1 nova_compute[230518]: 2025-10-02 13:15:06.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e393 e393: 3 total, 3 up, 3 in
Oct 02 13:15:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:06.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:06 compute-1 nova_compute[230518]: 2025-10-02 13:15:06.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:15:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:06 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:15:06 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-1 ceph-mon[80926]: osdmap e392: 3 total, 3 up, 3 in
Oct 02 13:15:07 compute-1 ceph-mon[80926]: osdmap e393: 3 total, 3 up, 3 in
Oct 02 13:15:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3166744725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:15:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:15:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:08 compute-1 ceph-mon[80926]: pgmap v3104: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 460 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.1 KiB/s wr, 196 op/s
Oct 02 13:15:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/271385777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:08.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2214948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:10 compute-1 ceph-mon[80926]: pgmap v3105: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 356 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.2 KiB/s wr, 218 op/s
Oct 02 13:15:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:10.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:10.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:10 compute-1 ovn_controller[129257]: 2025-10-02T13:15:10Z|00856|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:15:10 compute-1 nova_compute[230518]: 2025-10-02 13:15:10.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e394 e394: 3 total, 3 up, 3 in
Oct 02 13:15:11 compute-1 nova_compute[230518]: 2025-10-02 13:15:11.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:15:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:15:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:11 compute-1 nova_compute[230518]: 2025-10-02 13:15:11.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:12 compute-1 ceph-mon[80926]: pgmap v3106: 305 pgs: 10 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 293 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.8 KiB/s wr, 210 op/s
Oct 02 13:15:12 compute-1 ceph-mon[80926]: osdmap e394: 3 total, 3 up, 3 in
Oct 02 13:15:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:15:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2540394779' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:15:12 compute-1 ovn_controller[129257]: 2025-10-02T13:15:12Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:31:6a 10.100.0.10
Oct 02 13:15:12 compute-1 ovn_controller[129257]: 2025-10-02T13:15:12Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:31:6a 10.100.0.10
Oct 02 13:15:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:12.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:12 compute-1 podman[310651]: 2025-10-02 13:15:12.816777747 +0000 UTC m=+0.059289070 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3)
Oct 02 13:15:12 compute-1 podman[310650]: 2025-10-02 13:15:12.819400379 +0000 UTC m=+0.065884078 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001)
Oct 02 13:15:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:14 compute-1 ceph-mon[80926]: pgmap v3108: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 604 KiB/s rd, 7.2 KiB/s wr, 177 op/s
Oct 02 13:15:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:14.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:14 compute-1 ovn_controller[129257]: 2025-10-02T13:15:14Z|00857|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:15:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:14.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:14 compute-1 nova_compute[230518]: 2025-10-02 13:15:14.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:14 compute-1 ovn_controller[129257]: 2025-10-02T13:15:14Z|00858|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:15:15 compute-1 nova_compute[230518]: 2025-10-02 13:15:15.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:16 compute-1 nova_compute[230518]: 2025-10-02 13:15:16.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 e395: 3 total, 3 up, 3 in
Oct 02 13:15:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:16.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:16 compute-1 nova_compute[230518]: 2025-10-02 13:15:16.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:16 compute-1 ceph-mon[80926]: pgmap v3109: 305 pgs: 305 active+clean; 254 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 679 KiB/s wr, 128 op/s
Oct 02 13:15:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:16.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:17 compute-1 ceph-mon[80926]: osdmap e395: 3 total, 3 up, 3 in
Oct 02 13:15:17 compute-1 ceph-mon[80926]: pgmap v3111: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 552 KiB/s rd, 3.2 MiB/s wr, 177 op/s
Oct 02 13:15:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:18.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:18.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.868 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.869 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.869 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.870 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.870 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.871 2 INFO nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Terminating instance
Oct 02 13:15:18 compute-1 nova_compute[230518]: 2025-10-02 13:15:18.872 2 DEBUG nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:15:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:19 compute-1 ovn_controller[129257]: 2025-10-02T13:15:19Z|00859|binding|INFO|Releasing lport 05da4d4e-44a6-4aa1-b470-d9ad03ff2e45 from this chassis (sb_readonly=0)
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 kernel: tap6cd7cac1-06 (unregistering): left promiscuous mode
Oct 02 13:15:19 compute-1 NetworkManager[44960]: <info>  [1759410919.6168] device (tap6cd7cac1-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:15:19 compute-1 ovn_controller[129257]: 2025-10-02T13:15:19Z|00860|binding|INFO|Releasing lport 6cd7cac1-06dc-4d61-8f7e-254639151526 from this chassis (sb_readonly=0)
Oct 02 13:15:19 compute-1 ovn_controller[129257]: 2025-10-02T13:15:19Z|00861|binding|INFO|Setting lport 6cd7cac1-06dc-4d61-8f7e-254639151526 down in Southbound
Oct 02 13:15:19 compute-1 ovn_controller[129257]: 2025-10-02T13:15:19Z|00862|binding|INFO|Removing iface tap6cd7cac1-06 ovn-installed in OVS
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.646 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:31:6a 10.100.0.10'], port_security=['fa:16:3e:b5:31:6a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '66f3a080-c034-4465-9d17-ee4b4afe4592', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ce33b16-0c5a-4529-af8d-ea1438ef3f9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=6cd7cac1-06dc-4d61-8f7e-254639151526) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.648 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 6cd7cac1-06dc-4d61-8f7e-254639151526 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.649 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc02aa54-d19f-4274-8d92-cbabe7917dd9
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.670 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c10843f7-b31a-4cd1-8ebe-fcb6fca4e507]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Oct 02 13:15:19 compute-1 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000d2.scope: Consumed 14.112s CPU time.
Oct 02 13:15:19 compute-1 systemd-machined[188247]: Machine qemu-97-instance-000000d2 terminated.
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.699 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[903556f9-9138-4653-ac05-071acf607ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.702 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[153d303d-c074-41e8-8ff5-4586ae76149d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.732 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d37793-7bf7-49c7-8a1e-2c3e21a632f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.748 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[46f4e613-ed6b-465f-90f6-78faf06145d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc02aa54-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fc:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 256], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870587, 'reachable_time': 32949, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310700, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.765 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee2976c-a201-49e4-9fa8-55f2aaaa8e1f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870603, 'tstamp': 870603}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310701, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbc02aa54-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870607, 'tstamp': 870607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310701, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.767 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc02aa54-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.774 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc02aa54-d0, col_values=(('external_ids', {'iface-id': '05da4d4e-44a6-4aa1-b470-d9ad03ff2e45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:19.775 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.904 2 INFO nova.virt.libvirt.driver [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Instance destroyed successfully.
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.905 2 DEBUG nova.objects.instance [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid 66f3a080-c034-4465-9d17-ee4b4afe4592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.917 2 DEBUG nova.virt.libvirt.vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-767527456',display_name='tempest-AttachVolumeNegativeTest-server-767527456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-767527456',id=210,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBF5VaY77OAaLVdyQYKCuQXqh8pYOP/3dwBveO9XzioRmwadp/WDR+EGEZOgEjr24GEhY0irDpHujXdAm06z7JsMiv1FBIuW6/qTrYjhQtIvMDaDJJ7Ig/IjNGFQrGpEpKA==',key_name='tempest-keypair-1210391514',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-lrtxjlpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=66f3a080-c034-4465-9d17-ee4b4afe4592,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.917 2 DEBUG nova.network.os_vif_util [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "6cd7cac1-06dc-4d61-8f7e-254639151526", "address": "fa:16:3e:b5:31:6a", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6cd7cac1-06", "ovs_interfaceid": "6cd7cac1-06dc-4d61-8f7e-254639151526", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.918 2 DEBUG nova.network.os_vif_util [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.918 2 DEBUG os_vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6cd7cac1-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.926 2 INFO os_vif [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:31:6a,bridge_name='br-int',has_traffic_filtering=True,id=6cd7cac1-06dc-4d61-8f7e-254639151526,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6cd7cac1-06')
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.950 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.951 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.951 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG oslo_concurrency.lockutils [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:19 compute-1 nova_compute[230518]: 2025-10-02 13:15:19.952 2 DEBUG nova.compute.manager [req-f144d928-0c91-4f8b-9152-3509f7b11417 req-c6d2abb6-ceb6-48d5-b9f4-557d859849c7 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-unplugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:15:20 compute-1 ceph-mon[80926]: pgmap v3112: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 543 KiB/s rd, 3.2 MiB/s wr, 160 op/s
Oct 02 13:15:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:15:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:20.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:15:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:21 compute-1 nova_compute[230518]: 2025-10-02 13:15:21.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:21 compute-1 ceph-mon[80926]: pgmap v3113: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 428 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.098 2 DEBUG nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.098 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG oslo_concurrency.lockutils [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.099 2 DEBUG nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] No waiting events found dispatching network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:22 compute-1 nova_compute[230518]: 2025-10-02 13:15:22.099 2 WARNING nova.compute.manager [req-e487731c-140e-472e-a2ec-b081db4220ef req-909b441e-9e76-4ae7-aae6-28306a8c4f05 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received unexpected event network-vif-plugged-6cd7cac1-06dc-4d61-8f7e-254639151526 for instance with vm_state active and task_state deleting.
Oct 02 13:15:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:23 compute-1 ceph-mon[80926]: pgmap v3114: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 405 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Oct 02 13:15:23 compute-1 nova_compute[230518]: 2025-10-02 13:15:23.985 2 INFO nova.virt.libvirt.driver [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deleting instance files /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592_del
Oct 02 13:15:23 compute-1 nova_compute[230518]: 2025-10-02 13:15:23.986 2 INFO nova.virt.libvirt.driver [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deletion of /var/lib/nova/instances/66f3a080-c034-4465-9d17-ee4b4afe4592_del complete
Oct 02 13:15:24 compute-1 nova_compute[230518]: 2025-10-02 13:15:24.049 2 INFO nova.compute.manager [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 5.18 seconds to destroy the instance on the hypervisor.
Oct 02 13:15:24 compute-1 nova_compute[230518]: 2025-10-02 13:15:24.049 2 DEBUG oslo.service.loopingcall [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:15:24 compute-1 nova_compute[230518]: 2025-10-02 13:15:24.050 2 DEBUG nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:15:24 compute-1 nova_compute[230518]: 2025-10-02 13:15:24.050 2 DEBUG nova.network.neutron [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:15:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:24.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:24 compute-1 nova_compute[230518]: 2025-10-02 13:15:24.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.075 2 DEBUG nova.network.neutron [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.094 2 INFO nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Took 1.04 seconds to deallocate network for instance.
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.154 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.155 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.182 2 DEBUG nova.compute.manager [req-caf76fde-1c8e-4b74-9313-d0ceffa8570a req-309eaf82-7583-4814-be59-71045bef6c9f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Received event network-vif-deleted-6cd7cac1-06dc-4d61-8f7e-254639151526 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.228 2 DEBUG oslo_concurrency.processutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938968884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.689 2 DEBUG oslo_concurrency.processutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.694 2 DEBUG nova.compute.provider_tree [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.708 2 DEBUG nova.scheduler.client.report [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.729 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.753 2 INFO nova.scheduler.client.report [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance 66f3a080-c034-4465-9d17-ee4b4afe4592
Oct 02 13:15:25 compute-1 nova_compute[230518]: 2025-10-02 13:15:25.824 2 DEBUG oslo_concurrency.lockutils [None req-edda6af7-ad5e-4261-b9ab-873d45d4e9a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "66f3a080-c034-4465-9d17-ee4b4afe4592" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.970 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:26 compute-1 nova_compute[230518]: 2025-10-02 13:15:26.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:26 compute-1 ceph-mon[80926]: pgmap v3115: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 381 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Oct 02 13:15:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/938968884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:26.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.460 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.460 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.473 2 INFO nova.compute.manager [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Detaching volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.650 2 INFO nova.virt.block_device [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Attempting to driver detach volume 205d78a9-2344-4c93-8e1b-54d92d0b0fa2 from mountpoint /dev/vdb
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.661 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Attempting to detach device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.662 2 DEBUG nova.virt.libvirt.guest [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   </source>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]: </disk>
Oct 02 13:15:27 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:15:27 compute-1 ceph-mon[80926]: pgmap v3116: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 25 KiB/s wr, 33 op/s
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.859 2 INFO nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the persistent domain config.
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.860 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.860 2 DEBUG nova.virt.libvirt.guest [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-205d78a9-2344-4c93-8e1b-54d92d0b0fa2">
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   </source>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <serial>205d78a9-2344-4c93-8e1b-54d92d0b0fa2</serial>
Oct 02 13:15:27 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:15:27 compute-1 nova_compute[230518]: </disk>
Oct 02 13:15:27 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.967 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759410927.967262, 3dbb48be-2da9-48eb-814a-94eac9968d0f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.968 2 DEBUG nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3dbb48be-2da9-48eb-814a-94eac9968d0f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:15:27 compute-1 nova_compute[230518]: 2025-10-02 13:15:27.970 2 INFO nova.virt.libvirt.driver [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully detached device vdb from instance 3dbb48be-2da9-48eb-814a-94eac9968d0f from the live domain config.
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.160 2 DEBUG nova.objects.instance [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'flavor' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.195 2 DEBUG oslo_concurrency.lockutils [None req-169f1fbb-3438-47b4-b7f6-a10f5c0cabd6 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:28 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2661508965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.908 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.909 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.909 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.910 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.910 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.911 2 INFO nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Terminating instance
Oct 02 13:15:28 compute-1 nova_compute[230518]: 2025-10-02 13:15:28.913 2 DEBUG nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:29 compute-1 kernel: tap27cf437c-6f (unregistering): left promiscuous mode
Oct 02 13:15:29 compute-1 NetworkManager[44960]: <info>  [1759410929.2100] device (tap27cf437c-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:29 compute-1 ovn_controller[129257]: 2025-10-02T13:15:29Z|00863|binding|INFO|Releasing lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 from this chassis (sb_readonly=0)
Oct 02 13:15:29 compute-1 ovn_controller[129257]: 2025-10-02T13:15:29Z|00864|binding|INFO|Setting lport 27cf437c-6f1f-4511-8b2a-3d68dd116906 down in Southbound
Oct 02 13:15:29 compute-1 ovn_controller[129257]: 2025-10-02T13:15:29Z|00865|binding|INFO|Removing iface tap27cf437c-6f ovn-installed in OVS
Oct 02 13:15:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.227 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:cb:e4 10.100.0.7'], port_security=['fa:16:3e:38:cb:e4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3dbb48be-2da9-48eb-814a-94eac9968d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7f5376733aec4630998da8d11db76561', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5299c659-7804-482f-bd2a-becd049c9d51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9548860-2222-48ea-9270-42ff9a0246f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=27cf437c-6f1f-4511-8b2a-3d68dd116906) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:15:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.229 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 27cf437c-6f1f-4511-8b2a-3d68dd116906 in datapath bc02aa54-d19f-4274-8d92-cbabe7917dd9 unbound from our chassis
Oct 02 13:15:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.230 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc02aa54-d19f-4274-8d92-cbabe7917dd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:15:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.232 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[32ca9d42-3c55-478c-a7fa-c57c2580a7fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:29.232 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 namespace which is not needed anymore
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:29 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d0.scope: Deactivated successfully.
Oct 02 13:15:29 compute-1 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000d0.scope: Consumed 16.768s CPU time.
Oct 02 13:15:29 compute-1 systemd-machined[188247]: Machine qemu-96-instance-000000d0 terminated.
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.351 2 INFO nova.virt.libvirt.driver [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Instance destroyed successfully.
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.352 2 DEBUG nova.objects.instance [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lazy-loading 'resources' on Instance uuid 3dbb48be-2da9-48eb-814a-94eac9968d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.370 2 DEBUG nova.virt.libvirt.vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:13:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1054588037',display_name='tempest-AttachVolumeNegativeTest-server-1054588037',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1054588037',id=208,image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCQBraTImbOHfTH+zcfBRFJyePeIqcOFlGsPR6ZMRcMYMVZGuN9g/lIgLTbs1qdUo4qDQMWoBvweu9Ok7nksgXVqglFfrHDG04CgWRfT+7Tk6OyYqf+SJMw2cYyCygZmlA==',key_name='tempest-keypair-212526163',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:14:03Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7f5376733aec4630998da8d11db76561',ramdisk_id='',reservation_id='r-tfow0fvq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='423b8b5f-aab8-418b-8fad-d82c90818bdd',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1084646737',owner_user_name='tempest-AttachVolumeNegativeTest-1084646737-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:14:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='37083e5fd56c447cb409b86d6394dd43',uuid=3dbb48be-2da9-48eb-814a-94eac9968d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.370 2 DEBUG nova.network.os_vif_util [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converting VIF {"id": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "address": "fa:16:3e:38:cb:e4", "network": {"id": "bc02aa54-d19f-4274-8d92-cbabe7917dd9", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-766144522-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7f5376733aec4630998da8d11db76561", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27cf437c-6f", "ovs_interfaceid": "27cf437c-6f1f-4511-8b2a-3d68dd116906", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.371 2 DEBUG nova.network.os_vif_util [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.372 2 DEBUG os_vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27cf437c-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.380 2 INFO os_vif [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=27cf437c-6f1f-4511-8b2a-3d68dd116906,network=Network(bc02aa54-d19f-4274-8d92-cbabe7917dd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27cf437c-6f')
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.417 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.418 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.419 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.419 2 DEBUG oslo_concurrency.lockutils [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.420 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.420 2 DEBUG nova.compute.manager [req-ff2c5bbe-a516-46db-9580-3aa3e5ee116e req-16d2f120-5737-4dbc-b477-1ddacef4e91d 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-unplugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.606 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.686 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : haproxy version is 2.8.14-c23fe91
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [NOTICE]   (309834) : path to executable is /usr/sbin/haproxy
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : Exiting Master process...
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : Exiting Master process...
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [ALERT]    (309834) : Current worker (309836) exited with code 143 (Terminated)
Oct 02 13:15:29 compute-1 neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9[309830]: [WARNING]  (309834) : All workers exited. Exiting... (0)
Oct 02 13:15:29 compute-1 systemd[1]: libpod-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope: Deactivated successfully.
Oct 02 13:15:29 compute-1 podman[310805]: 2025-10-02 13:15:29.772815535 +0000 UTC m=+0.416044940 container died 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.878 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.879 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4184MB free_disk=20.942684173583984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.880 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.974 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 3dbb48be-2da9-48eb-814a-94eac9968d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:15:29 compute-1 nova_compute[230518]: 2025-10-02 13:15:29.975 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:15:30 compute-1 nova_compute[230518]: 2025-10-02 13:15:30.005 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:30 compute-1 ceph-mon[80926]: pgmap v3117: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct 02 13:15:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3669883439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1886680935' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:15:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:30.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:15:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-500922d7592b87d0de794352ea2980ab9aa55fc51cd3a81b2cc7670108d2967e-merged.mount: Deactivated successfully.
Oct 02 13:15:30 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a-userdata-shm.mount: Deactivated successfully.
Oct 02 13:15:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1328104633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:30 compute-1 nova_compute[230518]: 2025-10-02 13:15:30.944 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.939s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:30 compute-1 nova_compute[230518]: 2025-10-02 13:15:30.950 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.051 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:31 compute-1 podman[310805]: 2025-10-02 13:15:31.186108476 +0000 UTC m=+1.829337871 container cleanup 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:15:31 compute-1 systemd[1]: libpod-conmon-3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a.scope: Deactivated successfully.
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.503 2 DEBUG nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG oslo_concurrency.lockutils [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.504 2 DEBUG nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] No waiting events found dispatching network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:15:31 compute-1 nova_compute[230518]: 2025-10-02 13:15:31.505 2 WARNING nova.compute.manager [req-bd01780a-df62-4201-be31-df73e895dc60 req-3dc06d01-0a8b-417c-b14e-5582d1e63c58 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received unexpected event network-vif-plugged-27cf437c-6f1f-4511-8b2a-3d68dd116906 for instance with vm_state active and task_state deleting.
Oct 02 13:15:31 compute-1 ceph-mon[80926]: pgmap v3118: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 22 KiB/s wr, 35 op/s
Oct 02 13:15:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1328104633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:32 compute-1 podman[310885]: 2025-10-02 13:15:32.029236702 +0000 UTC m=+0.808099289 container remove 3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.038 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ef5b6d-3b0d-4f75-8df1-516c6ed18e12]: (4, ('Thu Oct  2 01:15:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a)\n3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a\nThu Oct  2 01:15:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 (3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a)\n3fc68881a6f51d045f88a76e7e561be7dd4c26c20a27f394eb0289c7aed0b64a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.041 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f7025817-89e3-48aa-8bdb-3e4a5258442b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.041 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc02aa54-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:15:32 compute-1 nova_compute[230518]: 2025-10-02 13:15:32.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:32 compute-1 kernel: tapbc02aa54-d0: left promiscuous mode
Oct 02 13:15:32 compute-1 nova_compute[230518]: 2025-10-02 13:15:32.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.050 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f48abee2-55b2-4299-8f50-22e4deaa705c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 nova_compute[230518]: 2025-10-02 13:15:32.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.083 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bd703fdf-0399-45bf-8029-9504795a2fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.085 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[287a0621-3ae6-4825-a1ee-de81d93d584f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.102 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6f806f-dfd7-4b4e-a714-99cb02733500]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870575, 'reachable_time': 35123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310901, 'error': None, 'target': 'ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 systemd[1]: run-netns-ovnmeta\x2dbc02aa54\x2dd19f\x2d4274\x2d8d92\x2dcbabe7917dd9.mount: Deactivated successfully.
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.108 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc02aa54-d19f-4274-8d92-cbabe7917dd9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:15:32 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:15:32.108 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[db4ba028-cab4-4b39-9738-e2cbb80f0075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:15:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2777553421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:34 compute-1 sudo[310903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:34 compute-1 sudo[310903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-1 sudo[310903]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-1 sudo[310941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:15:34 compute-1 sudo[310941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-1 sudo[310941]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-1 podman[310928]: 2025-10-02 13:15:34.161317547 +0000 UTC m=+0.103414835 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:15:34 compute-1 podman[310927]: 2025-10-02 13:15:34.165776727 +0000 UTC m=+0.109236418 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, tcib_managed=true)
Oct 02 13:15:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:34 compute-1 sudo[310995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:34 compute-1 sudo[310995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-1 sudo[310995]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.260 2 INFO nova.virt.libvirt.driver [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deleting instance files /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f_del
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.262 2 INFO nova.virt.libvirt.driver [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deletion of /var/lib/nova/instances/3dbb48be-2da9-48eb-814a-94eac9968d0f_del complete
Oct 02 13:15:34 compute-1 ceph-mon[80926]: pgmap v3119: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 22 KiB/s wr, 34 op/s
Oct 02 13:15:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3209803078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:34 compute-1 sudo[311021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:15:34 compute-1 sudo[311021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.311 2 INFO nova.compute.manager [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 5.40 seconds to destroy the instance on the hypervisor.
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.312 2 DEBUG oslo.service.loopingcall [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.313 2 DEBUG nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.314 2 DEBUG nova.network.neutron [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:34.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:34 compute-1 sudo[311021]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.903 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410919.9022255, 66f3a080-c034-4465-9d17-ee4b4afe4592 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.904 2 INFO nova.compute.manager [-] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] VM Stopped (Lifecycle Event)
Oct 02 13:15:34 compute-1 nova_compute[230518]: 2025-10-02 13:15:34.923 2 DEBUG nova.compute.manager [None req-e195015c-bd40-4802-915e-2dd744f7841a - - - - - -] [instance: 66f3a080-c034-4465-9d17-ee4b4afe4592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:15:35 compute-1 ceph-mon[80926]: pgmap v3120: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 10 KiB/s wr, 40 op/s
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:15:35 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:15:35 compute-1 nova_compute[230518]: 2025-10-02 13:15:35.902 2 DEBUG nova.network.neutron [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:15:35 compute-1 nova_compute[230518]: 2025-10-02 13:15:35.920 2 INFO nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Took 1.61 seconds to deallocate network for instance.
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.020 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.020 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.053 2 DEBUG nova.compute.manager [req-878b5ccd-60bf-4955-9280-f68e28e96f24 req-7a6eff81-b7a3-4c90-957e-3c635eab71e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Received event network-vif-deleted-27cf437c-6f1f-4511-8b2a-3d68dd116906 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.075 2 DEBUG oslo_concurrency.processutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.107 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.112 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.113 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:15:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:15:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/188827119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.524 2 DEBUG oslo_concurrency.processutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.532 2 DEBUG nova.compute.provider_tree [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.558 2 DEBUG nova.scheduler.client.report [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.581 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/188827119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.618 2 INFO nova.scheduler.client.report [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Deleted allocations for instance 3dbb48be-2da9-48eb-814a-94eac9968d0f
Oct 02 13:15:36 compute-1 nova_compute[230518]: 2025-10-02 13:15:36.688 2 DEBUG oslo_concurrency.lockutils [None req-d6213eef-7dc3-4eb4-b604-32d6d9cb92a2 37083e5fd56c447cb409b86d6394dd43 7f5376733aec4630998da8d11db76561 - - default default] Lock "3dbb48be-2da9-48eb-814a-94eac9968d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:15:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:15:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:15:37 compute-1 nova_compute[230518]: 2025-10-02 13:15:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:37 compute-1 ceph-mon[80926]: pgmap v3121: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.6 KiB/s wr, 41 op/s
Oct 02 13:15:38 compute-1 nova_compute[230518]: 2025-10-02 13:15:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:38 compute-1 nova_compute[230518]: 2025-10-02 13:15:38.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:15:38 compute-1 nova_compute[230518]: 2025-10-02 13:15:38.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:15:38 compute-1 nova_compute[230518]: 2025-10-02 13:15:38.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:15:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:38.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:39 compute-1 nova_compute[230518]: 2025-10-02 13:15:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:39 compute-1 nova_compute[230518]: 2025-10-02 13:15:39.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:15:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s
                                           Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s
                                           Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:15:39 compute-1 ceph-mon[80926]: pgmap v3122: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.3 KiB/s wr, 28 op/s
Oct 02 13:15:40 compute-1 nova_compute[230518]: 2025-10-02 13:15:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:15:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:40.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:41 compute-1 nova_compute[230518]: 2025-10-02 13:15:41.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:41 compute-1 ceph-mon[80926]: pgmap v3123: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:15:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:15:41 compute-1 sudo[311099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:15:41 compute-1 sudo[311099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:41 compute-1 sudo[311099]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:41 compute-1 sudo[311124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:15:41 compute-1 sudo[311124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:15:41 compute-1 sudo[311124]: pam_unix(sudo:session): session closed for user root
Oct 02 13:15:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:43 compute-1 podman[311149]: 2025-10-02 13:15:43.810520384 +0000 UTC m=+0.061951294 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 02 13:15:43 compute-1 podman[311150]: 2025-10-02 13:15:43.817229765 +0000 UTC m=+0.062091059 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true)
Oct 02 13:15:43 compute-1 ceph-mon[80926]: pgmap v3124: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:44 compute-1 nova_compute[230518]: 2025-10-02 13:15:44.349 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759410929.3479283, 3dbb48be-2da9-48eb-814a-94eac9968d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:15:44 compute-1 nova_compute[230518]: 2025-10-02 13:15:44.350 2 INFO nova.compute.manager [-] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] VM Stopped (Lifecycle Event)
Oct 02 13:15:44 compute-1 nova_compute[230518]: 2025-10-02 13:15:44.371 2 DEBUG nova.compute.manager [None req-f82a4546-1917-4784-b924-b8497c45c3f7 - - - - - -] [instance: 3dbb48be-2da9-48eb-814a-94eac9968d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:15:44 compute-1 nova_compute[230518]: 2025-10-02 13:15:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:44.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:46 compute-1 nova_compute[230518]: 2025-10-02 13:15:46.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:46 compute-1 ceph-mon[80926]: pgmap v3125: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 22 op/s
Oct 02 13:15:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/112591635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:15:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:46.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:47 compute-1 ceph-mon[80926]: pgmap v3126: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:15:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:48.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:49 compute-1 nova_compute[230518]: 2025-10-02 13:15:49.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:49 compute-1 ceph-mon[80926]: pgmap v3127: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 255 B/s wr, 1 op/s
Oct 02 13:15:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:50.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:50.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:51 compute-1 nova_compute[230518]: 2025-10-02 13:15:51.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:51 compute-1 ceph-mon[80926]: pgmap v3128: 305 pgs: 305 active+clean; 149 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 23 op/s
Oct 02 13:15:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1942650274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2329846800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:15:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:52.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:53 compute-1 ceph-mon[80926]: pgmap v3129: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:54 compute-1 nova_compute[230518]: 2025-10-02 13:15:54.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:54.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:56 compute-1 ceph-mon[80926]: pgmap v3130: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:15:56 compute-1 nova_compute[230518]: 2025-10-02 13:15:56.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:15:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:15:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:58 compute-1 ceph-mon[80926]: pgmap v3131: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:15:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:15:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:15:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:15:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:15:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:15:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:15:59 compute-1 nova_compute[230518]: 2025-10-02 13:15:59.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:15:59 compute-1 ceph-mon[80926]: pgmap v3132: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct 02 13:16:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:01 compute-1 nova_compute[230518]: 2025-10-02 13:16:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:02 compute-1 ceph-mon[80926]: pgmap v3133: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:02.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:04 compute-1 ceph-mon[80926]: pgmap v3134: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 768 KiB/s wr, 76 op/s
Oct 02 13:16:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:04 compute-1 nova_compute[230518]: 2025-10-02 13:16:04.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:04 compute-1 podman[311189]: 2025-10-02 13:16:04.814106331 +0000 UTC m=+0.056003667 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:16:04 compute-1 podman[311188]: 2025-10-02 13:16:04.842189093 +0000 UTC m=+0.091019647 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:16:06 compute-1 nova_compute[230518]: 2025-10-02 13:16:06.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:06 compute-1 ceph-mon[80926]: pgmap v3135: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:16:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1591145988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:16:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:06.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:06.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:07 compute-1 ceph-mon[80926]: pgmap v3136: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 105 op/s
Oct 02 13:16:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:08.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:09 compute-1 nova_compute[230518]: 2025-10-02 13:16:09.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:09 compute-1 ceph-mon[80926]: pgmap v3137: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 1.1 MiB/s wr, 33 op/s
Oct 02 13:16:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:16:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 75K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1590 writes, 7740 keys, 1590 commit groups, 1.0 writes per commit group, ingest: 16.10 MB, 0.03 MB/s
                                           Interval WAL: 1590 writes, 1590 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.3      1.66              0.28        47    0.035       0      0       0.0       0.0
                                             L6      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.1    116.4     99.2      4.67              1.42        46    0.102    327K    24K       0.0       0.0
                                            Sum      1/0   11.52 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     85.9     87.7      6.33              1.70        93    0.068    327K    24K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.4     85.0     86.8      0.73              0.24        10    0.073     48K   2580       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    116.4     99.2      4.67              1.42        46    0.102    327K    24K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.4      1.66              0.28        46    0.036       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.090, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.54 GB write, 0.10 MB/s write, 0.53 GB read, 0.10 MB/s read, 6.3 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 59.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000312 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3440,57.38 MB,18.8744%) FilterBlock(93,913.05 KB,0.293305%) IndexBlock(93,1.51 MB,0.495318%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:16:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:11 compute-1 nova_compute[230518]: 2025-10-02 13:16:11.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.160452) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971160492, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1809, "num_deletes": 262, "total_data_size": 4073040, "memory_usage": 4128304, "flush_reason": "Manual Compaction"}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971175346, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 2675865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73722, "largest_seqno": 75526, "table_properties": {"data_size": 2668131, "index_size": 4611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16622, "raw_average_key_size": 20, "raw_value_size": 2652457, "raw_average_value_size": 3282, "num_data_blocks": 202, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410829, "oldest_key_time": 1759410829, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 14944 microseconds, and 5842 cpu microseconds.
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.175389) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 2675865 bytes OK
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.175413) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179877) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179932) EVENT_LOG_v1 {"time_micros": 1759410971179920, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.179959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 4064639, prev total WAL file size 4064639, number of live WAL files 2.
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.181086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303139' seq:0, type:0; will stop at (end)
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(2613KB)], [150(11MB)]
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971181123, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 14751264, "oldest_snapshot_seqno": -1}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 9809 keys, 14611687 bytes, temperature: kUnknown
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971300500, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 14611687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14545538, "index_size": 40500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 258068, "raw_average_key_size": 26, "raw_value_size": 14371017, "raw_average_value_size": 1465, "num_data_blocks": 1556, "num_entries": 9809, "num_filter_entries": 9809, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759410971, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.300764) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14611687 bytes
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.302077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.5 rd, 122.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 10348, records dropped: 539 output_compression: NoCompression
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.302091) EVENT_LOG_v1 {"time_micros": 1759410971302084, "job": 96, "event": "compaction_finished", "compaction_time_micros": 119451, "compaction_time_cpu_micros": 50643, "output_level": 6, "num_output_files": 1, "total_output_size": 14611687, "num_input_records": 10348, "num_output_records": 9809, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971302622, "job": 96, "event": "table_file_deletion", "file_number": 152}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759410971304922, "job": 96, "event": "table_file_deletion", "file_number": 150}
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.180963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:11 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:16:11.305037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:16:12 compute-1 ceph-mon[80926]: pgmap v3138: 305 pgs: 305 active+clean; 198 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 02 13:16:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:14 compute-1 ceph-mon[80926]: pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:14 compute-1 nova_compute[230518]: 2025-10-02 13:16:14.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:14 compute-1 podman[311233]: 2025-10-02 13:16:14.79878211 +0000 UTC m=+0.051693933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 13:16:14 compute-1 podman[311234]: 2025-10-02 13:16:14.85202681 +0000 UTC m=+0.086393251 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:16:15 compute-1 ceph-mon[80926]: pgmap v3140: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:16:16 compute-1 nova_compute[230518]: 2025-10-02 13:16:16.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/629256755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:17 compute-1 ceph-mon[80926]: pgmap v3141: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:16:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:19 compute-1 nova_compute[230518]: 2025-10-02 13:16:19.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:19 compute-1 ceph-mon[80926]: pgmap v3142: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 137 KiB/s rd, 1.0 MiB/s wr, 34 op/s
Oct 02 13:16:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:20.298 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:16:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:20.299 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:16:20 compute-1 nova_compute[230518]: 2025-10-02 13:16:20.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:20.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:21 compute-1 nova_compute[230518]: 2025-10-02 13:16:21.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:21 compute-1 ceph-mon[80926]: pgmap v3143: 305 pgs: 305 active+clean; 178 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 1.0 MiB/s wr, 48 op/s
Oct 02 13:16:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:22.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3683690967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:23.303 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:16:23 compute-1 ceph-mon[80926]: pgmap v3144: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 29 KiB/s wr, 24 op/s
Oct 02 13:16:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:24 compute-1 nova_compute[230518]: 2025-10-02 13:16:24.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:24.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:16:25.971 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:26 compute-1 ceph-mon[80926]: pgmap v3145: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 17 KiB/s wr, 21 op/s
Oct 02 13:16:26 compute-1 nova_compute[230518]: 2025-10-02 13:16:26.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:26.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:26.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:28 compute-1 ceph-mon[80926]: pgmap v3146: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 17 KiB/s wr, 31 op/s
Oct 02 13:16:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:28.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:28.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:29 compute-1 nova_compute[230518]: 2025-10-02 13:16:29.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:30 compute-1 ceph-mon[80926]: pgmap v3147: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/256858337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1660275554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.081 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.082 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3079176688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:30.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.573 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.760 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4231MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.762 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:16:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:30.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.874 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.875 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.895 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.941 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:16:30 compute-1 nova_compute[230518]: 2025-10-02 13:16:30.941 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.016 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.046 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.130 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:16:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3079176688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:16:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1391326950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.760 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:16:31 compute-1 nova_compute[230518]: 2025-10-02 13:16:31.766 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:16:32 compute-1 ceph-mon[80926]: pgmap v3148: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Oct 02 13:16:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1391326950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:32.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:32 compute-1 nova_compute[230518]: 2025-10-02 13:16:32.821 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:16:32 compute-1 nova_compute[230518]: 2025-10-02 13:16:32.851 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:16:32 compute-1 nova_compute[230518]: 2025-10-02 13:16:32.852 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:16:33 compute-1 ceph-mon[80926]: pgmap v3149: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1023 B/s wr, 16 op/s
Oct 02 13:16:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3044782465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:34 compute-1 nova_compute[230518]: 2025-10-02 13:16:34.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:34.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3539885036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:34.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:16:35 compute-1 nova_compute[230518]: 2025-10-02 13:16:35.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:35 compute-1 ceph-mon[80926]: pgmap v3150: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.6 KiB/s rd, 170 B/s wr, 10 op/s
Oct 02 13:16:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/207042207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:16:35 compute-1 podman[311318]: 2025-10-02 13:16:35.816992718 +0000 UTC m=+0.049855105 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:16:35 compute-1 podman[311317]: 2025-10-02 13:16:35.832344049 +0000 UTC m=+0.079808623 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:16:36 compute-1 nova_compute[230518]: 2025-10-02 13:16:36.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:36.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:36.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:37 compute-1 nova_compute[230518]: 2025-10-02 13:16:37.086 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:37 compute-1 ceph-mon[80926]: pgmap v3151: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 910 KiB/s wr, 32 op/s
Oct 02 13:16:38 compute-1 nova_compute[230518]: 2025-10-02 13:16:38.055 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:38 compute-1 nova_compute[230518]: 2025-10-02 13:16:38.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:16:38 compute-1 nova_compute[230518]: 2025-10-02 13:16:38.056 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:16:38 compute-1 nova_compute[230518]: 2025-10-02 13:16:38.085 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:16:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:38.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:38.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:39 compute-1 nova_compute[230518]: 2025-10-02 13:16:39.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:39 compute-1 ceph-mon[80926]: pgmap v3152: 305 pgs: 305 active+clean; 143 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 910 KiB/s wr, 21 op/s
Oct 02 13:16:40 compute-1 nova_compute[230518]: 2025-10-02 13:16:40.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:40.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:40.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/637609681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:41 compute-1 nova_compute[230518]: 2025-10-02 13:16:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:41 compute-1 nova_compute[230518]: 2025-10-02 13:16:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:41 compute-1 ceph-mon[80926]: pgmap v3153: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:16:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2286483105' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:16:42 compute-1 sudo[311363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-1 sudo[311363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 sudo[311363]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 sudo[311388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:42 compute-1 sudo[311388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 sudo[311388]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 sudo[311413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-1 sudo[311413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 sudo[311413]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 sudo[311438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:16:42 compute-1 sudo[311438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:42.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:42 compute-1 sudo[311438]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:42 compute-1 sudo[311494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-1 sudo[311494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 sudo[311494]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 sudo[311519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:16:42 compute-1 sudo[311519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:42 compute-1 sudo[311519]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:42 compute-1 sudo[311544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:42 compute-1 sudo[311544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:43 compute-1 sudo[311544]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:43 compute-1 nova_compute[230518]: 2025-10-02 13:16:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:43 compute-1 nova_compute[230518]: 2025-10-02 13:16:43.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:16:43 compute-1 sudo[311569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 13:16:43 compute-1 sudo[311569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:43 compute-1 sudo[311569]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:44 compute-1 ceph-mon[80926]: pgmap v3154: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:44 compute-1 nova_compute[230518]: 2025-10-02 13:16:44.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:16:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:44.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:16:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:44.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:45 compute-1 nova_compute[230518]: 2025-10-02 13:16:45.059 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 13:16:45 compute-1 ceph-mon[80926]: pgmap v3155: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:16:45 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:16:45 compute-1 podman[311613]: 2025-10-02 13:16:45.805495535 +0000 UTC m=+0.053753667 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:16:45 compute-1 podman[311614]: 2025-10-02 13:16:45.821211758 +0000 UTC m=+0.067613161 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:16:46 compute-1 nova_compute[230518]: 2025-10-02 13:16:46.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:46.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:46 compute-1 nova_compute[230518]: 2025-10-02 13:16:46.700 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:16:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:47 compute-1 ceph-mon[80926]: pgmap v3156: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:16:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:48.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:49 compute-1 nova_compute[230518]: 2025-10-02 13:16:49.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:49 compute-1 ceph-mon[80926]: pgmap v3157: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:50.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:51 compute-1 sudo[311654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:16:51 compute-1 sudo[311654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:51 compute-1 sudo[311654]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:51 compute-1 sudo[311679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:16:51 compute-1 sudo[311679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:16:51 compute-1 sudo[311679]: pam_unix(sudo:session): session closed for user root
Oct 02 13:16:51 compute-1 nova_compute[230518]: 2025-10-02 13:16:51.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:51 compute-1 ceph-mon[80926]: pgmap v3158: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 919 KiB/s wr, 79 op/s
Oct 02 13:16:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:16:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:16:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:16:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:53 compute-1 ceph-mon[80926]: pgmap v3159: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct 02 13:16:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:54 compute-1 nova_compute[230518]: 2025-10-02 13:16:54.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:54.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:54.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:55 compute-1 ceph-mon[80926]: pgmap v3160: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:16:56 compute-1 nova_compute[230518]: 2025-10-02 13:16:56.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:16:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:57 compute-1 ceph-mon[80926]: pgmap v3161: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Oct 02 13:16:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:16:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:16:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:16:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:16:58.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:16:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:16:59 compute-1 nova_compute[230518]: 2025-10-02 13:16:59.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:00 compute-1 ceph-mon[80926]: pgmap v3162: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 282 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct 02 13:17:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:00.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:00.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:01 compute-1 nova_compute[230518]: 2025-10-02 13:17:01.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:02 compute-1 ceph-mon[80926]: pgmap v3163: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:02.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:03 compute-1 ceph-mon[80926]: pgmap v3164: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:04 compute-1 nova_compute[230518]: 2025-10-02 13:17:04.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:04.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:04.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:17:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:17:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:05 compute-1 ceph-mon[80926]: pgmap v3165: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:17:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/87994602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:06 compute-1 nova_compute[230518]: 2025-10-02 13:17:06.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:06.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:06 compute-1 podman[311705]: 2025-10-02 13:17:06.807361306 +0000 UTC m=+0.053830279 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Oct 02 13:17:06 compute-1 podman[311704]: 2025-10-02 13:17:06.832305749 +0000 UTC m=+0.085704239 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller)
Oct 02 13:17:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:06.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:07 compute-1 ceph-mon[80926]: pgmap v3166: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:17:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4015710855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:17:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:08.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:09 compute-1 nova_compute[230518]: 2025-10-02 13:17:09.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:09 compute-1 ceph-mon[80926]: pgmap v3167: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 828 KiB/s wr, 13 op/s
Oct 02 13:17:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:10.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:11 compute-1 nova_compute[230518]: 2025-10-02 13:17:11.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:12 compute-1 ceph-mon[80926]: pgmap v3168: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 828 KiB/s wr, 15 op/s
Oct 02 13:17:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:12.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.180036) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033180068, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 251, "total_data_size": 1844974, "memory_usage": 1878224, "flush_reason": "Manual Compaction"}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033217422, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 1207876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75531, "largest_seqno": 76422, "table_properties": {"data_size": 1203607, "index_size": 1984, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9581, "raw_average_key_size": 19, "raw_value_size": 1195082, "raw_average_value_size": 2484, "num_data_blocks": 85, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759410972, "oldest_key_time": 1759410972, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 37427 microseconds, and 3635 cpu microseconds.
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.217463) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 1207876 bytes OK
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.217482) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251902) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251944) EVENT_LOG_v1 {"time_micros": 1759411033251935, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.251966) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1840371, prev total WAL file size 1840371, number of live WAL files 2.
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.252696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(1179KB)], [153(13MB)]
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033252753, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 15819563, "oldest_snapshot_seqno": -1}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 9771 keys, 13946196 bytes, temperature: kUnknown
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033355543, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 13946196, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13880908, "index_size": 39767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24453, "raw_key_size": 257971, "raw_average_key_size": 26, "raw_value_size": 13707628, "raw_average_value_size": 1402, "num_data_blocks": 1520, "num_entries": 9771, "num_filter_entries": 9771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.355766) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 13946196 bytes
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.389249) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.8 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.9 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(24.6) write-amplify(11.5) OK, records in: 10290, records dropped: 519 output_compression: NoCompression
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.389301) EVENT_LOG_v1 {"time_micros": 1759411033389271, "job": 98, "event": "compaction_finished", "compaction_time_micros": 102850, "compaction_time_cpu_micros": 31592, "output_level": 6, "num_output_files": 1, "total_output_size": 13946196, "num_input_records": 10290, "num_output_records": 9771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033389627, "job": 98, "event": "table_file_deletion", "file_number": 155}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411033392207, "job": 98, "event": "table_file_deletion", "file_number": 153}
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.252576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:13.392340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:14 compute-1 ceph-mon[80926]: pgmap v3169: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 15 KiB/s wr, 11 op/s
Oct 02 13:17:14 compute-1 nova_compute[230518]: 2025-10-02 13:17:14.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:14.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:17:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:17:15 compute-1 ceph-mon[80926]: pgmap v3170: 305 pgs: 305 active+clean; 175 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 15 KiB/s wr, 20 op/s
Oct 02 13:17:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3499506024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:16 compute-1 nova_compute[230518]: 2025-10-02 13:17:16.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.379742) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036379782, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 282, "num_deletes": 251, "total_data_size": 76208, "memory_usage": 81704, "flush_reason": "Manual Compaction"}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036381889, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 49090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76427, "largest_seqno": 76704, "table_properties": {"data_size": 47203, "index_size": 115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5416, "raw_average_key_size": 20, "raw_value_size": 43494, "raw_average_value_size": 162, "num_data_blocks": 5, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411034, "oldest_key_time": 1759411034, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 2182 microseconds, and 849 cpu microseconds.
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381925) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 49090 bytes OK
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.381943) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383916) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383933) EVENT_LOG_v1 {"time_micros": 1759411036383928, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.383950) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 74079, prev total WAL file size 74079, number of live WAL files 2.
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.384358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(47KB)], [156(13MB)]
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036384383, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 13995286, "oldest_snapshot_seqno": -1}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 9530 keys, 10147408 bytes, temperature: kUnknown
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036446585, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10147408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10088625, "index_size": 33838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23877, "raw_key_size": 253162, "raw_average_key_size": 26, "raw_value_size": 9924406, "raw_average_value_size": 1041, "num_data_blocks": 1272, "num_entries": 9530, "num_filter_entries": 9530, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.446907) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10147408 bytes
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.448542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 224.7 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(491.8) write-amplify(206.7) OK, records in: 10039, records dropped: 509 output_compression: NoCompression
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.448573) EVENT_LOG_v1 {"time_micros": 1759411036448559, "job": 100, "event": "compaction_finished", "compaction_time_micros": 62296, "compaction_time_cpu_micros": 35764, "output_level": 6, "num_output_files": 1, "total_output_size": 10147408, "num_input_records": 10039, "num_output_records": 9530, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036448757, "job": 100, "event": "table_file_deletion", "file_number": 158}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411036453696, "job": 100, "event": "table_file_deletion", "file_number": 156}
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.384301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:17:16.453826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:17:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:16 compute-1 podman[311750]: 2025-10-02 13:17:16.809045809 +0000 UTC m=+0.056596386 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:17:16 compute-1 podman[311749]: 2025-10-02 13:17:16.810468174 +0000 UTC m=+0.061390067 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:17:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:17 compute-1 ceph-mon[80926]: pgmap v3171: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 32 op/s
Oct 02 13:17:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:17:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:17:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1157735850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:17:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:17:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:19 compute-1 nova_compute[230518]: 2025-10-02 13:17:19.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:19 compute-1 ceph-mon[80926]: pgmap v3172: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 31 op/s
Oct 02 13:17:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3945750907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:20 compute-1 nova_compute[230518]: 2025-10-02 13:17:20.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:20 compute-1 nova_compute[230518]: 2025-10-02 13:17:20.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:17:20 compute-1 nova_compute[230518]: 2025-10-02 13:17:20.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:17:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:20.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:21 compute-1 nova_compute[230518]: 2025-10-02 13:17:21.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:17:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:17:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:21 compute-1 ceph-mon[80926]: pgmap v3173: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 2.8 KiB/s wr, 165 op/s
Oct 02 13:17:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3295771948' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct 02 13:17:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:22.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct 02 13:17:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:22.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:23.465 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:17:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:23.466 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:17:23 compute-1 nova_compute[230518]: 2025-10-02 13:17:23.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:23 compute-1 ceph-mon[80926]: pgmap v3174: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 138 KiB/s rd, 3.3 KiB/s wr, 219 op/s
Oct 02 13:17:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:24 compute-1 nova_compute[230518]: 2025-10-02 13:17:24.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:24.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:25 compute-1 nova_compute[230518]: 2025-10-02 13:17:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:25 compute-1 nova_compute[230518]: 2025-10-02 13:17:25.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.973 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.974 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:25 compute-1 ceph-mon[80926]: pgmap v3175: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 149 KiB/s rd, 1.6 KiB/s wr, 236 op/s
Oct 02 13:17:26 compute-1 nova_compute[230518]: 2025-10-02 13:17:26.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:26.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:28 compute-1 ceph-mon[80926]: pgmap v3176: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 1.8 KiB/s wr, 229 op/s
Oct 02 13:17:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:28.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:29 compute-1 nova_compute[230518]: 2025-10-02 13:17:29.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:30 compute-1 ceph-mon[80926]: pgmap v3177: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:30.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.089 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.089 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/610322793' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2563700799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:17:31.468 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:17:31 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:31 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/300480324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.607 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.748 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.749 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4212MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.847 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.847 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:17:31 compute-1 nova_compute[230518]: 2025-10-02 13:17:31.864 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:17:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:17:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010626883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:32 compute-1 ceph-mon[80926]: pgmap v3178: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 767 B/s wr, 217 op/s
Oct 02 13:17:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/300480324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:32 compute-1 nova_compute[230518]: 2025-10-02 13:17:32.313 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:17:32 compute-1 nova_compute[230518]: 2025-10-02 13:17:32.320 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:17:32 compute-1 nova_compute[230518]: 2025-10-02 13:17:32.335 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:17:32 compute-1 nova_compute[230518]: 2025-10-02 13:17:32.336 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:17:32 compute-1 nova_compute[230518]: 2025-10-02 13:17:32.337 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:17:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e396 e396: 3 total, 3 up, 3 in
Oct 02 13:17:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:32.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1010626883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:33 compute-1 ceph-mon[80926]: pgmap v3179: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 938 B/s wr, 84 op/s
Oct 02 13:17:33 compute-1 ceph-mon[80926]: osdmap e396: 3 total, 3 up, 3 in
Oct 02 13:17:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:34 compute-1 nova_compute[230518]: 2025-10-02 13:17:34.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2367091814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1280913833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:17:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e397 e397: 3 total, 3 up, 3 in
Oct 02 13:17:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:34.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:35 compute-1 ceph-mon[80926]: pgmap v3181: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 614 B/s wr, 5 op/s
Oct 02 13:17:35 compute-1 ceph-mon[80926]: osdmap e397: 3 total, 3 up, 3 in
Oct 02 13:17:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:17:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:17:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:36 compute-1 nova_compute[230518]: 2025-10-02 13:17:36.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:36 compute-1 nova_compute[230518]: 2025-10-02 13:17:36.322 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:36 compute-1 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:36 compute-1 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:36 compute-1 nova_compute[230518]: 2025-10-02 13:17:36.323 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:17:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:17:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1633275689' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:17:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:36.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:36.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:37 compute-1 nova_compute[230518]: 2025-10-02 13:17:37.049 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:37 compute-1 ceph-mon[80926]: pgmap v3183: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:37 compute-1 podman[311834]: 2025-10-02 13:17:37.80043397 +0000 UTC m=+0.046334675 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent)
Oct 02 13:17:37 compute-1 podman[311833]: 2025-10-02 13:17:37.82916695 +0000 UTC m=+0.079793343 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct 02 13:17:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:38.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:39 compute-1 nova_compute[230518]: 2025-10-02 13:17:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:39 compute-1 nova_compute[230518]: 2025-10-02 13:17:39.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:39 compute-1 ceph-mon[80926]: pgmap v3184: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct 02 13:17:40 compute-1 nova_compute[230518]: 2025-10-02 13:17:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:40 compute-1 nova_compute[230518]: 2025-10-02 13:17:40.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:17:40 compute-1 nova_compute[230518]: 2025-10-02 13:17:40.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:17:40 compute-1 nova_compute[230518]: 2025-10-02 13:17:40.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:17:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:41 compute-1 nova_compute[230518]: 2025-10-02 13:17:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 e398: 3 total, 3 up, 3 in
Oct 02 13:17:41 compute-1 nova_compute[230518]: 2025-10-02 13:17:41.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:42 compute-1 ceph-mon[80926]: pgmap v3185: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Oct 02 13:17:42 compute-1 ceph-mon[80926]: osdmap e398: 3 total, 3 up, 3 in
Oct 02 13:17:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:17:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:17:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:43 compute-1 nova_compute[230518]: 2025-10-02 13:17:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:17:43 compute-1 ceph-mon[80926]: pgmap v3187: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Oct 02 13:17:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:44 compute-1 nova_compute[230518]: 2025-10-02 13:17:44.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:45 compute-1 ceph-mon[80926]: pgmap v3188: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Oct 02 13:17:46 compute-1 nova_compute[230518]: 2025-10-02 13:17:46.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:46.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:46.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:47 compute-1 ceph-mon[80926]: pgmap v3189: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:47 compute-1 podman[311881]: 2025-10-02 13:17:47.799264633 +0000 UTC m=+0.057731212 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:17:47 compute-1 podman[311882]: 2025-10-02 13:17:47.802020159 +0000 UTC m=+0.055188482 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:17:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:48.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:17:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:17:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:49 compute-1 nova_compute[230518]: 2025-10-02 13:17:49.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:49 compute-1 ceph-mon[80926]: pgmap v3190: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:17:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:50.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:51 compute-1 nova_compute[230518]: 2025-10-02 13:17:51.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:51 compute-1 sudo[311921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:51 compute-1 sudo[311921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-1 sudo[311921]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-1 sudo[311946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:51 compute-1 sudo[311946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-1 sudo[311946]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-1 sudo[311971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:51 compute-1 sudo[311971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-1 sudo[311971]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:51 compute-1 sudo[311996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:17:51 compute-1 sudo[311996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:51 compute-1 ceph-mon[80926]: pgmap v3191: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:51 compute-1 sudo[311996]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-1 sudo[312052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:52 compute-1 sudo[312052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-1 sudo[312052]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-1 sudo[312077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:17:52 compute-1 sudo[312077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-1 sudo[312077]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-1 sudo[312102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:17:52 compute-1 sudo[312102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-1 sudo[312102]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:52 compute-1 sudo[312127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 20fdc58c-b037-5094-a8ef-d490aa7c36f3 -- inventory --format=json-pretty --filter-for-batch
Oct 02 13:17:52 compute-1 sudo[312127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:17:52 compute-1 podman[312191]: 2025-10-02 13:17:52.557232366 +0000 UTC m=+0.029334982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:52 compute-1 podman[312191]: 2025-10-02 13:17:52.866901308 +0000 UTC m=+0.339003904 container create 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct 02 13:17:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:53 compute-1 systemd[1]: Started libpod-conmon-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope.
Oct 02 13:17:53 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:17:53 compute-1 podman[312191]: 2025-10-02 13:17:53.124608522 +0000 UTC m=+0.596711148 container init 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 02 13:17:53 compute-1 podman[312191]: 2025-10-02 13:17:53.132660675 +0000 UTC m=+0.604763271 container start 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 02 13:17:53 compute-1 vigilant_jepsen[312207]: 167 167
Oct 02 13:17:53 compute-1 systemd[1]: libpod-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope: Deactivated successfully.
Oct 02 13:17:53 compute-1 conmon[312207]: conmon 3a7bb1b1f47b3f8d921d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope/container/memory.events
Oct 02 13:17:53 compute-1 podman[312191]: 2025-10-02 13:17:53.15420066 +0000 UTC m=+0.626303276 container attach 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct 02 13:17:53 compute-1 podman[312191]: 2025-10-02 13:17:53.155686927 +0000 UTC m=+0.627789523 container died 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct 02 13:17:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-bbdf6ff5aee07841ef3644475a497a49320d1921a2cfc0c62f79dac3c7e37c50-merged.mount: Deactivated successfully.
Oct 02 13:17:53 compute-1 podman[312191]: 2025-10-02 13:17:53.200012858 +0000 UTC m=+0.672115454 container remove 3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jepsen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:53 compute-1 systemd[1]: libpod-conmon-3a7bb1b1f47b3f8d921ddd84b2344c7a1d1bcf1cf5a916f026363981a02aef0c.scope: Deactivated successfully.
Oct 02 13:17:53 compute-1 podman[312231]: 2025-10-02 13:17:53.394894331 +0000 UTC m=+0.063495764 container create ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:17:53 compute-1 systemd[1]: Started libpod-conmon-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope.
Oct 02 13:17:53 compute-1 podman[312231]: 2025-10-02 13:17:53.356668481 +0000 UTC m=+0.025269914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct 02 13:17:53 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:17:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 02 13:17:53 compute-1 podman[312231]: 2025-10-02 13:17:53.490668215 +0000 UTC m=+0.159269668 container init ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct 02 13:17:53 compute-1 podman[312231]: 2025-10-02 13:17:53.497617132 +0000 UTC m=+0.166218545 container start ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:53 compute-1 podman[312231]: 2025-10-02 13:17:53.501491894 +0000 UTC m=+0.170093347 container attach ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:17:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:53 compute-1 ceph-mon[80926]: pgmap v3192: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:54 compute-1 nova_compute[230518]: 2025-10-02 13:17:54.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:54 compute-1 great_ritchie[312247]: [
Oct 02 13:17:54 compute-1 great_ritchie[312247]:     {
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "available": false,
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "ceph_device": false,
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "lsm_data": {},
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "lvs": [],
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "path": "/dev/sr0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "rejected_reasons": [
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "Has a FileSystem",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "Insufficient space (<5GB)"
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         ],
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         "sys_api": {
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "actuators": null,
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "device_nodes": "sr0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "devname": "sr0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "human_readable_size": "482.00 KB",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "id_bus": "ata",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "model": "QEMU DVD-ROM",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "nr_requests": "2",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "parent": "/dev/sr0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "partitions": {},
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "path": "/dev/sr0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "removable": "1",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "rev": "2.5+",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "ro": "0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "rotational": "0",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "sas_address": "",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "sas_device_handle": "",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "scheduler_mode": "mq-deadline",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "sectors": 0,
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "sectorsize": "2048",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "size": 493568.0,
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "support_discard": "2048",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "type": "disk",
Oct 02 13:17:54 compute-1 great_ritchie[312247]:             "vendor": "QEMU"
Oct 02 13:17:54 compute-1 great_ritchie[312247]:         }
Oct 02 13:17:54 compute-1 great_ritchie[312247]:     }
Oct 02 13:17:54 compute-1 great_ritchie[312247]: ]
Oct 02 13:17:54 compute-1 systemd[1]: libpod-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Deactivated successfully.
Oct 02 13:17:54 compute-1 podman[312231]: 2025-10-02 13:17:54.594639363 +0000 UTC m=+1.263240776 container died ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 02 13:17:54 compute-1 systemd[1]: libpod-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Consumed 1.078s CPU time.
Oct 02 13:17:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-37a67d161a7437767e344b4c7731a97feb3235aab033214bfd49d87256f59f77-merged.mount: Deactivated successfully.
Oct 02 13:17:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:54 compute-1 podman[312231]: 2025-10-02 13:17:54.685892115 +0000 UTC m=+1.354493528 container remove ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct 02 13:17:54 compute-1 systemd[1]: libpod-conmon-ede7938db5fe5fc7c89e8317d574aa4bfeba9f13ac5b7c8d2aab56a558c3a36c.scope: Deactivated successfully.
Oct 02 13:17:54 compute-1 sudo[312127]: pam_unix(sudo:session): session closed for user root
Oct 02 13:17:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:55 compute-1 ceph-mon[80926]: pgmap v3193: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:17:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:17:56 compute-1 nova_compute[230518]: 2025-10-02 13:17:56.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:56.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:57 compute-1 ceph-mon[80926]: pgmap v3194: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:17:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:17:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:17:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:17:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:17:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:17:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:17:58.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:17:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:17:59 compute-1 nova_compute[230518]: 2025-10-02 13:17:59.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:17:59 compute-1 ceph-mon[80926]: pgmap v3195: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:18:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:00.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 e399: 3 total, 3 up, 3 in
Oct 02 13:18:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:00.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:01 compute-1 nova_compute[230518]: 2025-10-02 13:18:01.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:01 compute-1 sudo[313362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:18:01 compute-1 sudo[313362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:01 compute-1 sudo[313362]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:01 compute-1 sudo[313387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:18:01 compute-1 sudo[313387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:18:01 compute-1 sudo[313387]: pam_unix(sudo:session): session closed for user root
Oct 02 13:18:02 compute-1 ceph-mon[80926]: pgmap v3196: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 683 KiB/s wr, 1 op/s
Oct 02 13:18:02 compute-1 ceph-mon[80926]: osdmap e399: 3 total, 3 up, 3 in
Oct 02 13:18:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:18:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:02.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:04 compute-1 ceph-mon[80926]: pgmap v3198: 305 pgs: 305 active+clean; 128 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 820 KiB/s wr, 14 op/s
Oct 02 13:18:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:04 compute-1 nova_compute[230518]: 2025-10-02 13:18:04.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:04.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1810113878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:06 compute-1 ceph-mon[80926]: pgmap v3199: 305 pgs: 305 active+clean; 136 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 1.6 MiB/s wr, 16 op/s
Oct 02 13:18:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:18:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3080546147' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:18:06 compute-1 nova_compute[230518]: 2025-10-02 13:18:06.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:08 compute-1 ceph-mon[80926]: pgmap v3200: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:08 compute-1 podman[313413]: 2025-10-02 13:18:08.804589906 +0000 UTC m=+0.051046732 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:18:08 compute-1 podman[313412]: 2025-10-02 13:18:08.866301932 +0000 UTC m=+0.112529041 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:18:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:08.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:09 compute-1 ceph-mon[80926]: pgmap v3201: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 2.0 MiB/s wr, 25 op/s
Oct 02 13:18:09 compute-1 nova_compute[230518]: 2025-10-02 13:18:09.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:10.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:11 compute-1 nova_compute[230518]: 2025-10-02 13:18:11.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:11 compute-1 ceph-mon[80926]: pgmap v3202: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 56 op/s
Oct 02 13:18:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3627526565' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1859342343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:12.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:12.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:13 compute-1 ceph-mon[80926]: pgmap v3203: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 49 op/s
Oct 02 13:18:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:14 compute-1 nova_compute[230518]: 2025-10-02 13:18:14.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:14.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:15 compute-1 ceph-mon[80926]: pgmap v3204: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.8 MiB/s wr, 37 op/s
Oct 02 13:18:16 compute-1 ovn_controller[129257]: 2025-10-02T13:18:16Z|00866|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 13:18:16 compute-1 nova_compute[230518]: 2025-10-02 13:18:16.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:16.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:17 compute-1 ceph-mon[80926]: pgmap v3205: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 61 op/s
Oct 02 13:18:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:18.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:18 compute-1 podman[313458]: 2025-10-02 13:18:18.802125788 +0000 UTC m=+0.056808843 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct 02 13:18:18 compute-1 podman[313457]: 2025-10-02 13:18:18.821232067 +0000 UTC m=+0.078130582 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:18:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:18.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:19 compute-1 nova_compute[230518]: 2025-10-02 13:18:19.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:20 compute-1 ceph-mon[80926]: pgmap v3206: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 54 op/s
Oct 02 13:18:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:20.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:21 compute-1 nova_compute[230518]: 2025-10-02 13:18:21.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:22 compute-1 ceph-mon[80926]: pgmap v3207: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 02 13:18:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:22.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/67111490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:24 compute-1 ceph-mon[80926]: pgmap v3208: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 312 KiB/s wr, 75 op/s
Oct 02 13:18:24 compute-1 nova_compute[230518]: 2025-10-02 13:18:24.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:24.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:25 compute-1 ceph-mon[80926]: pgmap v3209: 305 pgs: 305 active+clean; 187 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:18:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 e400: 3 total, 3 up, 3 in
Oct 02 13:18:25 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct 02 13:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:26 compute-1 nova_compute[230518]: 2025-10-02 13:18:26.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:26 compute-1 ceph-mon[80926]: osdmap e400: 3 total, 3 up, 3 in
Oct 02 13:18:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:26.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:26.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:27.318 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:18:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:27.320 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:18:27 compute-1 nova_compute[230518]: 2025-10-02 13:18:27.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:27 compute-1 ceph-mon[80926]: pgmap v3211: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:28.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:28.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:29 compute-1 nova_compute[230518]: 2025-10-02 13:18:29.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:29 compute-1 ceph-mon[80926]: pgmap v3212: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 136 KiB/s wr, 83 op/s
Oct 02 13:18:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:18:30.322 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:18:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1475968903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:31 compute-1 nova_compute[230518]: 2025-10-02 13:18:31.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:31 compute-1 ceph-mon[80926]: pgmap v3213: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 464 KiB/s rd, 2.6 MiB/s wr, 91 op/s
Oct 02 13:18:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2399959325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.096 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.097 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838882996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.690 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.691 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4212MB free_disk=20.942913055419922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:18:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:18:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:32.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.771 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.772 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:18:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/838882996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:32 compute-1 nova_compute[230518]: 2025-10-02 13:18:32.802 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:18:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:18:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/604091308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:33 compute-1 nova_compute[230518]: 2025-10-02 13:18:33.283 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:18:33 compute-1 nova_compute[230518]: 2025-10-02 13:18:33.291 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:18:33 compute-1 nova_compute[230518]: 2025-10-02 13:18:33.307 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:18:33 compute-1 nova_compute[230518]: 2025-10-02 13:18:33.309 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:18:33 compute-1 nova_compute[230518]: 2025-10-02 13:18:33.309 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:18:34 compute-1 ceph-mon[80926]: pgmap v3214: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/604091308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/552845782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:34 compute-1 nova_compute[230518]: 2025-10-02 13:18:34.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:34.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:34.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2379871822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:18:36 compute-1 ceph-mon[80926]: pgmap v3215: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 470 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Oct 02 13:18:36 compute-1 nova_compute[230518]: 2025-10-02 13:18:36.309 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:36 compute-1 nova_compute[230518]: 2025-10-02 13:18:36.310 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:36 compute-1 nova_compute[230518]: 2025-10-02 13:18:36.310 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:18:36 compute-1 nova_compute[230518]: 2025-10-02 13:18:36.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:36.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:36.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:37 compute-1 nova_compute[230518]: 2025-10-02 13:18:37.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:38 compute-1 ceph-mon[80926]: pgmap v3216: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 418 KiB/s rd, 2.3 MiB/s wr, 81 op/s
Oct 02 13:18:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:39 compute-1 nova_compute[230518]: 2025-10-02 13:18:39.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:39 compute-1 ceph-mon[80926]: pgmap v3217: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:39 compute-1 nova_compute[230518]: 2025-10-02 13:18:39.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:39 compute-1 podman[313543]: 2025-10-02 13:18:39.829730404 +0000 UTC m=+0.059508388 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Oct 02 13:18:39 compute-1 podman[313542]: 2025-10-02 13:18:39.925303151 +0000 UTC m=+0.152974999 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 02 13:18:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:40.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:41 compute-1 nova_compute[230518]: 2025-10-02 13:18:41.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:41 compute-1 ceph-mon[80926]: pgmap v3218: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Oct 02 13:18:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:43 compute-1 nova_compute[230518]: 2025-10-02 13:18:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:43 compute-1 ceph-mon[80926]: pgmap v3219: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 02 13:18:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 e401: 3 total, 3 up, 3 in
Oct 02 13:18:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:44 compute-1 nova_compute[230518]: 2025-10-02 13:18:44.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:44 compute-1 ceph-mon[80926]: osdmap e401: 3 total, 3 up, 3 in
Oct 02 13:18:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:45 compute-1 nova_compute[230518]: 2025-10-02 13:18:45.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:45 compute-1 ceph-mon[80926]: pgmap v3221: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:18:45 compute-1 nova_compute[230518]: 2025-10-02 13:18:45.983 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:18:46 compute-1 nova_compute[230518]: 2025-10-02 13:18:46.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:47 compute-1 ceph-mon[80926]: pgmap v3222: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:48.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:49.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:49 compute-1 nova_compute[230518]: 2025-10-02 13:18:49.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:49 compute-1 podman[313587]: 2025-10-02 13:18:49.789400792 +0000 UTC m=+0.047777140 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:18:49 compute-1 podman[313588]: 2025-10-02 13:18:49.803991089 +0000 UTC m=+0.058767173 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd)
Oct 02 13:18:50 compute-1 ceph-mon[80926]: pgmap v3223: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 14 op/s
Oct 02 13:18:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:50.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:51 compute-1 nova_compute[230518]: 2025-10-02 13:18:51.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 e402: 3 total, 3 up, 3 in
Oct 02 13:18:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3546321456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:51 compute-1 ceph-mon[80926]: pgmap v3224: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 23 KiB/s wr, 17 op/s
Oct 02 13:18:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/217378257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:18:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:52.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:53 compute-1 ceph-mon[80926]: osdmap e402: 3 total, 3 up, 3 in
Oct 02 13:18:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:18:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:18:54 compute-1 ceph-mon[80926]: pgmap v3226: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 26 KiB/s wr, 20 op/s
Oct 02 13:18:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:54 compute-1 nova_compute[230518]: 2025-10-02 13:18:54.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:54.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:55.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:56 compute-1 nova_compute[230518]: 2025-10-02 13:18:56.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:56 compute-1 ceph-mon[80926]: pgmap v3227: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.0 KiB/s wr, 57 op/s
Oct 02 13:18:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:56.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:57.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:57 compute-1 ceph-mon[80926]: pgmap v3228: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:18:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:18:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:18:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:18:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:18:59.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:18:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:18:59 compute-1 nova_compute[230518]: 2025-10-02 13:18:59.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:18:59 compute-1 ceph-mon[80926]: pgmap v3229: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 KiB/s wr, 88 op/s
Oct 02 13:19:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:00.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:01 compute-1 nova_compute[230518]: 2025-10-02 13:19:01.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:01 compute-1 ceph-mon[80926]: pgmap v3230: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 85 op/s
Oct 02 13:19:02 compute-1 sudo[313624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:02 compute-1 sudo[313624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-1 sudo[313624]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-1 sudo[313649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:19:02 compute-1 sudo[313649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-1 sudo[313649]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-1 sudo[313674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:02 compute-1 sudo[313674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-1 sudo[313674]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-1 sudo[313699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:19:02 compute-1 sudo[313699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:02 compute-1 sudo[313699]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:03.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:04 compute-1 ceph-mon[80926]: pgmap v3231: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 77 op/s
Oct 02 13:19:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:04 compute-1 nova_compute[230518]: 2025-10-02 13:19:04.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:05.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:06 compute-1 nova_compute[230518]: 2025-10-02 13:19:06.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:06 compute-1 ceph-mon[80926]: pgmap v3232: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:19:06 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:19:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:06.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:07.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:08 compute-1 ceph-mon[80926]: pgmap v3233: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 02 13:19:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:08.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:09.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:09 compute-1 ceph-mon[80926]: pgmap v3234: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 496 KiB/s rd, 13 KiB/s wr, 43 op/s
Oct 02 13:19:09 compute-1 nova_compute[230518]: 2025-10-02 13:19:09.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:10 compute-1 podman[313756]: 2025-10-02 13:19:10.80907777 +0000 UTC m=+0.057076371 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:19:10 compute-1 podman[313755]: 2025-10-02 13:19:10.864155918 +0000 UTC m=+0.112826820 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:19:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:11 compute-1 nova_compute[230518]: 2025-10-02 13:19:11.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:11 compute-1 ceph-mon[80926]: pgmap v3235: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 21 KiB/s wr, 45 op/s
Oct 02 13:19:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:13.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:13 compute-1 ceph-mon[80926]: pgmap v3236: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 46 op/s
Oct 02 13:19:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:19:13 compute-1 sudo[313797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:19:13 compute-1 sudo[313797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:13 compute-1 sudo[313797]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:13 compute-1 sudo[313822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:19:13 compute-1 sudo[313822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:19:13 compute-1 sudo[313822]: pam_unix(sudo:session): session closed for user root
Oct 02 13:19:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:14 compute-1 nova_compute[230518]: 2025-10-02 13:19:14.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:14.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:15 compute-1 ceph-mon[80926]: pgmap v3237: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 707 KiB/s rd, 23 KiB/s wr, 50 op/s
Oct 02 13:19:16 compute-1 nova_compute[230518]: 2025-10-02 13:19:16.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:16.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:17.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:17 compute-1 ceph-mon[80926]: pgmap v3238: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 11 KiB/s wr, 36 op/s
Oct 02 13:19:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:19.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:19.344 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:19:19 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:19.345 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:19:19 compute-1 nova_compute[230518]: 2025-10-02 13:19:19.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:19 compute-1 nova_compute[230518]: 2025-10-02 13:19:19.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:19 compute-1 ceph-mon[80926]: pgmap v3239: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 11 KiB/s wr, 8 op/s
Oct 02 13:19:20 compute-1 podman[313848]: 2025-10-02 13:19:20.803395173 +0000 UTC m=+0.053565301 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 02 13:19:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:20 compute-1 podman[313849]: 2025-10-02 13:19:20.812432637 +0000 UTC m=+0.060942163 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:19:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:21 compute-1 nova_compute[230518]: 2025-10-02 13:19:21.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:22 compute-1 ceph-mon[80926]: pgmap v3240: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:19:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:23 compute-1 ceph-mon[80926]: pgmap v3241: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 4.4 KiB/s wr, 6 op/s
Oct 02 13:19:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:19:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:19:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/404047804' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:19:24 compute-1 nova_compute[230518]: 2025-10-02 13:19:24.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:24.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.347 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:25 compute-1 ceph-mon[80926]: pgmap v3242: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 3.4 KiB/s wr, 7 op/s
Oct 02 13:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.975 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:25.976 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:26 compute-1 nova_compute[230518]: 2025-10-02 13:19:26.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:27.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:27 compute-1 ceph-mon[80926]: pgmap v3243: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 4.2 KiB/s wr, 26 op/s
Oct 02 13:19:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:29.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:29 compute-1 nova_compute[230518]: 2025-10-02 13:19:29.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:29 compute-1 ceph-mon[80926]: pgmap v3244: 305 pgs: 305 active+clean; 209 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 4.1 KiB/s wr, 24 op/s
Oct 02 13:19:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3014423520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:19:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:30.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:19:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:31 compute-1 nova_compute[230518]: 2025-10-02 13:19:31.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:31 compute-1 ceph-mon[80926]: pgmap v3245: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 4.4 KiB/s wr, 41 op/s
Oct 02 13:19:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e403 e403: 3 total, 3 up, 3 in
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.080 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.081 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.081 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3958918451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.609 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.798 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.800 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4230MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.800 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.801 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:32.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.936 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:19:32 compute-1 nova_compute[230518]: 2025-10-02 13:19:32.936 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.017 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:33 compute-1 ceph-mon[80926]: osdmap e403: 3 total, 3 up, 3 in
Oct 02 13:19:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3859766783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3958918451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:19:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:19:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2642578138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.504 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.511 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.538 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.540 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:19:33 compute-1 nova_compute[230518]: 2025-10-02 13:19:33.540 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:34 compute-1 ceph-mon[80926]: pgmap v3247: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.4 KiB/s wr, 56 op/s
Oct 02 13:19:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2050096160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2642578138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e404 e404: 3 total, 3 up, 3 in
Oct 02 13:19:34 compute-1 nova_compute[230518]: 2025-10-02 13:19:34.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:36 compute-1 nova_compute[230518]: 2025-10-02 13:19:36.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:36 compute-1 nova_compute[230518]: 2025-10-02 13:19:36.541 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:36 compute-1 nova_compute[230518]: 2025-10-02 13:19:36.541 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:19:36 compute-1 ceph-mon[80926]: osdmap e404: 3 total, 3 up, 3 in
Oct 02 13:19:36 compute-1 ceph-mon[80926]: pgmap v3249: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.0 MiB/s wr, 41 op/s
Oct 02 13:19:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:36.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/76299625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2359424399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:37 compute-1 ceph-mon[80926]: pgmap v3250: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 79 op/s
Oct 02 13:19:38 compute-1 nova_compute[230518]: 2025-10-02 13:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:38 compute-1 nova_compute[230518]: 2025-10-02 13:19:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:38.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:39.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:39 compute-1 nova_compute[230518]: 2025-10-02 13:19:39.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:39 compute-1 ceph-mon[80926]: pgmap v3251: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 54 op/s
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.703 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.703 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.739 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:19:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:40.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.914 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.915 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.919 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:19:40 compute-1 nova_compute[230518]: 2025-10-02 13:19:40.920 2 INFO nova.compute.claims [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.348 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 e405: 3 total, 3 up, 3 in
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:19:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1616618197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.791 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.800 2 DEBUG nova.compute.provider_tree [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:19:41 compute-1 podman[313952]: 2025-10-02 13:19:41.817495625 +0000 UTC m=+0.059559970 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:19:41 compute-1 podman[313951]: 2025-10-02 13:19:41.837903355 +0000 UTC m=+0.087031811 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.890 2 DEBUG nova.scheduler.client.report [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.931 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:41 compute-1 nova_compute[230518]: 2025-10-02 13:19:41.932 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:19:42 compute-1 ceph-mon[80926]: pgmap v3252: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.4 MiB/s wr, 43 op/s
Oct 02 13:19:42 compute-1 ceph-mon[80926]: osdmap e405: 3 total, 3 up, 3 in
Oct 02 13:19:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1616618197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.105 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.106 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.134 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.166 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.347 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.349 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.349 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating image(s)
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.386 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.424 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.451 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.456 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.457 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:42 compute-1 nova_compute[230518]: 2025-10-02 13:19:42.730 2 DEBUG nova.policy [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74f5186fabfb4fea86d32c8ef1f2e354', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:19:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:42.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:19:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:43.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.251 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.252 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:19:43 compute-1 nova_compute[230518]: 2025-10-02 13:19:43.523 2 DEBUG nova.virt.libvirt.imagebackend [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image locations are: [{'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f6be8018-0ea2-42f8-a1d7-8d704069aac9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://20fdc58c-b037-5094-a8ef-d490aa7c36f3/images/f6be8018-0ea2-42f8-a1d7-8d704069aac9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 02 13:19:44 compute-1 nova_compute[230518]: 2025-10-02 13:19:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:44 compute-1 ceph-mon[80926]: pgmap v3254: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.5 MiB/s wr, 38 op/s
Oct 02 13:19:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:44 compute-1 nova_compute[230518]: 2025-10-02 13:19:44.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:44 compute-1 nova_compute[230518]: 2025-10-02 13:19:44.629 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Successfully created port: 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:19:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:45 compute-1 ceph-mon[80926]: pgmap v3255: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 31 op/s
Oct 02 13:19:45 compute-1 nova_compute[230518]: 2025-10-02 13:19:45.976 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.052 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.053 2 DEBUG nova.virt.images [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] f6be8018-0ea2-42f8-a1d7-8d704069aac9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.055 2 DEBUG nova.privsep.utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.056 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.189 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Successfully updated port: 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.264 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.417 2 DEBUG nova.compute.manager [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.418 2 DEBUG nova.compute.manager [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.418 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:46 compute-1 nova_compute[230518]: 2025-10-02 13:19:46.640 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:19:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:46.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:47 compute-1 nova_compute[230518]: 2025-10-02 13:19:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:19:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:47.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.042 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.part /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted" returned: 0 in 1.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.047 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:48 compute-1 ceph-mon[80926]: pgmap v3256: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.120 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.121 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "bb6d192aed85f84d0f22da0723b257d38ce90e47" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.152 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:48 compute-1 nova_compute[230518]: 2025-10-02 13:19:48.156 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:49.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.350 2 DEBUG nova.network.neutron [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.375 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.376 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance network_info: |[{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.376 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.377 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.629 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.752 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] resizing rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.919 2 DEBUG nova.objects.instance [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.935 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.936 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Ensure instance console log exists: /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.936 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.937 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.937 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.939 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start _get_guest_xml network_info=[{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:19:33Z,direct_url=<?>,disk_format='qcow2',id=f6be8018-0ea2-42f8-a1d7-8d704069aac9,min_disk=0,min_ram=0,name='tempest-scenario-img--1529097385',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:19:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'guest_format': None, 'image_id': 'f6be8018-0ea2-42f8-a1d7-8d704069aac9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.942 2 WARNING nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.947 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.948 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.950 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.950 2 DEBUG nova.virt.libvirt.host [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.951 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.951 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-02T13:19:33Z,direct_url=<?>,disk_format='qcow2',id=f6be8018-0ea2-42f8-a1d7-8d704069aac9,min_disk=0,min_ram=0,name='tempest-scenario-img--1529097385',owner='ced4d30c525c44cca617c3b9838d21b7',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-02T13:19:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.952 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.953 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.954 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.954 2 DEBUG nova.virt.hardware [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:19:49 compute-1 nova_compute[230518]: 2025-10-02 13:19:49.956 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:50 compute-1 ceph-mon[80926]: pgmap v3257: 305 pgs: 305 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:19:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:19:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1397213192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.444 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.477 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.483 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:19:50 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2262272927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.937 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.938 2 DEBUG nova.virt.libvirt.vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:19:42Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.939 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.939 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.940 2 DEBUG nova.objects.instance [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.956 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <uuid>92ef7ede-4ed2-4a81-9849-bbc39c0be573</uuid>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <name>instance-000000d6</name>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:name>tempest-TestMinimumBasicScenario-server-2037893726</nova:name>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:19:49</nova:creationTime>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:user uuid="74f5186fabfb4fea86d32c8ef1f2e354">tempest-TestMinimumBasicScenario-1527105691-project-member</nova:user>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:project uuid="ced4d30c525c44cca617c3b9838d21b7">tempest-TestMinimumBasicScenario-1527105691</nova:project>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="f6be8018-0ea2-42f8-a1d7-8d704069aac9"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <nova:port uuid="585ce74b-9d9e-45eb-a324-9ce87a1fcec0">
Oct 02 13:19:50 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <system>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="serial">92ef7ede-4ed2-4a81-9849-bbc39c0be573</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="uuid">92ef7ede-4ed2-4a81-9849-bbc39c0be573</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </system>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <os>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </os>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <features>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </features>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk">
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </source>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config">
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </source>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:19:50 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:04:8b:f7"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <target dev="tap585ce74b-9d"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/console.log" append="off"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <video>
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </video>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:19:50 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:19:50 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:19:50 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:19:50 compute-1 nova_compute[230518]: </domain>
Oct 02 13:19:50 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Preparing to wait for external event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.958 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.959 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.959 2 DEBUG nova.virt.libvirt.vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:19:42Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.960 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.960 2 DEBUG nova.network.os_vif_util [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.961 2 DEBUG os_vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.965 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap585ce74b-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.966 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap585ce74b-9d, col_values=(('external_ids', {'iface-id': '585ce74b-9d9e-45eb-a324-9ce87a1fcec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8b:f7', 'vm-uuid': '92ef7ede-4ed2-4a81-9849-bbc39c0be573'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-1 NetworkManager[44960]: <info>  [1759411190.9685] manager: (tap585ce74b-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:50 compute-1 nova_compute[230518]: 2025-10-02 13:19:50.977 2 INFO os_vif [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d')
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.061 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.062 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.062 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:04:8b:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.063 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Using config drive
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.090 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:51.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1397213192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2262272927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:19:51 compute-1 nova_compute[230518]: 2025-10-02 13:19:51.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:51 compute-1 podman[314257]: 2025-10-02 13:19:51.801234904 +0000 UTC m=+0.052730555 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true)
Oct 02 13:19:51 compute-1 podman[314256]: 2025-10-02 13:19:51.807132969 +0000 UTC m=+0.059092705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:19:52 compute-1 ceph-mon[80926]: pgmap v3258: 305 pgs: 305 active+clean; 166 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 912 KiB/s wr, 37 op/s
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.334 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Creating config drive at /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.338 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnzz7awj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.370 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.371 2 DEBUG nova.network.neutron [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.401 2 DEBUG oslo_concurrency.lockutils [req-a2bfc55d-2408-4b6c-9c38-fac2f9bd1993 req-f3e98f48-7513-479b-85d5-745ce98e2473 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.476 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbnzz7awj" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.503 2 DEBUG nova.storage.rbd_utils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] rbd image 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.506 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.677 2 DEBUG oslo_concurrency.processutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config 92ef7ede-4ed2-4a81-9849-bbc39c0be573_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.678 2 INFO nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deleting local config drive /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573/disk.config because it was imported into RBD.
Oct 02 13:19:52 compute-1 kernel: tap585ce74b-9d: entered promiscuous mode
Oct 02 13:19:52 compute-1 NetworkManager[44960]: <info>  [1759411192.7335] manager: (tap585ce74b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Oct 02 13:19:52 compute-1 ovn_controller[129257]: 2025-10-02T13:19:52Z|00867|binding|INFO|Claiming lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for this chassis.
Oct 02 13:19:52 compute-1 ovn_controller[129257]: 2025-10-02T13:19:52Z|00868|binding|INFO|585ce74b-9d9e-45eb-a324-9ce87a1fcec0: Claiming fa:16:3e:04:8b:f7 10.100.0.5
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.748 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.749 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 bound to our chassis
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.750 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.761 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a3239-ea74-4579-ad29-dae532e47ac6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.761 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.763 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.763 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d37a2876-ff52-4760-bf6e-2bdad9b9a1b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.764 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[0c4c0689-ae6f-47ad-ace0-9b5a4c7457ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 systemd-udevd[314346]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.774 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[03237844-201f-43d4-89c3-5924c37497fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 systemd-machined[188247]: New machine qemu-98-instance-000000d6.
Oct 02 13:19:52 compute-1 NetworkManager[44960]: <info>  [1759411192.7840] device (tap585ce74b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:19:52 compute-1 NetworkManager[44960]: <info>  [1759411192.7851] device (tap585ce74b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.799 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c2854a-611a-41ab-be4b-e494617974c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 systemd[1]: Started Virtual Machine qemu-98-instance-000000d6.
Oct 02 13:19:52 compute-1 ovn_controller[129257]: 2025-10-02T13:19:52Z|00869|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 ovn-installed in OVS
Oct 02 13:19:52 compute-1 ovn_controller[129257]: 2025-10-02T13:19:52Z|00870|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 up in Southbound
Oct 02 13:19:52 compute-1 nova_compute[230518]: 2025-10-02 13:19:52.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.829 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce53dc4-873f-4d64-bb0e-b51ef57c6f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 systemd-udevd[314351]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:19:52 compute-1 NetworkManager[44960]: <info>  [1759411192.8366] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bba3d7c0-9554-41b0-a883-7a7d3b7ed28f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:52.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.882 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5adb24-4554-4d9c-9074-17c863f180b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.885 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[80086011-175d-42a3-a8d6-d6cb95882af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 NetworkManager[44960]: <info>  [1759411192.9138] device (tap540159ad-f0): carrier: link connected
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.920 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[98671fbf-cffe-49d8-ae94-8d282b79a4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.938 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[40c2c25f-83ac-4f42-a84c-d9db2a3676b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905647, 'reachable_time': 25374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314379, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.952 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d455842e-0aa5-46d6-a51b-3d622fc20717]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 905647, 'tstamp': 905647}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314380, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:52 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:52.968 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a401dac3-10d2-4ef6-9236-f5f599fc7aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905647, 'reachable_time': 25374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314381, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.005 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[05201cd4-d495-4189-8589-dbb402ccd5d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.192 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[26c469f6-2523-4201-aec9-33409d124e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.194 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:53 compute-1 kernel: tap540159ad-f0: entered promiscuous mode
Oct 02 13:19:53 compute-1 NetworkManager[44960]: <info>  [1759411193.1969] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.199 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:19:53 compute-1 ovn_controller[129257]: 2025-10-02T13:19:53Z|00871|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.214 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.215 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[dc42b539-5df8-415d-a926-7266e08a0253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.216 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:19:53 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:19:53.217 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:19:53 compute-1 ceph-mon[80926]: pgmap v3259: 305 pgs: 305 active+clean; 175 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct 02 13:19:53 compute-1 podman[314455]: 2025-10-02 13:19:53.581530066 +0000 UTC m=+0.020475143 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:19:53 compute-1 podman[314455]: 2025-10-02 13:19:53.766122696 +0000 UTC m=+0.205067753 container create a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:19:53 compute-1 systemd[1]: Started libpod-conmon-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope.
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.839 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.8379366, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.839 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Started (Lifecycle Event)
Oct 02 13:19:53 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:19:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a49f52618069d1ad7902e6a20a5811651f7f83eb122ae0dd39c392fa844ae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.897 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.901 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.8386834, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.902 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Paused (Lifecycle Event)
Oct 02 13:19:53 compute-1 podman[314455]: 2025-10-02 13:19:53.903268668 +0000 UTC m=+0.342213745 container init a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:19:53 compute-1 podman[314455]: 2025-10-02 13:19:53.90876535 +0000 UTC m=+0.347710407 container start a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.922 2 DEBUG nova.compute.manager [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG oslo_concurrency.lockutils [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.923 2 DEBUG nova.compute.manager [req-d27e7d56-0c0c-4ad7-b560-e478bc43775b req-245c4663-655e-4dda-b88d-df89df31ac86 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Processing event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.924 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.927 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:19:53 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : New worker (314476) forked
Oct 02 13:19:53 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : Loading success.
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.930 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance spawned successfully.
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.930 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.933 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.936 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411193.9264627, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.936 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Resumed (Lifecycle Event)
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.965 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.968 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.995 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.998 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.998 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.999 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:53 compute-1 nova_compute[230518]: 2025-10-02 13:19:53.999 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.000 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.000 2 DEBUG nova.virt.libvirt.driver [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.098 2 INFO nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 11.75 seconds to spawn the instance on the hypervisor.
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.098 2 DEBUG nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.162 2 INFO nova.compute.manager [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 13.32 seconds to build instance.
Oct 02 13:19:54 compute-1 nova_compute[230518]: 2025-10-02 13:19:54.180 2 DEBUG oslo_concurrency.lockutils [None req-48743baf-74bc-4a5a-81c7-b5be0ce39359 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:19:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:54.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:55 compute-1 nova_compute[230518]: 2025-10-02 13:19:55.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG oslo_concurrency.lockutils [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.075 2 DEBUG nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.076 2 WARNING nova.compute.manager [req-753e65d6-ea6c-4340-b3db-08e422406b23 req-8e56620a-71b6-445d-b30e-2ca9d8732e03 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.
Oct 02 13:19:56 compute-1 ceph-mon[80926]: pgmap v3260: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:19:56 compute-1 nova_compute[230518]: 2025-10-02 13:19:56.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:19:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:19:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:57.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:19:57 compute-1 ceph-mon[80926]: pgmap v3261: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct 02 13:19:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:19:58.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:19:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:19:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:19:59.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:19:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:00 compute-1 ceph-mon[80926]: pgmap v3262: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 02 13:20:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:00.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:00 compute-1 nova_compute[230518]: 2025-10-02 13:20:00.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:00 compute-1 nova_compute[230518]: 2025-10-02 13:20:00.997 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:00 compute-1 nova_compute[230518]: 2025-10-02 13:20:00.997 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.018 2 DEBUG nova.objects.instance [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.095 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.455 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.456 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.456 2 INFO nova.compute.manager [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attaching volume 501f7163-061f-4829-9c05-ac69ebd0ace5 to /dev/vdb
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:01 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.693 2 DEBUG os_brick.utils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.695 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.708 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.708 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8bf121-ffaf-40a2-8eba-c1840d46c596]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.710 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.719 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.719 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3759425c-4463-49ae-a2a6-04db399e7852]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.721 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.730 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.731 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7df5bfee-70e2-45c0-8b6d-88ef4b80cc7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.734 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee90cdc-5816-47f6-bb6f-258031709bbc]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.735 2 DEBUG oslo_concurrency.processutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.779 2 DEBUG oslo_concurrency.processutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "nvme version" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.783 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.784 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG os_brick.initiator.connectors.lightos [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG os_brick.utils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:20:01 compute-1 nova_compute[230518]: 2025-10-02 13:20:01.785 2 DEBUG nova.virt.block_device [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating existing volume attachment record: 36d0d5ec-018e-4c04-a804-bceb43745029 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:20:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:03 compute-1 ceph-mon[80926]: pgmap v3263: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.035 2 DEBUG nova.objects.instance [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.068 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to attach volume 501f7163-061f-4829-9c05-ac69ebd0ace5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.070 2 DEBUG nova.virt.libvirt.guest [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] attach device xml: <disk type="network" device="disk">
Oct 02 13:20:03 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct 02 13:20:03 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   </source>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   <auth username="openstack">
Oct 02 13:20:03 compute-1 nova_compute[230518]:     <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   </auth>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:20:03 compute-1 nova_compute[230518]:   <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct 02 13:20:03 compute-1 nova_compute[230518]: </disk>
Oct 02 13:20:03 compute-1 nova_compute[230518]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 02 13:20:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.623 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.624 2 DEBUG nova.virt.libvirt.driver [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] No VIF found with MAC fa:16:3e:04:8b:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:20:03 compute-1 nova_compute[230518]: 2025-10-02 13:20:03.993 2 DEBUG oslo_concurrency.lockutils [None req-2775ea14-6a25-4684-846d-2f3c61a327ee 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:04 compute-1 ceph-mon[80926]: pgmap v3264: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 77 op/s
Oct 02 13:20:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3036994989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:20:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:20:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:20:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:20:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:20:05 compute-1 ceph-mon[80926]: pgmap v3265: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 698 KiB/s wr, 78 op/s
Oct 02 13:20:05 compute-1 nova_compute[230518]: 2025-10-02 13:20:05.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:06 compute-1 nova_compute[230518]: 2025-10-02 13:20:06.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:07.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:20:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/468153223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:20:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:08.020 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:20:08 compute-1 nova_compute[230518]: 2025-10-02 13:20:08.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:08 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:08.024 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:20:08 compute-1 ceph-mon[80926]: pgmap v3266: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 767 B/s wr, 77 op/s
Oct 02 13:20:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:08.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:09 compute-1 NetworkManager[44960]: <info>  [1759411209.1192] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct 02 13:20:09 compute-1 NetworkManager[44960]: <info>  [1759411209.1218] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Oct 02 13:20:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:09.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:09 compute-1 ovn_controller[129257]: 2025-10-02T13:20:09Z|00872|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.441 2 DEBUG nova.compute.manager [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.442 2 DEBUG nova.compute.manager [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.442 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.443 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:09 compute-1 nova_compute[230518]: 2025-10-02 13:20:09.443 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:20:10 compute-1 ceph-mon[80926]: pgmap v3267: 305 pgs: 305 active+clean; 187 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 682 B/s wr, 48 op/s
Oct 02 13:20:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:10.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:10 compute-1 nova_compute[230518]: 2025-10-02 13:20:10.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.338 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.339 2 DEBUG nova.network.neutron [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.366 2 DEBUG oslo_concurrency.lockutils [req-4ebe019e-fb2f-4887-b30f-4069d5d2f686 req-89d7fbd7-4477-443d-925e-1fe6f2e1e2f1 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:11 compute-1 ovn_controller[129257]: 2025-10-02T13:20:11Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:8b:f7 10.100.0.5
Oct 02 13:20:11 compute-1 ovn_controller[129257]: 2025-10-02T13:20:11Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8b:f7 10.100.0.5
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.570 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.571 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.571 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.572 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.572 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:20:11 compute-1 nova_compute[230518]: 2025-10-02 13:20:11.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:12 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:12.027 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:12 compute-1 ceph-mon[80926]: pgmap v3268: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.5 MiB/s wr, 66 op/s
Oct 02 13:20:12 compute-1 podman[314515]: 2025-10-02 13:20:12.799151987 +0000 UTC m=+0.053930883 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:20:12 compute-1 podman[314514]: 2025-10-02 13:20:12.828061014 +0000 UTC m=+0.083276453 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:20:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:12.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:13 compute-1 sudo[314557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:14 compute-1 sudo[314557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-1 sudo[314557]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-1 sudo[314582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:14 compute-1 sudo[314582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-1 sudo[314582]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-1 ceph-mon[80926]: pgmap v3269: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 110 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Oct 02 13:20:14 compute-1 sudo[314607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:14 compute-1 sudo[314607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-1 sudo[314607]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-1 sudo[314632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:20:14 compute-1 sudo[314632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:14 compute-1 sudo[314632]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:15 compute-1 nova_compute[230518]: 2025-10-02 13:20:15.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.012 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.013 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.037 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.038 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.038 2 DEBUG nova.compute.manager [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.039 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:20:16 compute-1 sudo[314676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:16 compute-1 sudo[314676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-1 sudo[314676]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-1 sudo[314701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:20:16 compute-1 sudo[314701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-1 sudo[314701]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-1 sudo[314726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:16 compute-1 nova_compute[230518]: 2025-10-02 13:20:16.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:16 compute-1 sudo[314726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-1 sudo[314726]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:16 compute-1 ceph-mon[80926]: pgmap v3270: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:16 compute-1 sudo[314751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:20:16 compute-1 sudo[314751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:16.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:17 compute-1 sudo[314751]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:18 compute-1 ceph-mon[80926]: pgmap v3271: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 353 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:20:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:20:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:18.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:19.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:19 compute-1 nova_compute[230518]: 2025-10-02 13:20:19.312 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:20:19 compute-1 nova_compute[230518]: 2025-10-02 13:20:19.312 2 DEBUG nova.network.neutron [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:19 compute-1 nova_compute[230518]: 2025-10-02 13:20:19.333 2 DEBUG oslo_concurrency.lockutils [req-13d554fc-fb31-454d-bf63-1fff90c504f9 req-6f46e902-f832-4c05-a756-836e44cf01b0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:20 compute-1 ceph-mon[80926]: pgmap v3272: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 02 13:20:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:20.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:20 compute-1 nova_compute[230518]: 2025-10-02 13:20:20.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:21 compute-1 ceph-mon[80926]: pgmap v3273: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.778 2 DEBUG nova.compute.manager [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.778 2 DEBUG nova.compute.manager [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:21 compute-1 nova_compute[230518]: 2025-10-02 13:20:21.779 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:20:22 compute-1 nova_compute[230518]: 2025-10-02 13:20:22.661 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:22 compute-1 nova_compute[230518]: 2025-10-02 13:20:22.661 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:22 compute-1 nova_compute[230518]: 2025-10-02 13:20:22.662 2 INFO nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Rebooting instance
Oct 02 13:20:22 compute-1 nova_compute[230518]: 2025-10-02 13:20:22.690 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:22 compute-1 podman[314807]: 2025-10-02 13:20:22.799535279 +0000 UTC m=+0.054377536 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:20:22 compute-1 podman[314808]: 2025-10-02 13:20:22.805247059 +0000 UTC m=+0.060053536 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:20:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:22.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:23 compute-1 nova_compute[230518]: 2025-10-02 13:20:23.323 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:20:23 compute-1 nova_compute[230518]: 2025-10-02 13:20:23.323 2 DEBUG nova.network.neutron [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:23 compute-1 nova_compute[230518]: 2025-10-02 13:20:23.364 2 DEBUG oslo_concurrency.lockutils [req-ba57f243-b931-41b6-ae04-874579885786 req-2b720f21-151d-4c26-ae77-87868eeb254c 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:23 compute-1 nova_compute[230518]: 2025-10-02 13:20:23.365 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:23 compute-1 nova_compute[230518]: 2025-10-02 13:20:23.365 2 DEBUG nova.network.neutron [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:20:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:24 compute-1 ceph-mon[80926]: pgmap v3274: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 677 KiB/s wr, 45 op/s
Oct 02 13:20:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:24.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:25.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:25 compute-1 ceph-mon[80926]: pgmap v3275: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 310 KiB/s rd, 166 KiB/s wr, 38 op/s
Oct 02 13:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.977 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.977 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:25.978 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:25 compute-1 nova_compute[230518]: 2025-10-02 13:20:25.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:26 compute-1 nova_compute[230518]: 2025-10-02 13:20:26.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:26.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:27.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:27 compute-1 ceph-mon[80926]: pgmap v3276: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 57 KiB/s wr, 14 op/s
Oct 02 13:20:27 compute-1 nova_compute[230518]: 2025-10-02 13:20:27.691 2 DEBUG nova.network.neutron [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:27 compute-1 nova_compute[230518]: 2025-10-02 13:20:27.744 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:27 compute-1 nova_compute[230518]: 2025-10-02 13:20:27.746 2 DEBUG nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:20:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:28.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:29.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:29 compute-1 sudo[314845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:20:29 compute-1 sudo[314845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:29 compute-1 sudo[314845]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:29 compute-1 sudo[314870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:20:29 compute-1 sudo[314870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:20:29 compute-1 sudo[314870]: pam_unix(sudo:session): session closed for user root
Oct 02 13:20:30 compute-1 ceph-mon[80926]: pgmap v3277: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 11 op/s
Oct 02 13:20:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:20:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:30 compute-1 nova_compute[230518]: 2025-10-02 13:20:30.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:31.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:31 compute-1 nova_compute[230518]: 2025-10-02 13:20:31.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:32 compute-1 ceph-mon[80926]: pgmap v3278: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 26 KiB/s wr, 12 op/s
Oct 02 13:20:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/89902622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:32.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.099 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:33.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:20:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3264101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.585 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000d6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:20:33 compute-1 ceph-mon[80926]: pgmap v3279: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/366028002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.733 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3948MB free_disk=20.942710876464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.735 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.843 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:20:33 compute-1 nova_compute[230518]: 2025-10-02 13:20:33.885 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:20:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:20:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3983555616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:34 compute-1 nova_compute[230518]: 2025-10-02 13:20:34.344 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:20:34 compute-1 nova_compute[230518]: 2025-10-02 13:20:34.352 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:20:34 compute-1 nova_compute[230518]: 2025-10-02 13:20:34.371 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:20:34 compute-1 nova_compute[230518]: 2025-10-02 13:20:34.402 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:20:34 compute-1 nova_compute[230518]: 2025-10-02 13:20:34.402 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:34.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3264101197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:35 compute-1 nova_compute[230518]: 2025-10-02 13:20:35.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.354134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236354194, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2376, "num_deletes": 254, "total_data_size": 5870745, "memory_usage": 5943984, "flush_reason": "Manual Compaction"}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236374211, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 3812293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76709, "largest_seqno": 79080, "table_properties": {"data_size": 3802420, "index_size": 6302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20243, "raw_average_key_size": 20, "raw_value_size": 3782831, "raw_average_value_size": 3856, "num_data_blocks": 273, "num_entries": 981, "num_filter_entries": 981, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411037, "oldest_key_time": 1759411037, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 20106 microseconds, and 8357 cpu microseconds.
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.374249) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 3812293 bytes OK
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.374267) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377216) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377228) EVENT_LOG_v1 {"time_micros": 1759411236377224, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.377243) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 5860147, prev total WAL file size 5860147, number of live WAL files 2.
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.378449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(3722KB)], [159(9909KB)]
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236378502, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 13959701, "oldest_snapshot_seqno": -1}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 9983 keys, 12022193 bytes, temperature: kUnknown
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236493473, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 12022193, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11958702, "index_size": 37458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263384, "raw_average_key_size": 26, "raw_value_size": 11784959, "raw_average_value_size": 1180, "num_data_blocks": 1420, "num_entries": 9983, "num_filter_entries": 9983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411236, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.493924) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12022193 bytes
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.496727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.1 rd, 104.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.7 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10511, records dropped: 528 output_compression: NoCompression
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.496744) EVENT_LOG_v1 {"time_micros": 1759411236496735, "job": 102, "event": "compaction_finished", "compaction_time_micros": 115242, "compaction_time_cpu_micros": 49159, "output_level": 6, "num_output_files": 1, "total_output_size": 12022193, "num_input_records": 10511, "num_output_records": 9983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236497778, "job": 102, "event": "table_file_deletion", "file_number": 161}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411236499630, "job": 102, "event": "table_file_deletion", "file_number": 159}
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.378353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:20:36.499742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:20:36 compute-1 nova_compute[230518]: 2025-10-02 13:20:36.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3983555616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:36 compute-1 ceph-mon[80926]: pgmap v3280: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 02 13:20:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:36.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:37 compute-1 nova_compute[230518]: 2025-10-02 13:20:37.403 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:37 compute-1 nova_compute[230518]: 2025-10-02 13:20:37.404 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:20:38 compute-1 ceph-mon[80926]: pgmap v3281: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct 02 13:20:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4120276339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:38 compute-1 nova_compute[230518]: 2025-10-02 13:20:38.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:38 compute-1 nova_compute[230518]: 2025-10-02 13:20:38.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/506392747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:20:39 compute-1 ovn_controller[129257]: 2025-10-02T13:20:39Z|00873|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Oct 02 13:20:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:40 compute-1 ceph-mon[80926]: pgmap v3282: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 14 KiB/s wr, 2 op/s
Oct 02 13:20:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:41 compute-1 nova_compute[230518]: 2025-10-02 13:20:41.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:41 compute-1 nova_compute[230518]: 2025-10-02 13:20:41.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:41 compute-1 ceph-mon[80926]: pgmap v3283: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 02 13:20:41 compute-1 nova_compute[230518]: 2025-10-02 13:20:41.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:42 compute-1 kernel: tap585ce74b-9d (unregistering): left promiscuous mode
Oct 02 13:20:42 compute-1 NetworkManager[44960]: <info>  [1759411242.6780] device (tap585ce74b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:20:42 compute-1 ovn_controller[129257]: 2025-10-02T13:20:42Z|00874|binding|INFO|Releasing lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 from this chassis (sb_readonly=0)
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 ovn_controller[129257]: 2025-10-02T13:20:42Z|00875|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 down in Southbound
Oct 02 13:20:42 compute-1 ovn_controller[129257]: 2025-10-02T13:20:42Z|00876|binding|INFO|Removing iface tap585ce74b-9d ovn-installed in OVS
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.710 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e a3caf5f5-413d-4b29-98a6-4d3bfef93aba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.712 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.714 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.716 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[45847498-13f3-42be-bc89-952c89c1e0e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.717 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Oct 02 13:20:42 compute-1 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000d6.scope: Consumed 15.298s CPU time.
Oct 02 13:20:42 compute-1 systemd-machined[188247]: Machine qemu-98-instance-000000d6 terminated.
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : haproxy version is 2.8.14-c23fe91
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [NOTICE]   (314474) : path to executable is /usr/sbin/haproxy
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : Exiting Master process...
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : Exiting Master process...
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [ALERT]    (314474) : Current worker (314476) exited with code 143 (Terminated)
Oct 02 13:20:42 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[314470]: [WARNING]  (314474) : All workers exited. Exiting... (0)
Oct 02 13:20:42 compute-1 systemd[1]: libpod-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope: Deactivated successfully.
Oct 02 13:20:42 compute-1 podman[314964]: 2025-10-02 13:20:42.860441115 +0000 UTC m=+0.044514987 container died a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:20:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8-userdata-shm.mount: Deactivated successfully.
Oct 02 13:20:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-10a49f52618069d1ad7902e6a20a5811651f7f83eb122ae0dd39c392fa844ae3-merged.mount: Deactivated successfully.
Oct 02 13:20:42 compute-1 podman[314964]: 2025-10-02 13:20:42.907523752 +0000 UTC m=+0.091597624 container cleanup a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 systemd[1]: libpod-conmon-a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8.scope: Deactivated successfully.
Oct 02 13:20:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:42.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:42 compute-1 podman[314986]: 2025-10-02 13:20:42.950291344 +0000 UTC m=+0.067907712 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:20:42 compute-1 podman[315016]: 2025-10-02 13:20:42.971180819 +0000 UTC m=+0.040786190 container remove a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.976 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8801d96b-8057-470c-b29d-39e637a4bc5b]: (4, ('Thu Oct  2 01:20:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8)\na4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8\nThu Oct  2 01:20:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (a4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8)\na4ff07259348eba0577ffd4bd93027d8453c16bbcc480e2e2eee59b3e543ebb8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.977 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd067a2-e73e-4693-8799-327538f9da9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.978 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:42 compute-1 kernel: tap540159ad-f0: left promiscuous mode
Oct 02 13:20:42 compute-1 podman[314978]: 2025-10-02 13:20:42.981208004 +0000 UTC m=+0.102246789 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.996 2 DEBUG nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:42 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:42.997 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[be50dc9e-6413-4217-bc2a-3a2559e575e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.997 2 DEBUG oslo_concurrency.lockutils [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.998 2 DEBUG nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.998 2 WARNING nova.compute.manager [req-109f8c58-3285-4a8b-8629-ce089c68da8f req-859f819d-a561-40ef-a48b-ae7c73e4122f 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state reboot_started.
Oct 02 13:20:42 compute-1 nova_compute[230518]: 2025-10-02 13:20:42.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.021 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9f6b38-54b4-49b8-91ae-744f0ab9f25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.022 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bccd195b-872a-42e1-a7e1-ddc0891e8ce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.037 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[810325cb-43af-468f-bce3-916a53a3f74e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 905638, 'reachable_time': 40903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315059, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.041 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.041 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[8536d839-8564-4e04-a2c6-90673ed0431d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.047 2 INFO nova.virt.libvirt.driver [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance shutdown successfully.
Oct 02 13:20:43 compute-1 kernel: tap585ce74b-9d: entered promiscuous mode
Oct 02 13:20:43 compute-1 systemd-udevd[314942]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.0995] manager: (tap585ce74b-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Oct 02 13:20:43 compute-1 ovn_controller[129257]: 2025-10-02T13:20:43Z|00877|binding|INFO|Claiming lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for this chassis.
Oct 02 13:20:43 compute-1 ovn_controller[129257]: 2025-10-02T13:20:43Z|00878|binding|INFO|585ce74b-9d9e-45eb-a324-9ce87a1fcec0: Claiming fa:16:3e:04:8b:f7 10.100.0.5
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e a3caf5f5-413d-4b29-98a6-4d3bfef93aba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.108 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 bound to our chassis
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.109 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.1107] device (tap585ce74b-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.1113] device (tap585ce74b-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:20:43 compute-1 ovn_controller[129257]: 2025-10-02T13:20:43Z|00879|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 ovn-installed in OVS
Oct 02 13:20:43 compute-1 ovn_controller[129257]: 2025-10-02T13:20:43Z|00880|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 up in Southbound
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.127 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[285167b5-10c9-469a-a527-d2d0fb06248c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.128 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap540159ad-f1 in ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.130 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap540159ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.130 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d8709f-f059-41a6-8632-30f78c5dfaa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.131 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4a0098-e8f1-4762-9957-0e4568851343]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.144 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2f505c-6d70-4b6f-8e6a-25afb6a76d91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 systemd-machined[188247]: New machine qemu-99-instance-000000d6.
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.158 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[01cebbf4-087f-4b64-9bf9-c15be447fa7b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 systemd[1]: Started Virtual Machine qemu-99-instance-000000d6.
Oct 02 13:20:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.190 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d0c30615-9cc0-4ad0-bc95-f9cebf49e7b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.1962] manager: (tap540159ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/403)
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.197 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ee46ffed-e5bc-481b-8803-654b2b180464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.233 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c556c03d-2810-4167-b1ab-97cb18d81b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.236 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e0b361-8736-4ba0-b623-1ef9acd1cb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.2650] device (tap540159ad-f0): carrier: link connected
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.270 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c3529c6e-f425-4636-9754-511fadf5ec5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.287 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e00367-35f8-4928-95a2-c8febadb5a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910682, 'reachable_time': 40880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315105, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.302 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[71e3f802-5826-49b2-a80c-7715f496f2bf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe39:9bb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910682, 'tstamp': 910682}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315106, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.317 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[3e555137-b369-400d-9992-3fc4dfa96ab4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap540159ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:39:9b:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910682, 'reachable_time': 40880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315107, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a609b9fb-a41f-4e59-a4f3-5ae0679fbffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.402 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f06b36-78b9-4529-b45e-65454c5f1169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.404 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap540159ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 NetworkManager[44960]: <info>  [1759411243.4066] manager: (tap540159ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct 02 13:20:43 compute-1 kernel: tap540159ad-f0: entered promiscuous mode
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.410 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap540159ad-f0, col_values=(('external_ids', {'iface-id': 'b64b1a3a-1d89-4a71-b9b0-71e964509167'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 ovn_controller[129257]: 2025-10-02T13:20:43Z|00881|binding|INFO|Releasing lport b64b1a3a-1d89-4a71-b9b0-71e964509167 from this chassis (sb_readonly=0)
Oct 02 13:20:43 compute-1 nova_compute[230518]: 2025-10-02 13:20:43.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.423 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.424 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1f136446-d436-44e5-9e90-ff2daf873a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.425 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/540159ad-ffd2-462a-a8b9-e86914ed6249.pid.haproxy
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 540159ad-ffd2-462a-a8b9-e86914ed6249
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:20:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:20:43.425 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'env', 'PROCESS_TAG=haproxy-540159ad-ffd2-462a-a8b9-e86914ed6249', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/540159ad-ffd2-462a-a8b9-e86914ed6249.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:20:43 compute-1 podman[315175]: 2025-10-02 13:20:43.773859747 +0000 UTC m=+0.051203267 container create 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:20:43 compute-1 systemd[1]: Started libpod-conmon-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope.
Oct 02 13:20:43 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:20:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec57861729784fbfedd5a40971eeacc0f72b749ec4f97def4c07dfc1fd8ea29/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:20:43 compute-1 podman[315175]: 2025-10-02 13:20:43.747459839 +0000 UTC m=+0.024803379 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:20:43 compute-1 podman[315175]: 2025-10-02 13:20:43.850323115 +0000 UTC m=+0.127666635 container init 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:20:43 compute-1 podman[315175]: 2025-10-02 13:20:43.85558913 +0000 UTC m=+0.132932670 container start 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0)
Oct 02 13:20:43 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : New worker (315219) forked
Oct 02 13:20:43 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : Loading success.
Oct 02 13:20:43 compute-1 ceph-mon[80926]: pgmap v3284: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 13 KiB/s wr, 2 op/s
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.300 2 DEBUG nova.virt.libvirt.host [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Removed pending event for 92ef7ede-4ed2-4a81-9849-bbc39c0be573 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.300 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411244.2979326, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.301 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Resumed (Lifecycle Event)
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.306 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance running successfully.
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.307 2 INFO nova.virt.libvirt.driver [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance soft rebooted successfully.
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.307 2 DEBUG nova.compute.manager [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.330 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.333 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.361 2 DEBUG oslo_concurrency.lockutils [None req-30cec122-f71c-4d0a-ae3c-d6283abe48b9 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 21.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.363 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] During sync_power_state the instance has a pending task (reboot_started). Skip.
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.363 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411244.2996092, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.363 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Started (Lifecycle Event)
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.382 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:20:44 compute-1 nova_compute[230518]: 2025-10-02 13:20:44.386 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:20:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:44.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.138 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.139 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.139 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.140 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.140 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG oslo_concurrency.lockutils [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.141 2 DEBUG nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.141 2 WARNING nova.compute.manager [req-ae4389a9-45ef-4fec-9bda-1f0a6facaa80 req-2052c292-96e0-4c17-a239-2200a8a03187 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state None.
Oct 02 13:20:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.250 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:20:45 compute-1 nova_compute[230518]: 2025-10-02 13:20:45.251 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:20:46 compute-1 nova_compute[230518]: 2025-10-02 13:20:46.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:46 compute-1 ceph-mon[80926]: pgmap v3285: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 14 KiB/s wr, 4 op/s
Oct 02 13:20:46 compute-1 nova_compute[230518]: 2025-10-02 13:20:46.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:46.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:47 compute-1 nova_compute[230518]: 2025-10-02 13:20:47.060 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:20:47 compute-1 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:20:47 compute-1 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:20:47 compute-1 nova_compute[230518]: 2025-10-02 13:20:47.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:47.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:48 compute-1 ceph-mon[80926]: pgmap v3286: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:48.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:49 compute-1 nova_compute[230518]: 2025-10-02 13:20:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:49 compute-1 nova_compute[230518]: 2025-10-02 13:20:49.076 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:20:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:49.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:49 compute-1 ceph-mon[80926]: pgmap v3287: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 727 KiB/s rd, 14 KiB/s wr, 32 op/s
Oct 02 13:20:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:51 compute-1 nova_compute[230518]: 2025-10-02 13:20:51.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:51 compute-1 nova_compute[230518]: 2025-10-02 13:20:51.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:51 compute-1 ceph-mon[80926]: pgmap v3288: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:20:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:52.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:20:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:20:53 compute-1 podman[315230]: 2025-10-02 13:20:53.819445437 +0000 UTC m=+0.064827555 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:20:53 compute-1 podman[315229]: 2025-10-02 13:20:53.849317074 +0000 UTC m=+0.091345367 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 13:20:54 compute-1 ceph-mon[80926]: pgmap v3289: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Oct 02 13:20:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:20:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:54.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:56 compute-1 nova_compute[230518]: 2025-10-02 13:20:56.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:56 compute-1 ceph-mon[80926]: pgmap v3290: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.7 KiB/s wr, 71 op/s
Oct 02 13:20:56 compute-1 ovn_controller[129257]: 2025-10-02T13:20:56Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8b:f7 10.100.0.5
Oct 02 13:20:56 compute-1 nova_compute[230518]: 2025-10-02 13:20:56.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:20:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:56.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:57.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:58 compute-1 ceph-mon[80926]: pgmap v3291: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 9.8 KiB/s wr, 78 op/s
Oct 02 13:20:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:20:58.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:20:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:20:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:20:59.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:20:59 compute-1 ceph-mon[80926]: pgmap v3292: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 9.8 KiB/s wr, 51 op/s
Oct 02 13:20:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:01 compute-1 nova_compute[230518]: 2025-10-02 13:21:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:01.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:01 compute-1 nova_compute[230518]: 2025-10-02 13:21:01.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:01 compute-1 ceph-mon[80926]: pgmap v3293: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 85 op/s
Oct 02 13:21:02 compute-1 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG nova.compute.manager [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:21:02 compute-1 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG nova.compute.manager [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:21:02 compute-1 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:21:02 compute-1 nova_compute[230518]: 2025-10-02 13:21:02.850 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:21:02 compute-1 nova_compute[230518]: 2025-10-02 13:21:02.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:21:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:03.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:03 compute-1 ceph-mon[80926]: pgmap v3294: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 13 KiB/s wr, 44 op/s
Oct 02 13:21:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:04.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:05.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:06 compute-1 nova_compute[230518]: 2025-10-02 13:21:06.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:06 compute-1 ceph-mon[80926]: pgmap v3295: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Oct 02 13:21:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1005916527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:06 compute-1 nova_compute[230518]: 2025-10-02 13:21:06.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:06 compute-1 nova_compute[230518]: 2025-10-02 13:21:06.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:21:06 compute-1 nova_compute[230518]: 2025-10-02 13:21:06.851 2 DEBUG nova.network.neutron [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:21:06 compute-1 nova_compute[230518]: 2025-10-02 13:21:06.892 2 DEBUG oslo_concurrency.lockutils [req-baa45991-37e7-4299-9651-928e738c8e3b req-bf63cf9e-90a4-4f66-afaf-9ffbd90f4292 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:21:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:07.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:08 compute-1 ceph-mon[80926]: pgmap v3296: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 24 KiB/s wr, 45 op/s
Oct 02 13:21:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:21:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:09.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:21:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:09.375 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:09.377 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:21:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.733 2 DEBUG nova.compute.manager [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG nova.compute.manager [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing instance network info cache due to event network-changed-585ce74b-9d9e-45eb-a324-9ce87a1fcec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:21:09 compute-1 nova_compute[230518]: 2025-10-02 13:21:09.734 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Refreshing network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:21:10 compute-1 ceph-mon[80926]: pgmap v3297: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 14 KiB/s wr, 35 op/s
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.608 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.609 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.622 2 INFO nova.compute.manager [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Detaching volume 501f7163-061f-4829-9c05-ac69ebd0ace5
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.735 2 INFO nova.virt.block_device [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Attempting to driver detach volume 501f7163-061f-4829-9c05-ac69ebd0ace5 from mountpoint /dev/vdb
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.749 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Attempting to detach device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.750 2 DEBUG nova.virt.libvirt.guest [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   </source>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]: </disk>
Oct 02 13:21:10 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.760 2 INFO nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the persistent domain config.
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.761 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.762 2 DEBUG nova.virt.libvirt.guest [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] detach device xml: <disk type="network" device="disk">
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <source protocol="rbd" name="volumes/volume-501f7163-061f-4829-9c05-ac69ebd0ace5">
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.100" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.102" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:     <host name="192.168.122.101" port="6789"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   </source>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <target dev="vdb" bus="virtio"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <serial>501f7163-061f-4829-9c05-ac69ebd0ace5</serial>
Oct 02 13:21:10 compute-1 nova_compute[230518]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Oct 02 13:21:10 compute-1 nova_compute[230518]: </disk>
Oct 02 13:21:10 compute-1 nova_compute[230518]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.816 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updated VIF entry in instance network info cache for port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.827 2 DEBUG nova.network.neutron [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [{"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.842 2 DEBUG oslo_concurrency.lockutils [req-3f6c29bc-7450-49f6-acb5-7c373289f212 req-3bf8c527-a8ee-4a61-b287-7f3051564d62 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-92ef7ede-4ed2-4a81-9849-bbc39c0be573" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.888 2 DEBUG nova.virt.libvirt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Received event <DeviceRemovedEvent: 1759411270.887751, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.890 2 DEBUG nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 02 13:21:10 compute-1 nova_compute[230518]: 2025-10-02 13:21:10.894 2 INFO nova.virt.libvirt.driver [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully detached device vdb from instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573 from the live domain config.
Oct 02 13:21:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:11 compute-1 nova_compute[230518]: 2025-10-02 13:21:11.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:11 compute-1 nova_compute[230518]: 2025-10-02 13:21:11.045 2 DEBUG nova.objects.instance [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'flavor' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:21:11 compute-1 nova_compute[230518]: 2025-10-02 13:21:11.080 2 DEBUG oslo_concurrency.lockutils [None req-79bb7ea4-47e1-4e9c-a093-78e9060e8eaa 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:11.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:11 compute-1 nova_compute[230518]: 2025-10-02 13:21:11.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:12 compute-1 ceph-mon[80926]: pgmap v3298: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 432 KiB/s rd, 15 KiB/s wr, 35 op/s
Oct 02 13:21:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:21:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:21:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:12.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:21:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3526503879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:21:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:13.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.521 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.521 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.522 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.522 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.523 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.525 2 INFO nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Terminating instance
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.527 2 DEBUG nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:21:13 compute-1 kernel: tap585ce74b-9d (unregistering): left promiscuous mode
Oct 02 13:21:13 compute-1 NetworkManager[44960]: <info>  [1759411273.5824] device (tap585ce74b-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-1 ovn_controller[129257]: 2025-10-02T13:21:13Z|00882|binding|INFO|Releasing lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 from this chassis (sb_readonly=0)
Oct 02 13:21:13 compute-1 ovn_controller[129257]: 2025-10-02T13:21:13Z|00883|binding|INFO|Setting lport 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 down in Southbound
Oct 02 13:21:13 compute-1 ovn_controller[129257]: 2025-10-02T13:21:13Z|00884|binding|INFO|Removing iface tap585ce74b-9d ovn-installed in OVS
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.602 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8b:f7 10.100.0.5'], port_security=['fa:16:3e:04:8b:f7 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '92ef7ede-4ed2-4a81-9849-bbc39c0be573', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-540159ad-ffd2-462a-a8b9-e86914ed6249', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ced4d30c525c44cca617c3b9838d21b7', 'neutron:revision_number': '8', 'neutron:security_group_ids': '0c95f94e-20dd-45bd-9644-7e1d8998955e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d33f65d0-f5c3-43e4-a0b6-d26b238c6ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=585ce74b-9d9e-45eb-a324-9ce87a1fcec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.605 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 585ce74b-9d9e-45eb-a324-9ce87a1fcec0 in datapath 540159ad-ffd2-462a-a8b9-e86914ed6249 unbound from our chassis
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.608 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 540159ad-ffd2-462a-a8b9-e86914ed6249, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.610 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdbbbea-b84d-4790-96be-bad14ccfce5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.611 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 namespace which is not needed anymore
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Oct 02 13:21:13 compute-1 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000d6.scope: Consumed 13.678s CPU time.
Oct 02 13:21:13 compute-1 systemd-machined[188247]: Machine qemu-99-instance-000000d6 terminated.
Oct 02 13:21:13 compute-1 podman[315277]: 2025-10-02 13:21:13.669687934 +0000 UTC m=+0.057517806 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:21:13 compute-1 podman[315274]: 2025-10-02 13:21:13.706034974 +0000 UTC m=+0.094350270 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : haproxy version is 2.8.14-c23fe91
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [NOTICE]   (315217) : path to executable is /usr/sbin/haproxy
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : Exiting Master process...
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : Exiting Master process...
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [ALERT]    (315217) : Current worker (315219) exited with code 143 (Terminated)
Oct 02 13:21:13 compute-1 neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249[315210]: [WARNING]  (315217) : All workers exited. Exiting... (0)
Oct 02 13:21:13 compute-1 systemd[1]: libpod-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope: Deactivated successfully.
Oct 02 13:21:13 compute-1 podman[315340]: 2025-10-02 13:21:13.760617756 +0000 UTC m=+0.052584870 container died 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.761 2 INFO nova.virt.libvirt.driver [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Instance destroyed successfully.
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.762 2 DEBUG nova.objects.instance [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lazy-loading 'resources' on Instance uuid 92ef7ede-4ed2-4a81-9849-bbc39c0be573 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.779 2 DEBUG nova.virt.libvirt.vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:19:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestMinimumBasicScenario-server-2037893726',display_name='tempest-TestMinimumBasicScenario-server-2037893726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testminimumbasicscenario-server-2037893726',id=214,image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwVl8G5ieo8D6LRNceGyD0RzVHiNmAhn+oNx9JwxYWBR403mrZfQlZXBcadX/gFJtwpWDcUYsJ9PDNLmwgBCuRs7yyL95+8n31Ih8AeyaGOYLATIt1ABWixcUbVaElI8A==',key_name='tempest-TestMinimumBasicScenario-635426937',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:19:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ced4d30c525c44cca617c3b9838d21b7',ramdisk_id='',reservation_id='r-joxmr5lv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f6be8018-0ea2-42f8-a1d7-8d704069aac9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestMinimumBasicScenario-1527105691',owner_user_name='tempest-TestMinimumBasicScenario-1527105691-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:20:44Z,user_data=None,user_id='74f5186fabfb4fea86d32c8ef1f2e354',uuid=92ef7ede-4ed2-4a81-9849-bbc39c0be573,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.779 2 DEBUG nova.network.os_vif_util [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converting VIF {"id": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "address": "fa:16:3e:04:8b:f7", "network": {"id": "540159ad-ffd2-462a-a8b9-e86914ed6249", "bridge": "br-int", "label": "tempest-TestMinimumBasicScenario-1641642450-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ced4d30c525c44cca617c3b9838d21b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap585ce74b-9d", "ovs_interfaceid": "585ce74b-9d9e-45eb-a324-9ce87a1fcec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.780 2 DEBUG nova.network.os_vif_util [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.781 2 DEBUG os_vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.783 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap585ce74b-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:21:13 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c-userdata-shm.mount: Deactivated successfully.
Oct 02 13:21:13 compute-1 systemd[1]: var-lib-containers-storage-overlay-7ec57861729784fbfedd5a40971eeacc0f72b749ec4f97def4c07dfc1fd8ea29-merged.mount: Deactivated successfully.
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.803 2 INFO os_vif [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:8b:f7,bridge_name='br-int',has_traffic_filtering=True,id=585ce74b-9d9e-45eb-a324-9ce87a1fcec0,network=Network(540159ad-ffd2-462a-a8b9-e86914ed6249),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap585ce74b-9d')
Oct 02 13:21:13 compute-1 podman[315340]: 2025-10-02 13:21:13.809400506 +0000 UTC m=+0.101367640 container cleanup 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 02 13:21:13 compute-1 systemd[1]: libpod-conmon-0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c.scope: Deactivated successfully.
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.867 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG oslo_concurrency.lockutils [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.868 2 DEBUG nova.compute.manager [req-4e21ce79-eecc-4ab5-aa80-e934f0a1639a req-cf053785-998b-4e18-89d5-19cd1da777ca 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-unplugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:21:13 compute-1 podman[315396]: 2025-10-02 13:21:13.885564545 +0000 UTC m=+0.049572866 container remove 0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.893 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[2c97a3f4-a9be-4208-88fe-574d069a5c6a]: (4, ('Thu Oct  2 01:21:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c)\n0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c\nThu Oct  2 01:21:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 (0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c)\n0d1b1001f28188e1e9c21f0052988846b5e700ace56a9ede5e4edb1957dddf5c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.894 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[855755dd-4eb7-46a5-9cb1-5740632a49b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.895 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap540159ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-1 kernel: tap540159ad-f0: left promiscuous mode
Oct 02 13:21:13 compute-1 nova_compute[230518]: 2025-10-02 13:21:13.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.911 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6b71bd91-8cbc-4003-88ac-6d0da21c08fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.949 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[37c933a2-8726-48d1-891d-f799daf63040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.951 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d1095-41e2-4f35-91f2-2d46912bc282]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.967 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[679a3a29-41fe-4794-8cf3-83627857d075]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910674, 'reachable_time': 29141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315414, 'error': None, 'target': 'ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:13 compute-1 systemd[1]: run-netns-ovnmeta\x2d540159ad\x2dffd2\x2d462a\x2da8b9\x2de86914ed6249.mount: Deactivated successfully.
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.974 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-540159ad-ffd2-462a-a8b9-e86914ed6249 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:21:13 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:13.974 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[78b8b3d8-0684-4373-b3d4-df3b1f1b8762]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:21:14 compute-1 ceph-mon[80926]: pgmap v3299: 305 pgs: 305 active+clean; 222 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct 02 13:21:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.789 2 INFO nova.virt.libvirt.driver [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deleting instance files /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573_del
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.790 2 INFO nova.virt.libvirt.driver [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deletion of /var/lib/nova/instances/92ef7ede-4ed2-4a81-9849-bbc39c0be573_del complete
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.848 2 INFO nova.compute.manager [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 1.32 seconds to destroy the instance on the hypervisor.
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.849 2 DEBUG oslo.service.loopingcall [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.849 2 DEBUG nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:21:14 compute-1 nova_compute[230518]: 2025-10-02 13:21:14.850 2 DEBUG nova.network.neutron [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:21:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:15.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:15 compute-1 ceph-mon[80926]: pgmap v3300: 305 pgs: 305 active+clean; 182 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 20 KiB/s wr, 15 op/s
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.948 2 DEBUG nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.948 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG oslo_concurrency.lockutils [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.949 2 DEBUG nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] No waiting events found dispatching network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:21:15 compute-1 nova_compute[230518]: 2025-10-02 13:21:15.950 2 WARNING nova.compute.manager [req-90e88292-beea-4ef0-9cea-a7ace00d221c req-4a3db1ce-b26f-4594-ad57-ac5b23da4d61 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received unexpected event network-vif-plugged-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 for instance with vm_state active and task_state deleting.
Oct 02 13:21:16 compute-1 nova_compute[230518]: 2025-10-02 13:21:16.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:16 compute-1 nova_compute[230518]: 2025-10-02 13:21:16.906 2 DEBUG nova.network.neutron [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:21:16 compute-1 nova_compute[230518]: 2025-10-02 13:21:16.927 2 INFO nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Took 2.08 seconds to deallocate network for instance.
Oct 02 13:21:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:16 compute-1 nova_compute[230518]: 2025-10-02 13:21:16.982 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:16 compute-1 nova_compute[230518]: 2025-10-02 13:21:16.982 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:16.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.020 2 DEBUG nova.compute.manager [req-a2a922d8-6ea6-457b-b10a-f4e14910fb11 req-601cd08a-2389-4c05-be39-be18125786ea 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Received event network-vif-deleted-585ce74b-9d9e-45eb-a324-9ce87a1fcec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.045 2 DEBUG oslo_concurrency.processutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:17.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:17 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:17.379 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2487810029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.467 2 DEBUG oslo_concurrency.processutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.475 2 DEBUG nova.compute.provider_tree [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.494 2 DEBUG nova.scheduler.client.report [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.528 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.558 2 INFO nova.scheduler.client.report [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Deleted allocations for instance 92ef7ede-4ed2-4a81-9849-bbc39c0be573
Oct 02 13:21:17 compute-1 nova_compute[230518]: 2025-10-02 13:21:17.624 2 DEBUG oslo_concurrency.lockutils [None req-9ec6bd61-5248-4d47-9bf6-7a50cf5fd5b3 74f5186fabfb4fea86d32c8ef1f2e354 ced4d30c525c44cca617c3b9838d21b7 - - default default] Lock "92ef7ede-4ed2-4a81-9849-bbc39c0be573" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:17 compute-1 ceph-mon[80926]: pgmap v3301: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 12 KiB/s wr, 39 op/s
Oct 02 13:21:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2487810029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:18 compute-1 nova_compute[230518]: 2025-10-02 13:21:18.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:18.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:19.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:19 compute-1 ceph-mon[80926]: pgmap v3302: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 10 KiB/s wr, 38 op/s
Oct 02 13:21:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e406 e406: 3 total, 3 up, 3 in
Oct 02 13:21:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:21 compute-1 ceph-mon[80926]: osdmap e406: 3 total, 3 up, 3 in
Oct 02 13:21:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:21.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:21 compute-1 nova_compute[230518]: 2025-10-02 13:21:21.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:22 compute-1 ceph-mon[80926]: pgmap v3304: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 66 op/s
Oct 02 13:21:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:23.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:23 compute-1 nova_compute[230518]: 2025-10-02 13:21:23.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:23 compute-1 nova_compute[230518]: 2025-10-02 13:21:23.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:23 compute-1 nova_compute[230518]: 2025-10-02 13:21:23.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:24 compute-1 ceph-mon[80926]: pgmap v3305: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 7.6 KiB/s wr, 66 op/s
Oct 02 13:21:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:24 compute-1 podman[315440]: 2025-10-02 13:21:24.793898826 +0000 UTC m=+0.047879263 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:21:24 compute-1 podman[315439]: 2025-10-02 13:21:24.832133145 +0000 UTC m=+0.085413930 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:21:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:25.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.979 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.979 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:26 compute-1 ceph-mon[80926]: pgmap v3306: 305 pgs: 305 active+clean; 133 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Oct 02 13:21:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 e407: 3 total, 3 up, 3 in
Oct 02 13:21:26 compute-1 nova_compute[230518]: 2025-10-02 13:21:26.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:27.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:27 compute-1 ceph-mon[80926]: pgmap v3307: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 29 op/s
Oct 02 13:21:27 compute-1 ceph-mon[80926]: osdmap e407: 3 total, 3 up, 3 in
Oct 02 13:21:28 compute-1 nova_compute[230518]: 2025-10-02 13:21:28.761 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411273.759324, 92ef7ede-4ed2-4a81-9849-bbc39c0be573 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:21:28 compute-1 nova_compute[230518]: 2025-10-02 13:21:28.761 2 INFO nova.compute.manager [-] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] VM Stopped (Lifecycle Event)
Oct 02 13:21:28 compute-1 nova_compute[230518]: 2025-10-02 13:21:28.782 2 DEBUG nova.compute.manager [None req-805e5f6c-0c16-4340-9fcc-d205d209c69c - - - - - -] [instance: 92ef7ede-4ed2-4a81-9849-bbc39c0be573] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:21:28 compute-1 nova_compute[230518]: 2025-10-02 13:21:28.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:29 compute-1 ceph-mon[80926]: pgmap v3309: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct 02 13:21:30 compute-1 sudo[315478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:30 compute-1 sudo[315478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-1 sudo[315478]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-1 sudo[315503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:21:30 compute-1 sudo[315503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-1 sudo[315503]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-1 sudo[315528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:30 compute-1 sudo[315528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-1 sudo[315528]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:30 compute-1 sudo[315553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:21:30 compute-1 sudo[315553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:30 compute-1 sudo[315553]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-1 ceph-mon[80926]: pgmap v3310: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 716 B/s wr, 9 op/s
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:21:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:21:31 compute-1 nova_compute[230518]: 2025-10-02 13:21:31.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:33.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1089819506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.074 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.075 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1768222646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.512 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.684 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4209MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.685 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.775 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.775 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.814 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.830 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.830 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.848 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.869 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:21:33 compute-1 nova_compute[230518]: 2025-10-02 13:21:33.892 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:21:34 compute-1 ceph-mon[80926]: pgmap v3311: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 716 B/s wr, 8 op/s
Oct 02 13:21:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2728240605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1768222646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:21:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/479049458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:34 compute-1 nova_compute[230518]: 2025-10-02 13:21:34.320 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:21:34 compute-1 nova_compute[230518]: 2025-10-02 13:21:34.326 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:21:34 compute-1 nova_compute[230518]: 2025-10-02 13:21:34.341 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:21:34 compute-1 nova_compute[230518]: 2025-10-02 13:21:34.360 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:21:34 compute-1 nova_compute[230518]: 2025-10-02 13:21:34.360 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:21:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:35.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/479049458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:36 compute-1 ceph-mon[80926]: pgmap v3312: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct 02 13:21:36 compute-1 nova_compute[230518]: 2025-10-02 13:21:36.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:37.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:38 compute-1 ceph-mon[80926]: pgmap v3313: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4131645090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:38 compute-1 nova_compute[230518]: 2025-10-02 13:21:38.361 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:38 compute-1 nova_compute[230518]: 2025-10-02 13:21:38.361 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:21:38 compute-1 sudo[315655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:21:38 compute-1 sudo[315655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-1 sudo[315655]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-1 sudo[315680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:21:38 compute-1 sudo[315680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:21:38 compute-1 sudo[315680]: pam_unix(sudo:session): session closed for user root
Oct 02 13:21:38 compute-1 nova_compute[230518]: 2025-10-02 13:21:38.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:39.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:21:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2314671866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:21:39 compute-1 ceph-mon[80926]: pgmap v3314: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:40 compute-1 nova_compute[230518]: 2025-10-02 13:21:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:40 compute-1 nova_compute[230518]: 2025-10-02 13:21:40.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:41.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:41.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:41 compute-1 ceph-mon[80926]: pgmap v3315: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:41 compute-1 nova_compute[230518]: 2025-10-02 13:21:41.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:43.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:43 compute-1 nova_compute[230518]: 2025-10-02 13:21:43.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:43 compute-1 nova_compute[230518]: 2025-10-02 13:21:43.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:43.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:43 compute-1 podman[315706]: 2025-10-02 13:21:43.797897166 +0000 UTC m=+0.044791376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:21:43 compute-1 ceph-mon[80926]: pgmap v3316: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:43 compute-1 nova_compute[230518]: 2025-10-02 13:21:43.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:43 compute-1 podman[315705]: 2025-10-02 13:21:43.852670444 +0000 UTC m=+0.100770062 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:21:44 compute-1 nova_compute[230518]: 2025-10-02 13:21:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:45.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:45.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:45 compute-1 ceph-mon[80926]: pgmap v3317: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:46 compute-1 nova_compute[230518]: 2025-10-02 13:21:46.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:46 compute-1 nova_compute[230518]: 2025-10-02 13:21:46.063 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:21:46 compute-1 nova_compute[230518]: 2025-10-02 13:21:46.063 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:21:46 compute-1 nova_compute[230518]: 2025-10-02 13:21:46.076 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:21:46 compute-1 nova_compute[230518]: 2025-10-02 13:21:46.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:47 compute-1 nova_compute[230518]: 2025-10-02 13:21:47.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:47.670 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:21:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:47.671 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:21:47 compute-1 nova_compute[230518]: 2025-10-02 13:21:47.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:47 compute-1 ceph-mon[80926]: pgmap v3318: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:48 compute-1 nova_compute[230518]: 2025-10-02 13:21:48.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:49.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:50 compute-1 ceph-mon[80926]: pgmap v3319: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:50 compute-1 nova_compute[230518]: 2025-10-02 13:21:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:50 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:21:50.673 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:21:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:51.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:21:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:51.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:21:51 compute-1 nova_compute[230518]: 2025-10-02 13:21:51.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:52 compute-1 ceph-mon[80926]: pgmap v3320: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:53.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:53.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:53 compute-1 nova_compute[230518]: 2025-10-02 13:21:53.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:54 compute-1 ceph-mon[80926]: pgmap v3321: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:21:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:55.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:55.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:55 compute-1 podman[315747]: 2025-10-02 13:21:55.810255328 +0000 UTC m=+0.065497115 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:21:55 compute-1 podman[315748]: 2025-10-02 13:21:55.850051306 +0000 UTC m=+0.091904963 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:21:56 compute-1 ceph-mon[80926]: pgmap v3322: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:56 compute-1 nova_compute[230518]: 2025-10-02 13:21:56.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:57.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:57 compute-1 nova_compute[230518]: 2025-10-02 13:21:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:21:57 compute-1 nova_compute[230518]: 2025-10-02 13:21:57.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:21:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:57.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:58 compute-1 ceph-mon[80926]: pgmap v3323: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:21:58 compute-1 nova_compute[230518]: 2025-10-02 13:21:58.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:21:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:21:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:21:59.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:21:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:21:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:21:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:21:59.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:21:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:00 compute-1 ceph-mon[80926]: pgmap v3324: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:01.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:01 compute-1 nova_compute[230518]: 2025-10-02 13:22:01.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:02 compute-1 ceph-mon[80926]: pgmap v3325: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:03.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:03.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:03 compute-1 ceph-mon[80926]: pgmap v3326: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:03 compute-1 nova_compute[230518]: 2025-10-02 13:22:03.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:22:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:22:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:22:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:22:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:05.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:05 compute-1 ceph-mon[80926]: pgmap v3327: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:22:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3295503546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:22:06 compute-1 nova_compute[230518]: 2025-10-02 13:22:06.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:07.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:07.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:07 compute-1 ceph-mon[80926]: pgmap v3328: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:08 compute-1 nova_compute[230518]: 2025-10-02 13:22:08.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:09.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:09.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:09 compute-1 ceph-mon[80926]: pgmap v3329: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1722731459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:11.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:11.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:11 compute-1 nova_compute[230518]: 2025-10-02 13:22:11.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:11 compute-1 ceph-mon[80926]: pgmap v3330: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:22:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:13.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:13 compute-1 nova_compute[230518]: 2025-10-02 13:22:13.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:14 compute-1 ceph-mon[80926]: pgmap v3331: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Oct 02 13:22:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:14 compute-1 ovn_controller[129257]: 2025-10-02T13:22:14Z|00885|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct 02 13:22:14 compute-1 podman[315789]: 2025-10-02 13:22:14.807520227 +0000 UTC m=+0.054349015 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent)
Oct 02 13:22:14 compute-1 podman[315788]: 2025-10-02 13:22:14.877125821 +0000 UTC m=+0.119054116 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:22:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:15.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:16 compute-1 ceph-mon[80926]: pgmap v3332: 305 pgs: 305 active+clean; 140 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 612 KiB/s wr, 24 op/s
Oct 02 13:22:16 compute-1 nova_compute[230518]: 2025-10-02 13:22:16.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999981s ======
Oct 02 13:22:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:17.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999981s
Oct 02 13:22:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4179133531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1535284313' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:22:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:17.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:18 compute-1 ceph-mon[80926]: pgmap v3333: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:18 compute-1 nova_compute[230518]: 2025-10-02 13:22:18.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:19.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:19.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:20 compute-1 ceph-mon[80926]: pgmap v3334: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:22:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:21.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:21 compute-1 nova_compute[230518]: 2025-10-02 13:22:21.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:22 compute-1 ceph-mon[80926]: pgmap v3335: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 02 13:22:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:23.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:23.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:23 compute-1 ceph-mon[80926]: pgmap v3336: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:22:23 compute-1 nova_compute[230518]: 2025-10-02 13:22:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:25.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:25.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:25 compute-1 ceph-mon[80926]: pgmap v3337: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct 02 13:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.980 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:25.981 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:26 compute-1 podman[315831]: 2025-10-02 13:22:26.813360655 +0000 UTC m=+0.065489765 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:22:26 compute-1 podman[315830]: 2025-10-02 13:22:26.82083452 +0000 UTC m=+0.078168004 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:22:26 compute-1 nova_compute[230518]: 2025-10-02 13:22:26.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:27 compute-1 nova_compute[230518]: 2025-10-02 13:22:27.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:27 compute-1 nova_compute[230518]: 2025-10-02 13:22:27.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:22:27 compute-1 nova_compute[230518]: 2025-10-02 13:22:27.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:22:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:27.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:27 compute-1 ceph-mon[80926]: pgmap v3338: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct 02 13:22:28 compute-1 nova_compute[230518]: 2025-10-02 13:22:28.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:29.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:29 compute-1 ceph-mon[80926]: pgmap v3339: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:22:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:22:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:31.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:22:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:31 compute-1 nova_compute[230518]: 2025-10-02 13:22:31.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:32 compute-1 ceph-mon[80926]: pgmap v3340: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 74 op/s
Oct 02 13:22:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2038774104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/364948547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:33.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:33.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:33 compute-1 nova_compute[230518]: 2025-10-02 13:22:33.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:34 compute-1 ceph-mon[80926]: pgmap v3341: 305 pgs: 305 active+clean; 176 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 835 KiB/s wr, 77 op/s
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.067 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.098 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.099 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.099 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/520594306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.550 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.711 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.712 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4193MB free_disk=20.95783233642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.712 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.713 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.773 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:22:34 compute-1 nova_compute[230518]: 2025-10-02 13:22:34.788 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:22:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/520594306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:22:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:35.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:22:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:22:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1752492837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:35 compute-1 nova_compute[230518]: 2025-10-02 13:22:35.250 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:22:35 compute-1 nova_compute[230518]: 2025-10-02 13:22:35.255 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:22:35 compute-1 nova_compute[230518]: 2025-10-02 13:22:35.269 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:22:35 compute-1 nova_compute[230518]: 2025-10-02 13:22:35.270 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:22:35 compute-1 nova_compute[230518]: 2025-10-02 13:22:35.271 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:22:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:35.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:36 compute-1 ceph-mon[80926]: pgmap v3342: 305 pgs: 305 active+clean; 192 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Oct 02 13:22:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1752492837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:36 compute-1 nova_compute[230518]: 2025-10-02 13:22:36.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:37.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:37.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:38 compute-1 ceph-mon[80926]: pgmap v3343: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 572 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct 02 13:22:38 compute-1 sudo[315914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:38 compute-1 sudo[315914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-1 sudo[315914]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-1 sudo[315939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:22:38 compute-1 sudo[315939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-1 sudo[315939]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-1 sudo[315964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:38 compute-1 sudo[315964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:38 compute-1 sudo[315964]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:38 compute-1 nova_compute[230518]: 2025-10-02 13:22:38.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:39 compute-1 sudo[315989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:22:39 compute-1 sudo[315989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:39.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:39 compute-1 nova_compute[230518]: 2025-10-02 13:22:39.256 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:39 compute-1 nova_compute[230518]: 2025-10-02 13:22:39.257 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:22:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:39 compute-1 sudo[315989]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:40 compute-1 nova_compute[230518]: 2025-10-02 13:22:40.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:40 compute-1 ceph-mon[80926]: pgmap v3344: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2673423010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:41.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3291989988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:22:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:22:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:41 compute-1 nova_compute[230518]: 2025-10-02 13:22:41.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:42 compute-1 nova_compute[230518]: 2025-10-02 13:22:42.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:42 compute-1 ceph-mon[80926]: pgmap v3345: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:43.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:43 compute-1 nova_compute[230518]: 2025-10-02 13:22:43.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:44 compute-1 ceph-mon[80926]: pgmap v3346: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 02 13:22:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:45 compute-1 nova_compute[230518]: 2025-10-02 13:22:45.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:45 compute-1 nova_compute[230518]: 2025-10-02 13:22:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:45.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:45 compute-1 ceph-mon[80926]: pgmap v3347: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Oct 02 13:22:45 compute-1 podman[316047]: 2025-10-02 13:22:45.859610927 +0000 UTC m=+0.096180597 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:45 compute-1 podman[316046]: 2025-10-02 13:22:45.86797469 +0000 UTC m=+0.107141202 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:22:46 compute-1 sudo[316091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:22:46 compute-1 sudo[316091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:46 compute-1 sudo[316091]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:46 compute-1 sudo[316116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:22:46 compute-1 sudo[316116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:22:46 compute-1 sudo[316116]: pam_unix(sudo:session): session closed for user root
Oct 02 13:22:46 compute-1 nova_compute[230518]: 2025-10-02 13:22:46.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:47 compute-1 nova_compute[230518]: 2025-10-02 13:22:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:47 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:22:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:47.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:48 compute-1 nova_compute[230518]: 2025-10-02 13:22:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:48 compute-1 nova_compute[230518]: 2025-10-02 13:22:48.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:22:48 compute-1 nova_compute[230518]: 2025-10-02 13:22:48.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:22:48 compute-1 ceph-mon[80926]: pgmap v3348: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 97 KiB/s rd, 229 KiB/s wr, 19 op/s
Oct 02 13:22:48 compute-1 nova_compute[230518]: 2025-10-02 13:22:48.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1778986615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:22:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:49 compute-1 nova_compute[230518]: 2025-10-02 13:22:49.504 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:22:50 compute-1 nova_compute[230518]: 2025-10-02 13:22:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:50 compute-1 nova_compute[230518]: 2025-10-02 13:22:50.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:22:50 compute-1 ceph-mon[80926]: pgmap v3349: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct 02 13:22:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:51 compute-1 ceph-mon[80926]: pgmap v3350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 24 op/s
Oct 02 13:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.904 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:22:51 compute-1 nova_compute[230518]: 2025-10-02 13:22:51.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.905 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:22:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:22:51.906 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:22:51 compute-1 nova_compute[230518]: 2025-10-02 13:22:51.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:53.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:53 compute-1 ceph-mon[80926]: pgmap v3351: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 6.2 KiB/s wr, 30 op/s
Oct 02 13:22:53 compute-1 nova_compute[230518]: 2025-10-02 13:22:53.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:55.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:55 compute-1 ceph-mon[80926]: pgmap v3352: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 02 13:22:56 compute-1 nova_compute[230518]: 2025-10-02 13:22:56.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:22:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:57.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:22:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:57 compute-1 podman[316141]: 2025-10-02 13:22:57.80700527 +0000 UTC m=+0.059449965 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3)
Oct 02 13:22:57 compute-1 ceph-mon[80926]: pgmap v3353: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Oct 02 13:22:57 compute-1 podman[316142]: 2025-10-02 13:22:57.835904827 +0000 UTC m=+0.082522320 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 13:22:58 compute-1 nova_compute[230518]: 2025-10-02 13:22:58.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:22:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:22:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:22:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:22:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:22:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:22:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:22:59 compute-1 ceph-mon[80926]: pgmap v3354: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:23:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:01.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:01 compute-1 ceph-mon[80926]: pgmap v3355: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 938 B/s wr, 22 op/s
Oct 02 13:23:01 compute-1 nova_compute[230518]: 2025-10-02 13:23:01.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:03 compute-1 nova_compute[230518]: 2025-10-02 13:23:03.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:04 compute-1 ceph-mon[80926]: pgmap v3356: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 KiB/s rd, 255 B/s wr, 6 op/s
Oct 02 13:23:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:05.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:23:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:23:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:23:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:23:06 compute-1 ceph-mon[80926]: pgmap v3357: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:23:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4069522708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:23:06 compute-1 nova_compute[230518]: 2025-10-02 13:23:06.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:07.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:08 compute-1 ceph-mon[80926]: pgmap v3358: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:08 compute-1 nova_compute[230518]: 2025-10-02 13:23:08.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:09.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:10 compute-1 ceph-mon[80926]: pgmap v3359: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:11.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:11.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:11 compute-1 ceph-mon[80926]: pgmap v3360: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:11 compute-1 nova_compute[230518]: 2025-10-02 13:23:11.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:13.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:14 compute-1 nova_compute[230518]: 2025-10-02 13:23:14.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:14 compute-1 ceph-mon[80926]: pgmap v3361: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:15.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:16 compute-1 ceph-mon[80926]: pgmap v3362: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:16 compute-1 podman[316181]: 2025-10-02 13:23:16.835108434 +0000 UTC m=+0.064011118 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:23:16 compute-1 podman[316180]: 2025-10-02 13:23:16.851983263 +0000 UTC m=+0.089114886 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:23:16 compute-1 nova_compute[230518]: 2025-10-02 13:23:16.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:17.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:18 compute-1 ceph-mon[80926]: pgmap v3363: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:19 compute-1 nova_compute[230518]: 2025-10-02 13:23:19.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:19.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:19.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:20 compute-1 ceph-mon[80926]: pgmap v3364: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:21 compute-1 ceph-mon[80926]: pgmap v3365: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.722727) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401722782, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1838, "num_deletes": 258, "total_data_size": 4363954, "memory_usage": 4432784, "flush_reason": "Manual Compaction"}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401781083, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2870936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79085, "largest_seqno": 80918, "table_properties": {"data_size": 2863262, "index_size": 4552, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15894, "raw_average_key_size": 19, "raw_value_size": 2847854, "raw_average_value_size": 3582, "num_data_blocks": 200, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411236, "oldest_key_time": 1759411236, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 58404 microseconds, and 6326 cpu microseconds.
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.781136) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2870936 bytes OK
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.781159) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.802978) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.803038) EVENT_LOG_v1 {"time_micros": 1759411401803022, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.803071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 4355594, prev total WAL file size 4355594, number of live WAL files 2.
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.805120) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303138' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2803KB)], [162(11MB)]
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401805172, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 14893129, "oldest_snapshot_seqno": -1}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 10243 keys, 14750365 bytes, temperature: kUnknown
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401945000, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 14750365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14682186, "index_size": 41442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 269755, "raw_average_key_size": 26, "raw_value_size": 14501102, "raw_average_value_size": 1415, "num_data_blocks": 1589, "num_entries": 10243, "num_filter_entries": 10243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411401, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:23:21 compute-1 nova_compute[230518]: 2025-10-02 13:23:21.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.945431) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14750365 bytes
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.947296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.3 rd, 105.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.5 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(10.3) write-amplify(5.1) OK, records in: 10778, records dropped: 535 output_compression: NoCompression
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.947315) EVENT_LOG_v1 {"time_micros": 1759411401947306, "job": 104, "event": "compaction_finished", "compaction_time_micros": 140046, "compaction_time_cpu_micros": 48258, "output_level": 6, "num_output_files": 1, "total_output_size": 14750365, "num_input_records": 10778, "num_output_records": 10243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401948448, "job": 104, "event": "table_file_deletion", "file_number": 164}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411401950510, "job": 104, "event": "table_file_deletion", "file_number": 162}
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.805027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:21 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:23:21.950703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:23:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:23.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:23 compute-1 ceph-mon[80926]: pgmap v3366: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:24 compute-1 nova_compute[230518]: 2025-10-02 13:23:24.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:25.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.981 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:23:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:26 compute-1 ceph-mon[80926]: pgmap v3367: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:26 compute-1 nova_compute[230518]: 2025-10-02 13:23:26.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:27.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:28 compute-1 ceph-mon[80926]: pgmap v3368: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:28 compute-1 podman[316225]: 2025-10-02 13:23:28.806057997 +0000 UTC m=+0.051823777 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 02 13:23:28 compute-1 podman[316224]: 2025-10-02 13:23:28.852455452 +0000 UTC m=+0.093000458 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:23:29 compute-1 nova_compute[230518]: 2025-10-02 13:23:29.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:23:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:23:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:29.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:30 compute-1 ceph-mon[80926]: pgmap v3369: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:23:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:31.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:23:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:31.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:31 compute-1 nova_compute[230518]: 2025-10-02 13:23:31.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:32 compute-1 ceph-mon[80926]: pgmap v3370: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:33.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:33.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:33 compute-1 ceph-mon[80926]: pgmap v3371: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1469962813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:34 compute-1 nova_compute[230518]: 2025-10-02 13:23:34.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2695112454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.075 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.076 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:35.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3013848652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.587 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.728 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.729 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4196MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.730 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.810 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.811 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:23:35 compute-1 nova_compute[230518]: 2025-10-02 13:23:35.826 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:23:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:23:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858995251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.237 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.243 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.257 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.258 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.259 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:23:36 compute-1 ceph-mon[80926]: pgmap v3372: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3013848652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:36 compute-1 nova_compute[230518]: 2025-10-02 13:23:36.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3858995251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:37.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:38 compute-1 ceph-mon[80926]: pgmap v3373: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:39 compute-1 nova_compute[230518]: 2025-10-02 13:23:39.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:39 compute-1 nova_compute[230518]: 2025-10-02 13:23:39.259 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:39 compute-1 nova_compute[230518]: 2025-10-02 13:23:39.260 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:23:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:39.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:39 compute-1 ceph-mon[80926]: pgmap v3374: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3676436955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/33145839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:41 compute-1 nova_compute[230518]: 2025-10-02 13:23:41.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:41.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:41 compute-1 nova_compute[230518]: 2025-10-02 13:23:41.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:42 compute-1 ceph-mon[80926]: pgmap v3375: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:43.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:43.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:44 compute-1 nova_compute[230518]: 2025-10-02 13:23:44.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:44 compute-1 nova_compute[230518]: 2025-10-02 13:23:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:44 compute-1 ceph-mon[80926]: pgmap v3376: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1675875322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:23:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:45 compute-1 nova_compute[230518]: 2025-10-02 13:23:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:45.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:45 compute-1 ceph-mon[80926]: pgmap v3377: 305 pgs: 305 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:23:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:45.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:46 compute-1 sudo[316307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:46 compute-1 sudo[316307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:46 compute-1 sudo[316307]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:46 compute-1 nova_compute[230518]: 2025-10-02 13:23:46.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:46 compute-1 sudo[316344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:46 compute-1 sudo[316344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:46 compute-1 sudo[316344]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:46 compute-1 podman[316332]: 2025-10-02 13:23:46.991339706 +0000 UTC m=+0.078870845 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 02 13:23:46 compute-1 podman[316331]: 2025-10-02 13:23:46.991505921 +0000 UTC m=+0.082010983 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:23:47 compute-1 sudo[316400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:47 compute-1 sudo[316400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:47 compute-1 sudo[316400]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:47 compute-1 nova_compute[230518]: 2025-10-02 13:23:47.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:47 compute-1 nova_compute[230518]: 2025-10-02 13:23:47.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:47 compute-1 sudo[316425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:23:47 compute-1 sudo[316425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:23:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:23:47 compute-1 podman[316521]: 2025-10-02 13:23:47.604479579 +0000 UTC m=+0.063221065 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 02 13:23:47 compute-1 podman[316521]: 2025-10-02 13:23:47.699376446 +0000 UTC m=+0.158117912 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 02 13:23:47 compute-1 ceph-mon[80926]: pgmap v3378: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/100086224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:48 compute-1 sudo[316425]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:48 compute-1 sudo[316643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:48 compute-1 sudo[316643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:48 compute-1 sudo[316643]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:48 compute-1 sudo[316668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:23:48 compute-1 sudo[316668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:48 compute-1 sudo[316668]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:48 compute-1 sudo[316693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:23:48 compute-1 sudo[316693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:48 compute-1 sudo[316693]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:48 compute-1 sudo[316718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:23:48 compute-1 sudo[316718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:23:48 compute-1 sudo[316718]: pam_unix(sudo:session): session closed for user root
Oct 02 13:23:49 compute-1 nova_compute[230518]: 2025-10-02 13:23:49.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:49.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3947714417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:23:49 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:49.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:50 compute-1 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:50 compute-1 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:23:50 compute-1 nova_compute[230518]: 2025-10-02 13:23:50.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:23:50 compute-1 nova_compute[230518]: 2025-10-02 13:23:50.069 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:23:50 compute-1 ceph-mon[80926]: pgmap v3379: 305 pgs: 305 active+clean; 140 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 702 KiB/s wr, 14 op/s
Oct 02 13:23:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:23:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:23:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:51 compute-1 nova_compute[230518]: 2025-10-02 13:23:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:52 compute-1 nova_compute[230518]: 2025-10-02 13:23:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:23:52 compute-1 ceph-mon[80926]: pgmap v3380: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 02 13:23:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:23:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:53.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:23:53 compute-1 ceph-mon[80926]: pgmap v3381: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 02 13:23:54 compute-1 nova_compute[230518]: 2025-10-02 13:23:54.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:56 compute-1 ceph-mon[80926]: pgmap v3382: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:23:57 compute-1 nova_compute[230518]: 2025-10-02 13:23:57.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:57.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:58 compute-1 ceph-mon[80926]: pgmap v3383: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 02 13:23:59 compute-1 nova_compute[230518]: 2025-10-02 13:23:59.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:23:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:23:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:23:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:23:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:23:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:23:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:23:59 compute-1 ceph-mon[80926]: pgmap v3384: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:23:59 compute-1 podman[316777]: 2025-10-02 13:23:59.804838625 +0000 UTC m=+0.050050781 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:23:59 compute-1 podman[316776]: 2025-10-02 13:23:59.828675563 +0000 UTC m=+0.077167901 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001)
Oct 02 13:24:00 compute-1 sudo[316818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:24:00 compute-1 sudo[316818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:00 compute-1 sudo[316818]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:00 compute-1 sudo[316843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:24:00 compute-1 sudo[316843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:24:00 compute-1 sudo[316843]: pam_unix(sudo:session): session closed for user root
Oct 02 13:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:24:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:01.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:02 compute-1 nova_compute[230518]: 2025-10-02 13:24:02.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:02 compute-1 ceph-mon[80926]: pgmap v3385: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Oct 02 13:24:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:03.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:04 compute-1 nova_compute[230518]: 2025-10-02 13:24:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:04 compute-1 ceph-mon[80926]: pgmap v3386: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:24:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:05.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:24:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:24:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:24:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:24:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:05 compute-1 ceph-mon[80926]: pgmap v3387: 305 pgs: 305 active+clean; 169 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 395 KiB/s wr, 87 op/s
Oct 02 13:24:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:24:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3979355912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:24:07 compute-1 nova_compute[230518]: 2025-10-02 13:24:07.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:07.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:08 compute-1 ceph-mon[80926]: pgmap v3388: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 71 op/s
Oct 02 13:24:09 compute-1 nova_compute[230518]: 2025-10-02 13:24:09.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:09.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:10 compute-1 ceph-mon[80926]: pgmap v3389: 305 pgs: 305 active+clean; 189 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 250 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.057503) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450057526, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 769, "num_deletes": 251, "total_data_size": 1410515, "memory_usage": 1426800, "flush_reason": "Manual Compaction"}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450063368, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 920236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80923, "largest_seqno": 81687, "table_properties": {"data_size": 916494, "index_size": 1521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8595, "raw_average_key_size": 19, "raw_value_size": 909023, "raw_average_value_size": 2080, "num_data_blocks": 66, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411402, "oldest_key_time": 1759411402, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 5896 microseconds, and 2904 cpu microseconds.
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.063397) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 920236 bytes OK
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.063414) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064353) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064364) EVENT_LOG_v1 {"time_micros": 1759411450064361, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064378) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1406429, prev total WAL file size 1406429, number of live WAL files 2.
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064915) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(898KB)], [165(14MB)]
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450064940, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 15670601, "oldest_snapshot_seqno": -1}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 10163 keys, 13669009 bytes, temperature: kUnknown
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450151245, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 13669009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13602511, "index_size": 40017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 268780, "raw_average_key_size": 26, "raw_value_size": 13423792, "raw_average_value_size": 1320, "num_data_blocks": 1523, "num_entries": 10163, "num_filter_entries": 10163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.151609) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13669009 bytes
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.153210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.3 rd, 158.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.1 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(31.9) write-amplify(14.9) OK, records in: 10680, records dropped: 517 output_compression: NoCompression
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.153257) EVENT_LOG_v1 {"time_micros": 1759411450153221, "job": 106, "event": "compaction_finished", "compaction_time_micros": 86425, "compaction_time_cpu_micros": 29734, "output_level": 6, "num_output_files": 1, "total_output_size": 13669009, "num_input_records": 10680, "num_output_records": 10163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450153629, "job": 106, "event": "table_file_deletion", "file_number": 167}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411450157396, "job": 106, "event": "table_file_deletion", "file_number": 165}
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.064842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:10 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:24:10.157453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:24:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:11.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:11.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:12 compute-1 nova_compute[230518]: 2025-10-02 13:24:12.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:12 compute-1 ceph-mon[80926]: pgmap v3390: 305 pgs: 305 active+clean; 198 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 02 13:24:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:13.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:14 compute-1 nova_compute[230518]: 2025-10-02 13:24:14.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:14 compute-1 ceph-mon[80926]: pgmap v3391: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 02 13:24:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:15.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:15.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e408 e408: 3 total, 3 up, 3 in
Oct 02 13:24:15 compute-1 ceph-mon[80926]: pgmap v3392: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 02 13:24:16 compute-1 ceph-mon[80926]: osdmap e408: 3 total, 3 up, 3 in
Oct 02 13:24:17 compute-1 nova_compute[230518]: 2025-10-02 13:24:17.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:17.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:17 compute-1 podman[316869]: 2025-10-02 13:24:17.808146145 +0000 UTC m=+0.054728058 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:24:17 compute-1 podman[316868]: 2025-10-02 13:24:17.826259523 +0000 UTC m=+0.077415869 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:24:18 compute-1 ceph-mon[80926]: pgmap v3394: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:19 compute-1 nova_compute[230518]: 2025-10-02 13:24:19.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e409 e409: 3 total, 3 up, 3 in
Oct 02 13:24:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:20 compute-1 ceph-mon[80926]: pgmap v3395: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 135 KiB/s wr, 44 op/s
Oct 02 13:24:20 compute-1 ceph-mon[80926]: osdmap e409: 3 total, 3 up, 3 in
Oct 02 13:24:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e410 e410: 3 total, 3 up, 3 in
Oct 02 13:24:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:21 compute-1 ceph-mon[80926]: osdmap e410: 3 total, 3 up, 3 in
Oct 02 13:24:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:21.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:22 compute-1 nova_compute[230518]: 2025-10-02 13:24:22.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:22 compute-1 ceph-mon[80926]: pgmap v3398: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 257 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.5 MiB/s wr, 128 op/s
Oct 02 13:24:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:23.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:23 compute-1 ceph-mon[80926]: pgmap v3399: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 6.5 MiB/s wr, 133 op/s
Oct 02 13:24:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:23.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:24 compute-1 nova_compute[230518]: 2025-10-02 13:24:24.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:25 compute-1 ceph-mon[80926]: pgmap v3400: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 98 op/s
Oct 02 13:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.982 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.983 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:25.983 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 e411: 3 total, 3 up, 3 in
Oct 02 13:24:27 compute-1 nova_compute[230518]: 2025-10-02 13:24:27.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:27 compute-1 ceph-mon[80926]: pgmap v3401: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 112 op/s
Oct 02 13:24:27 compute-1 ceph-mon[80926]: osdmap e411: 3 total, 3 up, 3 in
Oct 02 13:24:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/348882763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:29.068 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:24:29 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:29.069 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:24:29 compute-1 nova_compute[230518]: 2025-10-02 13:24:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:29 compute-1 nova_compute[230518]: 2025-10-02 13:24:29.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:29 compute-1 ceph-mon[80926]: pgmap v3403: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 95 op/s
Oct 02 13:24:30 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:24:30.071 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:24:30 compute-1 podman[316913]: 2025-10-02 13:24:30.794159928 +0000 UTC m=+0.052051533 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 02 13:24:30 compute-1 podman[316914]: 2025-10-02 13:24:30.7999447 +0000 UTC m=+0.053759248 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:24:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:31 compute-1 ceph-mon[80926]: pgmap v3404: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 63 op/s
Oct 02 13:24:32 compute-1 nova_compute[230518]: 2025-10-02 13:24:32.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1390501743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:24:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:33.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:24:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:24:34 compute-1 nova_compute[230518]: 2025-10-02 13:24:34.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:34 compute-1 ceph-mon[80926]: pgmap v3405: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.1 KiB/s wr, 44 op/s
Oct 02 13:24:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/277055431' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:24:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/634568430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3827424755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:35.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:35.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:36 compute-1 ceph-mon[80926]: pgmap v3406: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.3 KiB/s wr, 47 op/s
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.099 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.100 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.101 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:37.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3425334544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.543 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:37.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.711 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.713 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4172MB free_disk=20.94255828857422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.713 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.714 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.857 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.858 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:24:37 compute-1 nova_compute[230518]: 2025-10-02 13:24:37.873 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:24:38 compute-1 ceph-mon[80926]: pgmap v3407: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 616 KiB/s rd, 16 KiB/s wr, 61 op/s
Oct 02 13:24:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3425334544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:24:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/25163837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:38 compute-1 nova_compute[230518]: 2025-10-02 13:24:38.336 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:24:38 compute-1 nova_compute[230518]: 2025-10-02 13:24:38.343 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:24:38 compute-1 nova_compute[230518]: 2025-10-02 13:24:38.359 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:24:38 compute-1 nova_compute[230518]: 2025-10-02 13:24:38.361 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:24:38 compute-1 nova_compute[230518]: 2025-10-02 13:24:38.361 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:24:39 compute-1 nova_compute[230518]: 2025-10-02 13:24:39.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:39.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/25163837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:40 compute-1 nova_compute[230518]: 2025-10-02 13:24:40.363 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:40 compute-1 nova_compute[230518]: 2025-10-02 13:24:40.363 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:24:40 compute-1 ceph-mon[80926]: pgmap v3408: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 513 KiB/s rd, 13 KiB/s wr, 51 op/s
Oct 02 13:24:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1606460489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:41 compute-1 nova_compute[230518]: 2025-10-02 13:24:41.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:41.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2282584196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:24:41 compute-1 ceph-mon[80926]: pgmap v3409: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 99 op/s
Oct 02 13:24:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:42 compute-1 nova_compute[230518]: 2025-10-02 13:24:42.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:43.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:43 compute-1 ceph-mon[80926]: pgmap v3410: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 02 13:24:44 compute-1 nova_compute[230518]: 2025-10-02 13:24:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:44 compute-1 nova_compute[230518]: 2025-10-02 13:24:44.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:45 compute-1 ceph-mon[80926]: pgmap v3411: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:24:46 compute-1 nova_compute[230518]: 2025-10-02 13:24:46.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:47 compute-1 nova_compute[230518]: 2025-10-02 13:24:47.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:47.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:47.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:48 compute-1 ceph-mon[80926]: pgmap v3412: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 02 13:24:48 compute-1 nova_compute[230518]: 2025-10-02 13:24:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:48 compute-1 podman[316999]: 2025-10-02 13:24:48.789951822 +0000 UTC m=+0.043968030 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 02 13:24:48 compute-1 podman[316998]: 2025-10-02 13:24:48.812926723 +0000 UTC m=+0.071236486 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:24:49 compute-1 nova_compute[230518]: 2025-10-02 13:24:49.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:49 compute-1 nova_compute[230518]: 2025-10-02 13:24:49.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:49.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:49.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:50 compute-1 ceph-mon[80926]: pgmap v3413: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1023 B/s wr, 54 op/s
Oct 02 13:24:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:51.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:52 compute-1 nova_compute[230518]: 2025-10-02 13:24:52.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:52 compute-1 nova_compute[230518]: 2025-10-02 13:24:52.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:52 compute-1 nova_compute[230518]: 2025-10-02 13:24:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:24:52 compute-1 nova_compute[230518]: 2025-10-02 13:24:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:24:52 compute-1 nova_compute[230518]: 2025-10-02 13:24:52.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:24:52 compute-1 ceph-mon[80926]: pgmap v3414: 305 pgs: 305 active+clean; 292 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 470 KiB/s wr, 82 op/s
Oct 02 13:24:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:53.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:54 compute-1 nova_compute[230518]: 2025-10-02 13:24:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:54 compute-1 nova_compute[230518]: 2025-10-02 13:24:54.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:24:54 compute-1 nova_compute[230518]: 2025-10-02 13:24:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:54 compute-1 ceph-mon[80926]: pgmap v3415: 305 pgs: 305 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 495 KiB/s wr, 58 op/s
Oct 02 13:24:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:55.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:24:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:24:56 compute-1 ceph-mon[80926]: pgmap v3416: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 531 KiB/s wr, 54 op/s
Oct 02 13:24:57 compute-1 nova_compute[230518]: 2025-10-02 13:24:57.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:57.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:58 compute-1 ceph-mon[80926]: pgmap v3417: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 56 op/s
Oct 02 13:24:59 compute-1 nova_compute[230518]: 2025-10-02 13:24:59.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:24:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:24:59.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:24:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:24:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:24:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:24:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:24:59.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:00 compute-1 ceph-mon[80926]: pgmap v3418: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:25:00 compute-1 sudo[317042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:00 compute-1 sudo[317042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-1 sudo[317042]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-1 sudo[317067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:25:00 compute-1 sudo[317067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-1 sudo[317067]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-1 sudo[317092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:00 compute-1 sudo[317092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:00 compute-1 sudo[317092]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:00 compute-1 sudo[317117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:25:00 compute-1 sudo[317117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:01 compute-1 sudo[317117]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:01 compute-1 podman[317174]: 2025-10-02 13:25:01.3001861 +0000 UTC m=+0.057397531 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:25:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:01 compute-1 podman[317173]: 2025-10-02 13:25:01.333422702 +0000 UTC m=+0.089628432 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:25:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:02 compute-1 nova_compute[230518]: 2025-10-02 13:25:02.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:02 compute-1 ceph-mon[80926]: pgmap v3419: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 543 KiB/s wr, 54 op/s
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:25:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:25:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:03.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:04 compute-1 nova_compute[230518]: 2025-10-02 13:25:04.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:04 compute-1 ceph-mon[80926]: pgmap v3420: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 75 KiB/s wr, 26 op/s
Oct 02 13:25:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:25:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:25:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:25:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:25:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:25:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/674427580' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:25:06 compute-1 ceph-mon[80926]: pgmap v3421: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 240 KiB/s wr, 5 op/s
Oct 02 13:25:07 compute-1 nova_compute[230518]: 2025-10-02 13:25:07.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:08 compute-1 ceph-mon[80926]: pgmap v3422: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 209 KiB/s wr, 7 op/s
Oct 02 13:25:09 compute-1 nova_compute[230518]: 2025-10-02 13:25:09.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:09.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:10 compute-1 ceph-mon[80926]: pgmap v3423: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:25:10 compute-1 sudo[317212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:25:10 compute-1 sudo[317212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:10 compute-1 sudo[317212]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:10 compute-1 sudo[317237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:25:10 compute-1 sudo[317237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:25:10 compute-1 sudo[317237]: pam_unix(sudo:session): session closed for user root
Oct 02 13:25:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:11.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:12 compute-1 nova_compute[230518]: 2025-10-02 13:25:12.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:12 compute-1 ceph-mon[80926]: pgmap v3424: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:25:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e412 e412: 3 total, 3 up, 3 in
Oct 02 13:25:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:13.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:14 compute-1 nova_compute[230518]: 2025-10-02 13:25:14.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:14 compute-1 ceph-mon[80926]: pgmap v3425: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 196 KiB/s wr, 5 op/s
Oct 02 13:25:14 compute-1 ceph-mon[80926]: osdmap e412: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e413 e413: 3 total, 3 up, 3 in
Oct 02 13:25:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:15 compute-1 ceph-mon[80926]: osdmap e413: 3 total, 3 up, 3 in
Oct 02 13:25:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e414 e414: 3 total, 3 up, 3 in
Oct 02 13:25:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:16 compute-1 ceph-mon[80926]: pgmap v3428: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 330 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 46 op/s
Oct 02 13:25:16 compute-1 ceph-mon[80926]: osdmap e414: 3 total, 3 up, 3 in
Oct 02 13:25:17 compute-1 nova_compute[230518]: 2025-10-02 13:25:17.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:17.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:18 compute-1 ceph-mon[80926]: pgmap v3430: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 198 op/s
Oct 02 13:25:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e415 e415: 3 total, 3 up, 3 in
Oct 02 13:25:19 compute-1 nova_compute[230518]: 2025-10-02 13:25:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:19.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:19 compute-1 ceph-mon[80926]: osdmap e415: 3 total, 3 up, 3 in
Oct 02 13:25:19 compute-1 ceph-mon[80926]: pgmap v3432: 305 pgs: 4 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 290 active+clean; 401 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 9.1 MiB/s rd, 16 MiB/s wr, 216 op/s
Oct 02 13:25:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:19 compute-1 podman[317264]: 2025-10-02 13:25:19.799337034 +0000 UTC m=+0.051769965 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent)
Oct 02 13:25:19 compute-1 podman[317263]: 2025-10-02 13:25:19.826383183 +0000 UTC m=+0.083261443 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:25:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:20.077 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:25:20 compute-1 nova_compute[230518]: 2025-10-02 13:25:20.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:20 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:20.078 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:25:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e416 e416: 3 total, 3 up, 3 in
Oct 02 13:25:22 compute-1 ceph-mon[80926]: pgmap v3433: 305 pgs: 305 active+clean; 336 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 14 MiB/s wr, 194 op/s
Oct 02 13:25:22 compute-1 nova_compute[230518]: 2025-10-02 13:25:22.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:23 compute-1 ceph-mon[80926]: osdmap e416: 3 total, 3 up, 3 in
Oct 02 13:25:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:23.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:23.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:24 compute-1 nova_compute[230518]: 2025-10-02 13:25:24.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:24 compute-1 ceph-mon[80926]: pgmap v3435: 305 pgs: 305 active+clean; 300 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 9.0 MiB/s wr, 156 op/s
Oct 02 13:25:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4070755414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:25.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.984 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.985 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:25.985 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:26 compute-1 ceph-mon[80926]: pgmap v3436: 305 pgs: 305 active+clean; 288 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 49 KiB/s rd, 4.1 KiB/s wr, 68 op/s
Oct 02 13:25:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e417 e417: 3 total, 3 up, 3 in
Oct 02 13:25:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 e418: 3 total, 3 up, 3 in
Oct 02 13:25:27 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:25:27.079 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:25:27 compute-1 nova_compute[230518]: 2025-10-02 13:25:27.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:27.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:27 compute-1 ceph-mon[80926]: osdmap e417: 3 total, 3 up, 3 in
Oct 02 13:25:27 compute-1 ceph-mon[80926]: pgmap v3438: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 62 KiB/s rd, 6.4 KiB/s wr, 89 op/s
Oct 02 13:25:27 compute-1 ceph-mon[80926]: osdmap e418: 3 total, 3 up, 3 in
Oct 02 13:25:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:27.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:29 compute-1 nova_compute[230518]: 2025-10-02 13:25:29.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:29 compute-1 ceph-mon[80926]: pgmap v3440: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 55 op/s
Oct 02 13:25:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:31.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:31 compute-1 podman[317309]: 2025-10-02 13:25:31.803311163 +0000 UTC m=+0.056726590 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:25:31 compute-1 podman[317310]: 2025-10-02 13:25:31.812231983 +0000 UTC m=+0.062778250 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:25:32 compute-1 ceph-mon[80926]: pgmap v3441: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 3.4 KiB/s wr, 72 op/s
Oct 02 13:25:32 compute-1 nova_compute[230518]: 2025-10-02 13:25:32.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2833919162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:33.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:34 compute-1 ceph-mon[80926]: pgmap v3442: 305 pgs: 305 active+clean; 159 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 4.6 KiB/s wr, 81 op/s
Oct 02 13:25:34 compute-1 nova_compute[230518]: 2025-10-02 13:25:34.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1206261349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3046713103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:35.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:36 compute-1 ceph-mon[80926]: pgmap v3443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Oct 02 13:25:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 e419: 3 total, 3 up, 3 in
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.933218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536933264, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 254, "total_data_size": 2504437, "memory_usage": 2548280, "flush_reason": "Manual Compaction"}
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536941418, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1085387, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81692, "largest_seqno": 82919, "table_properties": {"data_size": 1080869, "index_size": 2041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11767, "raw_average_key_size": 21, "raw_value_size": 1071177, "raw_average_value_size": 1933, "num_data_blocks": 90, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411451, "oldest_key_time": 1759411451, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 8253 microseconds, and 4995 cpu microseconds.
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.941468) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1085387 bytes OK
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.941486) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942926) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942942) EVENT_LOG_v1 {"time_micros": 1759411536942938, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.942960) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2498487, prev total WAL file size 2498487, number of live WAL files 2.
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.943801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1059KB)], [168(13MB)]
Oct 02 13:25:36 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411536943848, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 14754396, "oldest_snapshot_seqno": -1}
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 10231 keys, 11465759 bytes, temperature: kUnknown
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537016579, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 11465759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402093, "index_size": 36993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 270469, "raw_average_key_size": 26, "raw_value_size": 11225547, "raw_average_value_size": 1097, "num_data_blocks": 1398, "num_entries": 10231, "num_filter_entries": 10231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.016865) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 11465759 bytes
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.019162) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.6 rd, 157.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(24.2) write-amplify(10.6) OK, records in: 10717, records dropped: 486 output_compression: NoCompression
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.019178) EVENT_LOG_v1 {"time_micros": 1759411537019171, "job": 108, "event": "compaction_finished", "compaction_time_micros": 72817, "compaction_time_cpu_micros": 27631, "output_level": 6, "num_output_files": 1, "total_output_size": 11465759, "num_input_records": 10717, "num_output_records": 10231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537019472, "job": 108, "event": "table_file_deletion", "file_number": 170}
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411537021827, "job": 108, "event": "table_file_deletion", "file_number": 168}
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:36.943708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:25:37.021934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:25:37 compute-1 nova_compute[230518]: 2025-10-02 13:25:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:37 compute-1 ceph-mon[80926]: pgmap v3444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:37 compute-1 ceph-mon[80926]: osdmap e419: 3 total, 3 up, 3 in
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.076 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.077 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:25:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:39.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:25:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/881232608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.551 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:39.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:25:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                           Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
                                           Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.749 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.751 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.850 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.851 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:25:39 compute-1 nova_compute[230518]: 2025-10-02 13:25:39.868 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:25:39 compute-1 ceph-mon[80926]: pgmap v3446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:25:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/881232608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:25:40 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2823375801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:40 compute-1 nova_compute[230518]: 2025-10-02 13:25:40.315 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:25:40 compute-1 nova_compute[230518]: 2025-10-02 13:25:40.322 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:25:40 compute-1 nova_compute[230518]: 2025-10-02 13:25:40.340 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:25:40 compute-1 nova_compute[230518]: 2025-10-02 13:25:40.342 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:25:40 compute-1 nova_compute[230518]: 2025-10-02 13:25:40.342 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:25:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2823375801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3860199988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:42 compute-1 ceph-mon[80926]: pgmap v3447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct 02 13:25:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1610177502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:25:42 compute-1 nova_compute[230518]: 2025-10-02 13:25:42.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:42 compute-1 nova_compute[230518]: 2025-10-02 13:25:42.343 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:42 compute-1 nova_compute[230518]: 2025-10-02 13:25:42.344 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:25:43 compute-1 nova_compute[230518]: 2025-10-02 13:25:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:43.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:43.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:44 compute-1 ceph-mon[80926]: pgmap v3448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 KiB/s rd, 409 B/s wr, 4 op/s
Oct 02 13:25:44 compute-1 nova_compute[230518]: 2025-10-02 13:25:44.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:45 compute-1 nova_compute[230518]: 2025-10-02 13:25:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:45.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:45.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:46 compute-1 ceph-mon[80926]: pgmap v3449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:47 compute-1 nova_compute[230518]: 2025-10-02 13:25:47.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:47.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:48 compute-1 nova_compute[230518]: 2025-10-02 13:25:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:48 compute-1 ceph-mon[80926]: pgmap v3450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:49 compute-1 nova_compute[230518]: 2025-10-02 13:25:49.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:49.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:50 compute-1 nova_compute[230518]: 2025-10-02 13:25:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:50 compute-1 ceph-mon[80926]: pgmap v3451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:50 compute-1 podman[317394]: 2025-10-02 13:25:50.797071861 +0000 UTC m=+0.046450618 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:25:50 compute-1 podman[317393]: 2025-10-02 13:25:50.825114331 +0000 UTC m=+0.077558154 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 02 13:25:51 compute-1 nova_compute[230518]: 2025-10-02 13:25:51.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:51.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:51.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:52 compute-1 nova_compute[230518]: 2025-10-02 13:25:52.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:52 compute-1 ceph-mon[80926]: pgmap v3452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:53 compute-1 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:53 compute-1 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:25:53 compute-1 nova_compute[230518]: 2025-10-02 13:25:53.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:25:53 compute-1 nova_compute[230518]: 2025-10-02 13:25:53.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:25:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:53 compute-1 ceph-mon[80926]: pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:54 compute-1 nova_compute[230518]: 2025-10-02 13:25:54.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:55.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:55 compute-1 ceph-mon[80926]: pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:56 compute-1 nova_compute[230518]: 2025-10-02 13:25:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:25:57 compute-1 nova_compute[230518]: 2025-10-02 13:25:57.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:25:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:57.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:25:58 compute-1 ceph-mon[80926]: pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:25:59 compute-1 nova_compute[230518]: 2025-10-02 13:25:59.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:25:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:25:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:25:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:25:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:25:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:25:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:25:59.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:00 compute-1 ceph-mon[80926]: pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:00.368 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:26:00 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:00.369 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:26:00 compute-1 nova_compute[230518]: 2025-10-02 13:26:00.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:01 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:01.372 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:26:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:01 compute-1 ceph-mon[80926]: pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:02 compute-1 nova_compute[230518]: 2025-10-02 13:26:02.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:02 compute-1 podman[317437]: 2025-10-02 13:26:02.797312534 +0000 UTC m=+0.054501590 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:26:02 compute-1 podman[317438]: 2025-10-02 13:26:02.803972613 +0000 UTC m=+0.052600521 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:26:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:03 compute-1 ceph-mon[80926]: pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.053 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.055 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.072 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.079 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 WARNING nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Removable base files: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6 /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4 /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/472c3cad2e339908bc4a8cea12fc22c04fcd93b6
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/dd3a4569add1ef352b7c4d78d5e01667803900b4
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.080 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/bb6d192aed85f84d0f22da0723b257d38ce90e47
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.081 2 DEBUG nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.081 2 INFO nova.virt.libvirt.imagecache [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Oct 02 13:26:04 compute-1 nova_compute[230518]: 2025-10-02 13:26:04.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:05 compute-1 ceph-mon[80926]: pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1524008942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:07 compute-1 nova_compute[230518]: 2025-10-02 13:26:07.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:07.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:08 compute-1 ceph-mon[80926]: pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:09 compute-1 nova_compute[230518]: 2025-10-02 13:26:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:09.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:10 compute-1 ceph-mon[80926]: pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:26:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 83K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1540 writes, 7953 keys, 1540 commit groups, 1.0 writes per commit group, ingest: 16.14 MB, 0.03 MB/s
                                           Interval WAL: 1540 writes, 1540 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.5      1.81              0.31        54    0.033       0      0       0.0       0.0
                                             L6      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.3    119.7    102.4      5.37              1.70        53    0.101    401K    28K       0.0       0.0
                                            Sum      1/0   10.93 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.3     89.6     91.1      7.18              2.01       107    0.067    401K    28K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    117.0    116.3      0.85              0.31        14    0.060     73K   3633       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    119.7    102.4      5.37              1.70        53    0.101    401K    28K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.5      1.80              0.31        53    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.101, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.64 GB write, 0.11 MB/s write, 0.63 GB read, 0.11 MB/s read, 7.2 seconds
                                           Interval compaction: 0.10 GB write, 0.16 MB/s write, 0.10 GB read, 0.17 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 68.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000429 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3910,65.44 MB,21.5273%) FilterBlock(107,1.08 MB,0.3544%) IndexBlock(107,1.81 MB,0.595133%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:26:10 compute-1 sudo[317475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:10 compute-1 sudo[317475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-1 sudo[317475]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-1 sudo[317500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:26:10 compute-1 sudo[317500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-1 sudo[317500]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-1 sudo[317525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:10 compute-1 sudo[317525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:10 compute-1 sudo[317525]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:10 compute-1 sudo[317550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:26:10 compute-1 sudo[317550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:11 compute-1 sudo[317550]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:11.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:11.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:12 compute-1 nova_compute[230518]: 2025-10-02 13:26:12.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:12 compute-1 ceph-mon[80926]: pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:26:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:26:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:13.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:13.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:14 compute-1 nova_compute[230518]: 2025-10-02 13:26:14.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:14 compute-1 ceph-mon[80926]: pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:26:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:15.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:15.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:16 compute-1 ceph-mon[80926]: pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 0 op/s
Oct 02 13:26:17 compute-1 nova_compute[230518]: 2025-10-02 13:26:17.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:17.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:17 compute-1 ceph-mon[80926]: pgmap v3465: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:18 compute-1 sudo[317605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:26:18 compute-1 sudo[317605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:18 compute-1 sudo[317605]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:18 compute-1 sudo[317630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:26:18 compute-1 sudo[317630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:26:18 compute-1 sudo[317630]: pam_unix(sudo:session): session closed for user root
Oct 02 13:26:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:26:19 compute-1 nova_compute[230518]: 2025-10-02 13:26:19.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:19.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:19 compute-1 ceph-mon[80926]: pgmap v3466: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3344741174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:19.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:26:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/119302741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:26:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/119302741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:26:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:21.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:21 compute-1 ceph-mon[80926]: pgmap v3467: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:21 compute-1 podman[317656]: 2025-10-02 13:26:21.805255207 +0000 UTC m=+0.054161790 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Oct 02 13:26:21 compute-1 podman[317655]: 2025-10-02 13:26:21.824780149 +0000 UTC m=+0.076732658 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:26:22 compute-1 nova_compute[230518]: 2025-10-02 13:26:22.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:23.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:23 compute-1 ceph-mon[80926]: pgmap v3468: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/977123340' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:26:24 compute-1 nova_compute[230518]: 2025-10-02 13:26:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:25.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:25.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:25.986 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:26 compute-1 ceph-mon[80926]: pgmap v3469: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Oct 02 13:26:27 compute-1 nova_compute[230518]: 2025-10-02 13:26:27.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:28 compute-1 ceph-mon[80926]: pgmap v3470: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 22 KiB/s wr, 10 op/s
Oct 02 13:26:29 compute-1 nova_compute[230518]: 2025-10-02 13:26:29.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:30 compute-1 ceph-mon[80926]: pgmap v3471: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 341 B/s wr, 6 op/s
Oct 02 13:26:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:32 compute-1 nova_compute[230518]: 2025-10-02 13:26:32.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:32 compute-1 ceph-mon[80926]: pgmap v3472: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 02 13:26:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2893176451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:33.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:33.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:33 compute-1 podman[317698]: 2025-10-02 13:26:33.815172803 +0000 UTC m=+0.058364612 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:26:33 compute-1 podman[317699]: 2025-10-02 13:26:33.840386344 +0000 UTC m=+0.085148572 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 13:26:34 compute-1 nova_compute[230518]: 2025-10-02 13:26:34.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:34 compute-1 ceph-mon[80926]: pgmap v3473: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 19 op/s
Oct 02 13:26:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:35.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:35.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:36 compute-1 ceph-mon[80926]: pgmap v3474: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 13 KiB/s wr, 20 op/s
Oct 02 13:26:37 compute-1 nova_compute[230518]: 2025-10-02 13:26:37.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/859102279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3214726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:37.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:38 compute-1 ceph-mon[80926]: pgmap v3475: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 13 KiB/s wr, 32 op/s
Oct 02 13:26:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:26:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/828647226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:26:39 compute-1 nova_compute[230518]: 2025-10-02 13:26:39.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:39 compute-1 ceph-mon[80926]: pgmap v3476: 305 pgs: 305 active+clean; 121 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 13 KiB/s wr, 26 op/s
Oct 02 13:26:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:39.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3812219334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.082 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.083 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.114 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.115 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.116 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3229219732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.574 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.757 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.758 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4218MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.759 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.759 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:26:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.855 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.856 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.875 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.946 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.947 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.965 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:26:41 compute-1 nova_compute[230518]: 2025-10-02 13:26:41.985 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.009 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:26:42 compute-1 ceph-mon[80926]: pgmap v3477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 730 KiB/s rd, 13 KiB/s wr, 34 op/s
Oct 02 13:26:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1386844792' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3229219732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:26:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/298458676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.576 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.581 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.603 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.605 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:26:42 compute-1 nova_compute[230518]: 2025-10-02 13:26:42.606 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:26:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/298458676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:43.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:43.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:44 compute-1 nova_compute[230518]: 2025-10-02 13:26:44.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:44 compute-1 ceph-mon[80926]: pgmap v3478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 34 op/s
Oct 02 13:26:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:44 compute-1 nova_compute[230518]: 2025-10-02 13:26:44.576 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:45.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:46 compute-1 nova_compute[230518]: 2025-10-02 13:26:46.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:46 compute-1 ceph-mon[80926]: pgmap v3479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:47 compute-1 nova_compute[230518]: 2025-10-02 13:26:47.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:47.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:47 compute-1 ceph-mon[80926]: pgmap v3480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 KiB/s wr, 37 op/s
Oct 02 13:26:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:49 compute-1 nova_compute[230518]: 2025-10-02 13:26:49.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:49 compute-1 nova_compute[230518]: 2025-10-02 13:26:49.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:49.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:49.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:50 compute-1 ceph-mon[80926]: pgmap v3481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 25 op/s
Oct 02 13:26:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 e420: 3 total, 3 up, 3 in
Oct 02 13:26:51 compute-1 nova_compute[230518]: 2025-10-02 13:26:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:51.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:51 compute-1 ceph-mon[80926]: osdmap e420: 3 total, 3 up, 3 in
Oct 02 13:26:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:51.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:52 compute-1 nova_compute[230518]: 2025-10-02 13:26:52.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:52 compute-1 nova_compute[230518]: 2025-10-02 13:26:52.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:52 compute-1 ceph-mon[80926]: pgmap v3483: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 44 op/s
Oct 02 13:26:52 compute-1 podman[317784]: 2025-10-02 13:26:52.819525351 +0000 UTC m=+0.059533149 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 02 13:26:52 compute-1 podman[317783]: 2025-10-02 13:26:52.869057095 +0000 UTC m=+0.120544173 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:26:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:53 compute-1 ceph-mon[80926]: pgmap v3484: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct 02 13:26:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:54 compute-1 nova_compute[230518]: 2025-10-02 13:26:54.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:54 compute-1 nova_compute[230518]: 2025-10-02 13:26:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:26:54 compute-1 nova_compute[230518]: 2025-10-02 13:26:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:26:54 compute-1 nova_compute[230518]: 2025-10-02 13:26:54.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:54 compute-1 nova_compute[230518]: 2025-10-02 13:26:54.779 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:26:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:55.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:56 compute-1 nova_compute[230518]: 2025-10-02 13:26:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:56 compute-1 ceph-mon[80926]: pgmap v3485: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct 02 13:26:57 compute-1 nova_compute[230518]: 2025-10-02 13:26:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:57 compute-1 nova_compute[230518]: 2025-10-02 13:26:57.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:26:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:57.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:26:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2514806046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:26:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:57.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:58 compute-1 nova_compute[230518]: 2025-10-02 13:26:58.250 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:58 compute-1 nova_compute[230518]: 2025-10-02 13:26:58.699 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:26:58 compute-1 nova_compute[230518]: 2025-10-02 13:26:58.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:58.877 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:26:58 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:26:58.879 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:26:58 compute-1 ceph-mon[80926]: pgmap v3486: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:26:59 compute-1 nova_compute[230518]: 2025-10-02 13:26:59.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:26:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:26:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:26:59.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:26:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:26:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:26:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:26:59.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:00 compute-1 ceph-mon[80926]: pgmap v3487: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct 02 13:27:01 compute-1 nova_compute[230518]: 2025-10-02 13:27:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:01 compute-1 nova_compute[230518]: 2025-10-02 13:27:01.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:27:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:01.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:02 compute-1 ceph-mon[80926]: pgmap v3488: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 MiB/s wr, 34 op/s
Oct 02 13:27:02 compute-1 nova_compute[230518]: 2025-10-02 13:27:02.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:27:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3595277946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3595277946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:03.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:27:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:03.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:27:04 compute-1 ceph-mon[80926]: pgmap v3489: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 852 KiB/s wr, 18 op/s
Oct 02 13:27:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3139082809' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:04 compute-1 nova_compute[230518]: 2025-10-02 13:27:04.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:04 compute-1 podman[317830]: 2025-10-02 13:27:04.804537657 +0000 UTC m=+0.053891512 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Oct 02 13:27:04 compute-1 podman[317829]: 2025-10-02 13:27:04.826083502 +0000 UTC m=+0.080564668 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 02 13:27:04 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:04.882 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:27:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:27:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:27:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2957410666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:05.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:06 compute-1 ceph-mon[80926]: pgmap v3490: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 17 op/s
Oct 02 13:27:07 compute-1 nova_compute[230518]: 2025-10-02 13:27:07.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:07.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:07 compute-1 ceph-mon[80926]: pgmap v3491: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:09 compute-1 nova_compute[230518]: 2025-10-02 13:27:09.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:09.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:09.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:09 compute-1 ceph-mon[80926]: pgmap v3492: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 13 KiB/s wr, 17 op/s
Oct 02 13:27:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3209300186' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/288643418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:11.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:11.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 e421: 3 total, 3 up, 3 in
Oct 02 13:27:11 compute-1 ceph-mon[80926]: pgmap v3493: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:12 compute-1 nova_compute[230518]: 2025-10-02 13:27:12.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:12 compute-1 ceph-mon[80926]: osdmap e421: 3 total, 3 up, 3 in
Oct 02 13:27:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:13.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:14 compute-1 nova_compute[230518]: 2025-10-02 13:27:14.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:14 compute-1 ceph-mon[80926]: pgmap v3495: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 111 op/s
Oct 02 13:27:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:15.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:27:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1862209437' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:27:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:16 compute-1 ceph-mon[80926]: pgmap v3496: 305 pgs: 305 active+clean; 152 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 150 op/s
Oct 02 13:27:17 compute-1 nova_compute[230518]: 2025-10-02 13:27:17.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:17 compute-1 ceph-mon[80926]: pgmap v3497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:17.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:18 compute-1 sudo[317870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:18 compute-1 sudo[317870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-1 sudo[317870]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-1 sudo[317895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:27:18 compute-1 sudo[317895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-1 sudo[317895]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-1 sudo[317920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:18 compute-1 sudo[317920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-1 sudo[317920]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:18 compute-1 sudo[317945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:27:18 compute-1 sudo[317945]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:18 compute-1 sudo[317945]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:19 compute-1 nova_compute[230518]: 2025-10-02 13:27:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:19.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:19 compute-1 ceph-mon[80926]: pgmap v3498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 KiB/s wr, 155 op/s
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:21 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:21.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:21.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 e422: 3 total, 3 up, 3 in
Oct 02 13:27:22 compute-1 nova_compute[230518]: 2025-10-02 13:27:22.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:22 compute-1 ceph-mon[80926]: pgmap v3499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 KiB/s wr, 200 op/s
Oct 02 13:27:22 compute-1 ceph-mon[80926]: osdmap e422: 3 total, 3 up, 3 in
Oct 02 13:27:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:23.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:23 compute-1 podman[318002]: 2025-10-02 13:27:23.812130006 +0000 UTC m=+0.054007376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 02 13:27:23 compute-1 podman[318001]: 2025-10-02 13:27:23.830152921 +0000 UTC m=+0.080424894 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct 02 13:27:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:23.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:24 compute-1 nova_compute[230518]: 2025-10-02 13:27:24.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:24 compute-1 ceph-mon[80926]: pgmap v3501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.6 KiB/s wr, 217 op/s
Oct 02 13:27:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:25 compute-1 ceph-mon[80926]: pgmap v3502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 818 B/s wr, 203 op/s
Oct 02 13:27:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:25.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.987 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:27 compute-1 nova_compute[230518]: 2025-10-02 13:27:27.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:27.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:27 compute-1 ceph-mon[80926]: pgmap v3503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:27.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:29 compute-1 nova_compute[230518]: 2025-10-02 13:27:29.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:30 compute-1 ceph-mon[80926]: pgmap v3504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 409 B/s wr, 218 op/s
Oct 02 13:27:31 compute-1 nova_compute[230518]: 2025-10-02 13:27:31.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:31 compute-1 nova_compute[230518]: 2025-10-02 13:27:31.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:27:31 compute-1 nova_compute[230518]: 2025-10-02 13:27:31.086 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.179371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651179410, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1425, "num_deletes": 252, "total_data_size": 3130552, "memory_usage": 3173488, "flush_reason": "Manual Compaction"}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651191112, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 2055343, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82924, "largest_seqno": 84344, "table_properties": {"data_size": 2049288, "index_size": 3321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13355, "raw_average_key_size": 20, "raw_value_size": 2036904, "raw_average_value_size": 3086, "num_data_blocks": 146, "num_entries": 660, "num_filter_entries": 660, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411537, "oldest_key_time": 1759411537, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 11843 microseconds, and 5501 cpu microseconds.
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.191155) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 2055343 bytes OK
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.191232) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192888) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192902) EVENT_LOG_v1 {"time_micros": 1759411651192898, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.192921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 3123841, prev total WAL file size 3123841, number of live WAL files 2.
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193740) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(2007KB)], [171(10MB)]
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651193816, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 13521102, "oldest_snapshot_seqno": -1}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 10368 keys, 11507754 bytes, temperature: kUnknown
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651243454, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 11507754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11443201, "index_size": 37560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 274052, "raw_average_key_size": 26, "raw_value_size": 11264207, "raw_average_value_size": 1086, "num_data_blocks": 1416, "num_entries": 10368, "num_filter_entries": 10368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411651, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.243706) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11507754 bytes
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.244989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.5 rd, 232.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(12.2) write-amplify(5.6) OK, records in: 10891, records dropped: 523 output_compression: NoCompression
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.245004) EVENT_LOG_v1 {"time_micros": 1759411651244997, "job": 110, "event": "compaction_finished", "compaction_time_micros": 49611, "compaction_time_cpu_micros": 28144, "output_level": 6, "num_output_files": 1, "total_output_size": 11507754, "num_input_records": 10891, "num_output_records": 10368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651245724, "job": 110, "event": "table_file_deletion", "file_number": 173}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411651247892, "job": 110, "event": "table_file_deletion", "file_number": 171}
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.193637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.247994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.247998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:27:31.248002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:27:31 compute-1 sudo[318046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:31 compute-1 sudo[318046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:31 compute-1 sudo[318046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:31 compute-1 sudo[318071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:27:31 compute-1 sudo[318071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:31 compute-1 sudo[318071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:32 compute-1 nova_compute[230518]: 2025-10-02 13:27:32.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:32 compute-1 ceph-mon[80926]: pgmap v3505: 305 pgs: 305 active+clean; 131 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 674 KiB/s wr, 123 op/s
Oct 02 13:27:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:32 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:27:34 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:27:34 compute-1 nova_compute[230518]: 2025-10-02 13:27:34.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:35 compute-1 podman[318097]: 2025-10-02 13:27:35.795048916 +0000 UTC m=+0.049947648 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct 02 13:27:35 compute-1 podman[318096]: 2025-10-02 13:27:35.819038088 +0000 UTC m=+0.075282982 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:27:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:35.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:37 compute-1 nova_compute[230518]: 2025-10-02 13:27:37.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:37.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:37.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:38 compute-1 ceph-mon[80926]: pgmap v3507: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Oct 02 13:27:38 compute-1 ceph-mon[80926]: pgmap v3508: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct 02 13:27:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3531806881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2860605944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:39 compute-1 ceph-mon[80926]: pgmap v3509: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:39 compute-1 nova_compute[230518]: 2025-10-02 13:27:39.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:39.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:40 compute-1 sudo[318135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:27:40 compute-1 sudo[318135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:40 compute-1 sudo[318135]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:40 compute-1 sudo[318160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:27:40 compute-1 sudo[318160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:27:40 compute-1 sudo[318160]: pam_unix(sudo:session): session closed for user root
Oct 02 13:27:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:27:41 compute-1 ceph-mon[80926]: pgmap v3510: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 02 13:27:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3155975616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.100 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.101 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:27:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3220820876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3497635110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.555 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3220820876' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3497635110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.723 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4218MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.724 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.789 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.789 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:27:42 compute-1 nova_compute[230518]: 2025-10-02 13:27:42.812 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:27:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:27:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/213874345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.263 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.268 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.282 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.286 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.286 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:27:43 compute-1 nova_compute[230518]: 2025-10-02 13:27:43.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:43.322 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:27:43 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:43.322 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:27:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:43 compute-1 ceph-mon[80926]: pgmap v3511: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.2 MiB/s wr, 19 op/s
Oct 02 13:27:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/147395669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/213874345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/829377715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:27:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:44 compute-1 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:44 compute-1 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:44 compute-1 nova_compute[230518]: 2025-10-02 13:27:44.266 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:27:44 compute-1 nova_compute[230518]: 2025-10-02 13:27:44.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:45 compute-1 ceph-mon[80926]: pgmap v3512: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 955 KiB/s wr, 4 op/s
Oct 02 13:27:47 compute-1 nova_compute[230518]: 2025-10-02 13:27:47.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:47 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:27:47.324 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:27:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:27:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:47.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:27:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:47.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:47 compute-1 ceph-mon[80926]: pgmap v3513: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Oct 02 13:27:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4220012604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:27:48 compute-1 nova_compute[230518]: 2025-10-02 13:27:48.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:49 compute-1 nova_compute[230518]: 2025-10-02 13:27:49.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:49 compute-1 ceph-mon[80926]: pgmap v3514: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:27:51 compute-1 nova_compute[230518]: 2025-10-02 13:27:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:51.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:27:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:27:51 compute-1 ceph-mon[80926]: pgmap v3515: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
Oct 02 13:27:52 compute-1 nova_compute[230518]: 2025-10-02 13:27:52.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:52 compute-1 nova_compute[230518]: 2025-10-02 13:27:52.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:53 compute-1 nova_compute[230518]: 2025-10-02 13:27:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:53.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:54 compute-1 ceph-mon[80926]: pgmap v3516: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 341 B/s wr, 13 op/s
Oct 02 13:27:54 compute-1 nova_compute[230518]: 2025-10-02 13:27:54.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:54 compute-1 podman[318230]: 2025-10-02 13:27:54.837073738 +0000 UTC m=+0.081251460 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:27:54 compute-1 podman[318229]: 2025-10-02 13:27:54.846647669 +0000 UTC m=+0.095495667 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 02 13:27:55 compute-1 nova_compute[230518]: 2025-10-02 13:27:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:55 compute-1 nova_compute[230518]: 2025-10-02 13:27:55.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:27:55 compute-1 nova_compute[230518]: 2025-10-02 13:27:55.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:27:55 compute-1 nova_compute[230518]: 2025-10-02 13:27:55.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:27:55 compute-1 ceph-mon[80926]: pgmap v3517: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 14 KiB/s wr, 49 op/s
Oct 02 13:27:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:55.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:55.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:57 compute-1 nova_compute[230518]: 2025-10-02 13:27:57.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:27:57 compute-1 nova_compute[230518]: 2025-10-02 13:27:57.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:57.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:57.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:58 compute-1 ceph-mon[80926]: pgmap v3518: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:59 compute-1 ceph-mon[80926]: pgmap v3519: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:27:59 compute-1 nova_compute[230518]: 2025-10-02 13:27:59.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:27:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:27:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:27:59.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:27:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:27:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:27:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:27:59.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:01.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:01.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:02 compute-1 ceph-mon[80926]: pgmap v3520: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Oct 02 13:28:02 compute-1 nova_compute[230518]: 2025-10-02 13:28:02.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:03 compute-1 ceph-mon[80926]: pgmap v3521: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 70 op/s
Oct 02 13:28:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:03.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:04 compute-1 nova_compute[230518]: 2025-10-02 13:28:04.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:28:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:28:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:28:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:28:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:05.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:06 compute-1 ceph-mon[80926]: pgmap v3522: 305 pgs: 305 active+clean; 168 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 610 KiB/s wr, 76 op/s
Oct 02 13:28:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:28:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/313675752' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:28:06 compute-1 podman[318274]: 2025-10-02 13:28:06.82646455 +0000 UTC m=+0.072648770 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:28:06 compute-1 podman[318273]: 2025-10-02 13:28:06.833590713 +0000 UTC m=+0.086967578 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:28:07 compute-1 nova_compute[230518]: 2025-10-02 13:28:07.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:07 compute-1 ceph-mon[80926]: pgmap v3523: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Oct 02 13:28:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:07.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-1 nova_compute[230518]: 2025-10-02 13:28:09.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:09.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:09.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:09 compute-1 ceph-mon[80926]: pgmap v3524: 305 pgs: 305 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 366 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 02 13:28:11 compute-1 ceph-mon[80926]: pgmap v3525: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 423 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 02 13:28:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:11.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:11.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:12 compute-1 nova_compute[230518]: 2025-10-02 13:28:12.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:13.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:13 compute-1 ceph-mon[80926]: pgmap v3526: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 424 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Oct 02 13:28:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:14 compute-1 nova_compute[230518]: 2025-10-02 13:28:14.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 e423: 3 total, 3 up, 3 in
Oct 02 13:28:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:15.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:15 compute-1 ceph-mon[80926]: osdmap e423: 3 total, 3 up, 3 in
Oct 02 13:28:15 compute-1 ceph-mon[80926]: pgmap v3528: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 442 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Oct 02 13:28:17 compute-1 nova_compute[230518]: 2025-10-02 13:28:17.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:17 compute-1 ceph-mon[80926]: pgmap v3529: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:17.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:28:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:28:19 compute-1 ceph-mon[80926]: pgmap v3530: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 79 KiB/s rd, 78 KiB/s wr, 20 op/s
Oct 02 13:28:19 compute-1 nova_compute[230518]: 2025-10-02 13:28:19.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:19.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:21.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:22 compute-1 ceph-mon[80926]: pgmap v3531: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 19 op/s
Oct 02 13:28:22 compute-1 nova_compute[230518]: 2025-10-02 13:28:22.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.104 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.105 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.119 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:28:23 compute-1 ceph-mon[80926]: pgmap v3532: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 19 KiB/s wr, 20 op/s
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.205 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.206 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.213 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.214 2 INFO nova.compute.claims [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.294 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1513027977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.710 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.717 2 DEBUG nova.compute.provider_tree [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.740 2 DEBUG nova.scheduler.client.report [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.783 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.784 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.830 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.833 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.833 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.854 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:28:23 compute-1 nova_compute[230518]: 2025-10-02 13:28:23.913 2 INFO nova.virt.block_device [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Booting with volume snapshot b50be4d5-612a-4434-8eb6-55d27bed7a4d at /dev/vda
Oct 02 13:28:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:24 compute-1 nova_compute[230518]: 2025-10-02 13:28:24.075 2 DEBUG nova.policy [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:28:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1513027977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:24 compute-1 nova_compute[230518]: 2025-10-02 13:28:24.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.000 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Successfully created port: 650cec0d-3a37-4324-87bb-85f638f2c4fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:28:25 compute-1 ceph-mon[80926]: pgmap v3533: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 5.8 KiB/s wr, 17 op/s
Oct 02 13:28:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:25.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.670 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Successfully updated port: 650cec0d-3a37-4324-87bb-85f638f2c4fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.697 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.698 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.698 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.790 2 DEBUG nova.compute.manager [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.791 2 DEBUG nova.compute.manager [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing instance network info cache due to event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:28:25 compute-1 nova_compute[230518]: 2025-10-02 13:28:25.791 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:28:25 compute-1 podman[318334]: 2025-10-02 13:28:25.805671524 +0000 UTC m=+0.057820245 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:28:25 compute-1 podman[318333]: 2025-10-02 13:28:25.82852117 +0000 UTC m=+0.083848231 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 13:28:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:25.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.988 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:25.989 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:26 compute-1 nova_compute[230518]: 2025-10-02 13:28:26.062 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:28:27 compute-1 nova_compute[230518]: 2025-10-02 13:28:27.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:27.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:27.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:28 compute-1 ceph-mon[80926]: pgmap v3534: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 5.3 KiB/s wr, 21 op/s
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.176 2 DEBUG os_brick.utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.178 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.193 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.194 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[7a3034c3-6b7b-40ed-a4d5-3feae6696d0a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.195 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.207 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.207 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[018ea3d8-8f72-411c-9439-9ff5be3c9d0b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.209 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.221 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.222 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[5c748a54-011a-4816-9c10-2a680dc79a4f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.224 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8bb748-0142-43d7-bf89-cea4604d87da]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.224 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.266 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.268 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.initiator.connectors.lightos [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG os_brick.utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:28:28 compute-1 nova_compute[230518]: 2025-10-02 13:28:28.269 2 DEBUG nova.virt.block_device [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating existing volume attachment record: 4f9bf0c2-6ea0-491d-b924-500ff3decac0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.036 2 DEBUG nova.network.neutron [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:28:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:28:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295047483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.075 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance network_info: |[{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.076 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:28:29 compute-1 ceph-mon[80926]: pgmap v3535: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 3.6 KiB/s wr, 13 op/s
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.378 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.380 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.381 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating image(s)
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.381 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.381 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Ensure instance console log exists: /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.382 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.384 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start _get_guest_xml network_info=[{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:28:13Z,direct_url=<?>,disk_format='qcow2',id=e0399c4c-8352-497f-b361-45e672712e68,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-566470057',owner='18799a1c93354809911705bb424e673f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:28:14Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': True, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'attached_at': '', 'detached_at': '', 'volume_id': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8', 'serial': '4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8'}, 'boot_index': 0, 'attachment_id': '4f9bf0c2-6ea0-491d-b924-500ff3decac0', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.390 2 WARNING nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.394 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.395 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.397 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.398 2 DEBUG nova.virt.libvirt.host [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.399 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-10-02T13:28:13Z,direct_url=<?>,disk_format='qcow2',id=e0399c4c-8352-497f-b361-45e672712e68,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-566470057',owner='18799a1c93354809911705bb424e673f',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-10-02T13:28:14Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.400 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.401 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.402 2 DEBUG nova.virt.hardware [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.439 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.443 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:29.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:28:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/521275966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.864 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.888 2 DEBUG nova.virt.libvirt.vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:28:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building')
 vif={"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.889 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.890 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.891 2 DEBUG nova.objects.instance [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.913 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <uuid>4fe12372-ed4b-40ab-9cf2-dcf304f21c31</uuid>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <name>instance-000000dd</name>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-426050554</nova:name>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:28:29</nova:creationTime>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:root type="image" uuid="e0399c4c-8352-497f-b361-45e672712e68"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <nova:port uuid="650cec0d-3a37-4324-87bb-85f638f2c4fd">
Oct 02 13:28:29 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <system>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="serial">4fe12372-ed4b-40ab-9cf2-dcf304f21c31</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="uuid">4fe12372-ed4b-40ab-9cf2-dcf304f21c31</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </system>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <os>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </os>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <features>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </features>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config">
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </source>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8">
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </source>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:28:29 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <serial>4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8</serial>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:65:62:3b"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <target dev="tap650cec0d-3a"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/console.log" append="off"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <video>
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </video>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <input type="keyboard" bus="usb"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:28:29 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:28:29 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:28:29 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:28:29 compute-1 nova_compute[230518]: </domain>
Oct 02 13:28:29 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.914 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Preparing to wait for external event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.914 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.915 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.915 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.916 2 DEBUG nova.virt.libvirt.vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:28:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='
building') vif={"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.916 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG nova.network.os_vif_util [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG os_vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap650cec0d-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.921 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap650cec0d-3a, col_values=(('external_ids', {'iface-id': '650cec0d-3a37-4324-87bb-85f638f2c4fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:62:3b', 'vm-uuid': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-1 NetworkManager[44960]: <info>  [1759411709.9238] manager: (tap650cec0d-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:29 compute-1 nova_compute[230518]: 2025-10-02 13:28:29.931 2 INFO os_vif [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a')
Oct 02 13:28:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2295047483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/521275966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:28:30 compute-1 nova_compute[230518]: 2025-10-02 13:28:30.266 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:28:30 compute-1 nova_compute[230518]: 2025-10-02 13:28:30.267 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:28:30 compute-1 nova_compute[230518]: 2025-10-02 13:28:30.267 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:65:62:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:28:30 compute-1 nova_compute[230518]: 2025-10-02 13:28:30.267 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Using config drive
Oct 02 13:28:30 compute-1 nova_compute[230518]: 2025-10-02 13:28:30.294 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.188 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Creating config drive at /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.195 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgsnrxkz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:31 compute-1 ceph-mon[80926]: pgmap v3536: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 4.0 KiB/s wr, 17 op/s
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.334 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphgsnrxkz" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.370 2 DEBUG nova.storage.rbd_utils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.373 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.539 2 DEBUG oslo_concurrency.processutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config 4fe12372-ed4b-40ab-9cf2-dcf304f21c31_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.540 2 INFO nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting local config drive /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31/disk.config because it was imported into RBD.
Oct 02 13:28:31 compute-1 kernel: tap650cec0d-3a: entered promiscuous mode
Oct 02 13:28:31 compute-1 ovn_controller[129257]: 2025-10-02T13:28:31Z|00886|binding|INFO|Claiming lport 650cec0d-3a37-4324-87bb-85f638f2c4fd for this chassis.
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.5908] manager: (tap650cec0d-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_controller[129257]: 2025-10-02T13:28:31Z|00887|binding|INFO|650cec0d-3a37-4324-87bb-85f638f2c4fd: Claiming fa:16:3e:65:62:3b 10.100.0.10
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.6048] manager: (patch-br-int-to-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.6057] manager: (patch-provnet-99fca131-6af0-44e9-8efb-ce2b2bcac45a-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.608 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:62:3b 10.100.0.10'], port_security=['fa:16:3e:65:62:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ae0586a1-cfba-4e03-bfd2-a14f300878bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=650cec0d-3a37-4324-87bb-85f638f2c4fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.609 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 650cec0d-3a37-4324-87bb-85f638f2c4fd in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.611 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:28:31 compute-1 systemd-udevd[318501]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.621 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[19b346d0-d0e2-4c26-a0f0-06967b9068e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.622 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:28:31 compute-1 systemd-machined[188247]: New machine qemu-100-instance-000000dd.
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.625 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.625 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[8e85f8c0-3599-4c6f-a461-65247e0447ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.626 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb8bc9-d9a8-46f6-81d3-ea5476d2c977]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.637 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[eed41117-9de7-4e14-9721-52c1461581f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.6430] device (tap650cec0d-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.6455] device (tap650cec0d-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:28:31 compute-1 systemd[1]: Started Virtual Machine qemu-100-instance-000000dd.
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.665 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[76231396-6db0-40c6-a63f-64d16ac4bcd7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:31.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.719 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b4dfdc-6eb1-4615-9522-b9ded794efe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.728 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1b73398f-be50-44bc-91a7-e24b864d7e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.7294] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/409)
Oct 02 13:28:31 compute-1 systemd-udevd[318505]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_controller[129257]: 2025-10-02T13:28:31Z|00888|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd ovn-installed in OVS
Oct 02 13:28:31 compute-1 ovn_controller[129257]: 2025-10-02T13:28:31Z|00889|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd up in Southbound
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.762 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[c06c31ca-791d-4e0d-8891-99b92f28e625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.765 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a4f27-3c54-4251-beea-4321ac38bbf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.7835] device (tap858f2b6f-80): carrier: link connected
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.787 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2923f6-8d58-4eda-9f3c-cc516c00f828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.803 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[b55b75ff-2234-4cba-9218-938bc06ae8ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957534, 'reachable_time': 31910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318533, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.816 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fde3a7-d55e-4bcb-8af8-ce21b55e34e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 957534, 'tstamp': 957534}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318534, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.835 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[d25f889f-e49e-4fe3-a7ca-af2facc2223c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 267], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957534, 'reachable_time': 31910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318535, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.858 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[84a51b6c-1975-4880-83d8-af4033149128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.909 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a45279-1b22-4995-bcb7-f790ed175b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.910 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.911 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 NetworkManager[44960]: <info>  [1759411711.9136] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Oct 02 13:28:31 compute-1 kernel: tap858f2b6f-80: entered promiscuous mode
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.916 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 ovn_controller[129257]: 2025-10-02T13:28:31Z|00890|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct 02 13:28:31 compute-1 nova_compute[230518]: 2025-10-02 13:28:31.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:31.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.933 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.934 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ef7d76-877a-4dcc-a7ba-9f1b004ce7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.935 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:28:31 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:28:31.936 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.046 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated VIF entry in instance network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.047 2 DEBUG nova.network.neutron [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.064 2 DEBUG oslo_concurrency.lockutils [req-b2fb6372-3546-4666-9a8c-206dd1ded822 req-eeb7a9d5-2414-406e-9816-019c358511e0 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.241 2 DEBUG nova.compute.manager [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG oslo_concurrency.lockutils [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.242 2 DEBUG nova.compute.manager [req-2132d589-3a81-4d07-9cdb-07242d73117f req-196c82d5-fc2b-4848-bf63-1eb765f8ddaf 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Processing event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:32 compute-1 podman[318601]: 2025-10-02 13:28:32.295594204 +0000 UTC m=+0.048912796 container create b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:28:32 compute-1 systemd[1]: Started libpod-conmon-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope.
Oct 02 13:28:32 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:28:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7018b0b22f5f7d1f2d4dd3aed7c70f935e42ebe9c211ba7bf0b7ea0fc5157f9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:28:32 compute-1 podman[318601]: 2025-10-02 13:28:32.26835003 +0000 UTC m=+0.021668642 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:28:32 compute-1 podman[318601]: 2025-10-02 13:28:32.37454276 +0000 UTC m=+0.127861372 container init b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 02 13:28:32 compute-1 podman[318601]: 2025-10-02 13:28:32.379732343 +0000 UTC m=+0.133050935 container start b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:28:32 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : New worker (318630) forked
Oct 02 13:28:32 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : Loading success.
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.768 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.767411, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.769 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Started (Lifecycle Event)
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.773 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.778 2 DEBUG nova.virt.libvirt.driver [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.784 2 INFO nova.virt.libvirt.driver [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance spawned successfully.
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.785 2 INFO nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 3.41 seconds to spawn the instance on the hypervisor.
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.786 2 DEBUG nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.800 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.803 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.826 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.827 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.7676258, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.827 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Paused (Lifecycle Event)
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.848 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.851 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411712.7781487, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.851 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Resumed (Lifecycle Event)
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.859 2 INFO nova.compute.manager [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 9.69 seconds to build instance.
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.864 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.866 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:28:32 compute-1 nova_compute[230518]: 2025-10-02 13:28:32.872 2 DEBUG oslo_concurrency.lockutils [None req-0368c98e-fa2f-42a7-aef1-5bdbf69e2d5d 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:33.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:33.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:34 compute-1 ceph-mon[80926]: pgmap v3537: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.6 KiB/s rd, 16 KiB/s wr, 13 op/s
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.456 2 DEBUG oslo_concurrency.lockutils [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.457 2 DEBUG nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.457 2 WARNING nova.compute.manager [req-5f426605-ef1e-4431-bf78-99960b802289 req-e0374b05-8be5-453d-a534-1aa26581a560 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received unexpected event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with vm_state active and task_state None.
Oct 02 13:28:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:34 compute-1 nova_compute[230518]: 2025-10-02 13:28:34.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:35 compute-1 ceph-mon[80926]: pgmap v3538: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 53 op/s
Oct 02 13:28:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:35.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:36 compute-1 nova_compute[230518]: 2025-10-02 13:28:36.696 2 DEBUG nova.compute.manager [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:28:36 compute-1 nova_compute[230518]: 2025-10-02 13:28:36.698 2 DEBUG nova.compute.manager [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing instance network info cache due to event network-changed-650cec0d-3a37-4324-87bb-85f638f2c4fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:28:36 compute-1 nova_compute[230518]: 2025-10-02 13:28:36.698 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:28:36 compute-1 nova_compute[230518]: 2025-10-02 13:28:36.699 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:28:36 compute-1 nova_compute[230518]: 2025-10-02 13:28:36.700 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Refreshing network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:28:37 compute-1 nova_compute[230518]: 2025-10-02 13:28:37.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:37.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:37 compute-1 podman[318640]: 2025-10-02 13:28:37.818095788 +0000 UTC m=+0.060173058 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:28:37 compute-1 podman[318639]: 2025-10-02 13:28:37.855535563 +0000 UTC m=+0.087496556 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:28:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:37.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:37 compute-1 ceph-mon[80926]: pgmap v3539: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 88 op/s
Oct 02 13:28:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1506722830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:38 compute-1 nova_compute[230518]: 2025-10-02 13:28:38.267 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated VIF entry in instance network info cache for port 650cec0d-3a37-4324-87bb-85f638f2c4fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:28:38 compute-1 nova_compute[230518]: 2025-10-02 13:28:38.267 2 DEBUG nova.network.neutron [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:28:38 compute-1 nova_compute[230518]: 2025-10-02 13:28:38.378 2 DEBUG oslo_concurrency.lockutils [req-76f5dc81-ed73-4b27-98e9-fac63d5c36f7 req-6dd28e46-7ee3-48b9-b572-0179db323013 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:28:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1612691171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:39.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:39 compute-1 nova_compute[230518]: 2025-10-02 13:28:39.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:39.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:40 compute-1 ceph-mon[80926]: pgmap v3540: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 80 op/s
Oct 02 13:28:41 compute-1 sudo[318682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-1 sudo[318682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-1 sudo[318682]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-1 sudo[318707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:28:41 compute-1 sudo[318707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-1 sudo[318707]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-1 sudo[318732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:41 compute-1 sudo[318732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-1 sudo[318732]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-1 ceph-mon[80926]: pgmap v3541: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 82 op/s
Oct 02 13:28:41 compute-1 sudo[318757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:28:41 compute-1 sudo[318757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:41 compute-1 sudo[318757]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:41.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:42 compute-1 nova_compute[230518]: 2025-10-02 13:28:42.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:28:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:28:43 compute-1 nova_compute[230518]: 2025-10-02 13:28:43.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:43 compute-1 nova_compute[230518]: 2025-10-02 13:28:43.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:28:43 compute-1 nova_compute[230518]: 2025-10-02 13:28:43.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:43 compute-1 ceph-mon[80926]: pgmap v3542: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 79 op/s
Oct 02 13:28:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:43.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:43.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.882 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.883 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:44 compute-1 nova_compute[230518]: 2025-10-02 13:28:44.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3495008382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.318 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.402 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000dd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.536 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.537 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4013MB free_disk=20.98794174194336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.538 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.538 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.654 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.655 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.655 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:28:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:45 compute-1 ceph-mon[80926]: pgmap v3543: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.8 KiB/s wr, 77 op/s
Oct 02 13:28:45 compute-1 nova_compute[230518]: 2025-10-02 13:28:45.743 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:28:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:45.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:28:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3216221222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-1 nova_compute[230518]: 2025-10-02 13:28:46.178 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:28:46 compute-1 nova_compute[230518]: 2025-10-02 13:28:46.185 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:28:46 compute-1 nova_compute[230518]: 2025-10-02 13:28:46.211 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:28:46 compute-1 nova_compute[230518]: 2025-10-02 13:28:46.250 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:28:46 compute-1 nova_compute[230518]: 2025-10-02 13:28:46.250 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:28:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3495008382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1243518253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3216221222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/957602223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:28:47 compute-1 ovn_controller[129257]: 2025-10-02T13:28:47Z|00126|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.10
Oct 02 13:28:47 compute-1 ovn_controller[129257]: 2025-10-02T13:28:47Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:65:62:3b 10.100.0.10
Oct 02 13:28:47 compute-1 nova_compute[230518]: 2025-10-02 13:28:47.249 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:47 compute-1 nova_compute[230518]: 2025-10-02 13:28:47.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:47.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:47.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:48 compute-1 nova_compute[230518]: 2025-10-02 13:28:48.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:48 compute-1 ceph-mon[80926]: pgmap v3544: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 236 KiB/s wr, 61 op/s
Oct 02 13:28:49 compute-1 ceph-mon[80926]: pgmap v3545: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 234 KiB/s wr, 26 op/s
Oct 02 13:28:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:49.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:49 compute-1 nova_compute[230518]: 2025-10-02 13:28:49.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:51 compute-1 ovn_controller[129257]: 2025-10-02T13:28:51Z|00128|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.10
Oct 02 13:28:51 compute-1 ovn_controller[129257]: 2025-10-02T13:28:51Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:65:62:3b 10.100.0.10
Oct 02 13:28:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:52 compute-1 ovn_controller[129257]: 2025-10-02T13:28:52Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:62:3b 10.100.0.10
Oct 02 13:28:52 compute-1 ovn_controller[129257]: 2025-10-02T13:28:52Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:62:3b 10.100.0.10
Oct 02 13:28:52 compute-1 nova_compute[230518]: 2025-10-02 13:28:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:52 compute-1 ceph-mon[80926]: pgmap v3546: 305 pgs: 305 active+clean; 214 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 495 KiB/s wr, 48 op/s
Oct 02 13:28:52 compute-1 nova_compute[230518]: 2025-10-02 13:28:52.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:53 compute-1 nova_compute[230518]: 2025-10-02 13:28:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:53 compute-1 sudo[318857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:28:53 compute-1 sudo[318857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:53 compute-1 sudo[318857]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:53 compute-1 sudo[318882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:28:53 compute-1 sudo[318882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:28:53 compute-1 sudo[318882]: pam_unix(sudo:session): session closed for user root
Oct 02 13:28:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:53.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:53 compute-1 ceph-mon[80926]: pgmap v3547: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:53 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:28:54 compute-1 nova_compute[230518]: 2025-10-02 13:28:54.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:54 compute-1 nova_compute[230518]: 2025-10-02 13:28:54.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:55 compute-1 ceph-mon[80926]: pgmap v3548: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 540 KiB/s wr, 54 op/s
Oct 02 13:28:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:55.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:56 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:56 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:28:56 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:28:56 compute-1 podman[318908]: 2025-10-02 13:28:56.822047367 +0000 UTC m=+0.055194133 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct 02 13:28:56 compute-1 podman[318907]: 2025-10-02 13:28:56.858547842 +0000 UTC m=+0.098743469 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 02 13:28:56 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.998 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:28:56 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.999 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:28:57 compute-1 nova_compute[230518]: 2025-10-02 13:28:56.999 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:28:57 compute-1 nova_compute[230518]: 2025-10-02 13:28:57.000 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:28:57 compute-1 nova_compute[230518]: 2025-10-02 13:28:57.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:28:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:28:58 compute-1 ceph-mon[80926]: pgmap v3549: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 549 KiB/s wr, 56 op/s
Oct 02 13:28:59 compute-1 nova_compute[230518]: 2025-10-02 13:28:59.046 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [{"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:28:59 compute-1 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-4fe12372-ed4b-40ab-9cf2-dcf304f21c31" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:28:59 compute-1 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:28:59 compute-1 nova_compute[230518]: 2025-10-02 13:28:59.073 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:28:59 compute-1 ceph-mon[80926]: pgmap v3550: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 617 KiB/s rd, 316 KiB/s wr, 32 op/s
Oct 02 13:28:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:28:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:28:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:28:59 compute-1 nova_compute[230518]: 2025-10-02 13:28:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:28:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:28:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:28:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:28:59.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:01 compute-1 nova_compute[230518]: 2025-10-02 13:29:01.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:01 compute-1 ceph-mon[80926]: pgmap v3551: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 802 KiB/s rd, 317 KiB/s wr, 32 op/s
Oct 02 13:29:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:02 compute-1 nova_compute[230518]: 2025-10-02 13:29:02.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:03 compute-1 ceph-mon[80926]: pgmap v3552: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 278 KiB/s rd, 66 KiB/s wr, 12 op/s
Oct 02 13:29:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:04 compute-1 nova_compute[230518]: 2025-10-02 13:29:04.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:05 compute-1 ceph-mon[80926]: pgmap v3553: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 19 KiB/s wr, 5 op/s
Oct 02 13:29:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:05.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:05.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:06 compute-1 ovn_controller[129257]: 2025-10-02T13:29:06Z|00891|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Oct 02 13:29:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/883652432' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:07 compute-1 nova_compute[230518]: 2025-10-02 13:29:07.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:07.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:08 compute-1 ceph-mon[80926]: pgmap v3554: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 208 KiB/s wr, 7 op/s
Oct 02 13:29:08 compute-1 podman[318954]: 2025-10-02 13:29:08.825933002 +0000 UTC m=+0.070926115 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:29:08 compute-1 podman[318953]: 2025-10-02 13:29:08.825919633 +0000 UTC m=+0.068053247 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:29:09 compute-1 ceph-mon[80926]: pgmap v3555: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.667 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.668 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.668 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.669 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.669 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.671 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Terminating instance
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.673 2 DEBUG nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:29:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:09.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:09 compute-1 kernel: tap650cec0d-3a (unregistering): left promiscuous mode
Oct 02 13:29:09 compute-1 NetworkManager[44960]: <info>  [1759411749.7435] device (tap650cec0d-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:29:09 compute-1 ovn_controller[129257]: 2025-10-02T13:29:09Z|00892|binding|INFO|Releasing lport 650cec0d-3a37-4324-87bb-85f638f2c4fd from this chassis (sb_readonly=0)
Oct 02 13:29:09 compute-1 ovn_controller[129257]: 2025-10-02T13:29:09Z|00893|binding|INFO|Setting lport 650cec0d-3a37-4324-87bb-85f638f2c4fd down in Southbound
Oct 02 13:29:09 compute-1 ovn_controller[129257]: 2025-10-02T13:29:09Z|00894|binding|INFO|Removing iface tap650cec0d-3a ovn-installed in OVS
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.769 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:62:3b 10.100.0.10'], port_security=['fa:16:3e:65:62:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4fe12372-ed4b-40ab-9cf2-dcf304f21c31', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ae0586a1-cfba-4e03-bfd2-a14f300878bc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=650cec0d-3a37-4324-87bb-85f638f2c4fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:29:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.770 138374 INFO neutron.agent.ovn.metadata.agent [-] Port 650cec0d-3a37-4324-87bb-85f638f2c4fd in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis
Oct 02 13:29:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.772 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:29:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.773 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64658e9f-9287-479d-a49e-d3fa7a220766]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:09 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:09.773 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Oct 02 13:29:09 compute-1 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000dd.scope: Consumed 15.855s CPU time.
Oct 02 13:29:09 compute-1 systemd-machined[188247]: Machine qemu-100-instance-000000dd terminated.
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.947 2 INFO nova.virt.libvirt.driver [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Instance destroyed successfully.
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.948 2 DEBUG nova.objects.instance [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.977 2 DEBUG nova.virt.libvirt.vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-426050554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-426050554',id=221,image_ref='e0399c4c-8352-497f-b361-45e672712e68',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLILQTOU+IPOd8w0GqrVsN/QbgbwiFJSPjI2JTUbPYQ/ozNt878L7gRDoBhnJsqCVc05+BAE6CcyuObEWMpQFhUuKt2iunDqIotZaYJTz1b891j8Z4tJFAz/OrIq++6nPw==',key_name='tempest-keypair-176326542',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:28:32Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-hck380nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1344814684',image_owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:28:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=4fe12372-ed4b-40ab-9cf2-dcf304f21c31,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.977 2 DEBUG nova.network.os_vif_util [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "address": "fa:16:3e:65:62:3b", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap650cec0d-3a", "ovs_interfaceid": "650cec0d-3a37-4324-87bb-85f638f2c4fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.978 2 DEBUG nova.network.os_vif_util [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.978 2 DEBUG os_vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.980 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap650cec0d-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:29:09 compute-1 nova_compute[230518]: 2025-10-02 13:29:09.985 2 INFO os_vif [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:62:3b,bridge_name='br-int',has_traffic_filtering=True,id=650cec0d-3a37-4324-87bb-85f638f2c4fd,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap650cec0d-3a')
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : haproxy version is 2.8.14-c23fe91
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [NOTICE]   (318628) : path to executable is /usr/sbin/haproxy
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : Exiting Master process...
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : Exiting Master process...
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [ALERT]    (318628) : Current worker (318630) exited with code 143 (Terminated)
Oct 02 13:29:09 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[318624]: [WARNING]  (318628) : All workers exited. Exiting... (0)
Oct 02 13:29:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:09 compute-1 systemd[1]: libpod-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope: Deactivated successfully.
Oct 02 13:29:09 compute-1 podman[319015]: 2025-10-02 13:29:09.999824774 +0000 UTC m=+0.123299619 container died b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:29:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39-userdata-shm.mount: Deactivated successfully.
Oct 02 13:29:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-7018b0b22f5f7d1f2d4dd3aed7c70f935e42ebe9c211ba7bf0b7ea0fc5157f9a-merged.mount: Deactivated successfully.
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.247 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG oslo_concurrency.lockutils [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:29:10 compute-1 nova_compute[230518]: 2025-10-02 13:29:10.248 2 DEBUG nova.compute.manager [req-3a569b5f-2d95-4440-9b6d-b364f9c1e36c req-9b77e6f1-deb8-4e05-ad2a-c4c6089aa803 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-unplugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:29:10 compute-1 podman[319015]: 2025-10-02 13:29:10.50085627 +0000 UTC m=+0.624331115 container cleanup b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:11 compute-1 podman[319070]: 2025-10-02 13:29:11.048523038 +0000 UTC m=+0.526505645 container remove b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.057 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[eea3d006-9b32-499b-b7da-712b024ccc41]: (4, ('Thu Oct  2 01:29:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39)\nb8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39\nThu Oct  2 01:29:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39)\nb8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.060 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[63e69818-3563-4ffa-afc8-037caa05a9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.063 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:29:11 compute-1 nova_compute[230518]: 2025-10-02 13:29:11.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:11 compute-1 kernel: tap858f2b6f-80: left promiscuous mode
Oct 02 13:29:11 compute-1 systemd[1]: libpod-conmon-b8bcaaa0351da3f2637991082e899f6aa17abb643a2171af0d2153302bbd6a39.scope: Deactivated successfully.
Oct 02 13:29:11 compute-1 nova_compute[230518]: 2025-10-02 13:29:11.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.136 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1c40305a-3aa5-49f8-942f-267e1b577ce6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.164 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[401666fc-d16d-4739-99a7-5c1b97a3e5fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.167 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[a807b823-5046-4b4a-8003-73731d89d215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.185 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[18a5d22b-2787-467d-8224-4c58138307bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957526, 'reachable_time': 42906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319086, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.191 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:29:11 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:11.192 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[30a27b55-705c-4f8b-afd4-2222ab21f36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:29:11 compute-1 ceph-mon[80926]: pgmap v3556: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 185 KiB/s rd, 199 KiB/s wr, 5 op/s
Oct 02 13:29:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:11.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:11 compute-1 nova_compute[230518]: 2025-10-02 13:29:11.855 2 INFO nova.virt.libvirt.driver [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting instance files /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_del
Oct 02 13:29:11 compute-1 nova_compute[230518]: 2025-10-02 13:29:11.856 2 INFO nova.virt.libvirt.driver [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deletion of /var/lib/nova/instances/4fe12372-ed4b-40ab-9cf2-dcf304f21c31_del complete
Oct 02 13:29:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.024 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 2.35 seconds to destroy the instance on the hypervisor.
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.024 2 DEBUG oslo.service.loopingcall [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.025 2 DEBUG nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.025 2 DEBUG nova.network.neutron [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.357 2 DEBUG nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.358 2 DEBUG oslo_concurrency.lockutils [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.359 2 DEBUG nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] No waiting events found dispatching network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:29:12 compute-1 nova_compute[230518]: 2025-10-02 13:29:12.359 2 WARNING nova.compute.manager [req-57e6b395-55d5-41c9-b5f4-4fa15d047874 req-7dfd4bbd-840b-4cb8-9d02-4abdb30eb94b 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received unexpected event network-vif-plugged-650cec0d-3a37-4324-87bb-85f638f2c4fd for instance with vm_state active and task_state deleting.
Oct 02 13:29:13 compute-1 ceph-mon[80926]: pgmap v3557: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 KiB/s rd, 199 KiB/s wr, 9 op/s
Oct 02 13:29:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:13.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:13.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:14 compute-1 nova_compute[230518]: 2025-10-02 13:29:14.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.151 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.152 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:29:15 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:15.153 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.164 2 DEBUG nova.network.neutron [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.213 2 INFO nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 3.19 seconds to deallocate network for instance.
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.236 2 DEBUG nova.compute.manager [req-9e4381bc-a7ec-4def-b7ad-e26242fe3ee7 req-c1a97cf7-c053-4584-8f61-b4fd51766637 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Received event network-vif-deleted-650cec0d-3a37-4324-87bb-85f638f2c4fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.391 2 INFO nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Took 0.18 seconds to detach 1 volumes for instance.
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.392 2 DEBUG nova.compute.manager [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Deleting volume: 4f5d24ac-9e35-4d38-a9e7-6dec734d8ef8 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Oct 02 13:29:15 compute-1 ceph-mon[80926]: pgmap v3558: 305 pgs: 305 active+clean; 221 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 16 op/s
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.643 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.643 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:15 compute-1 nova_compute[230518]: 2025-10-02 13:29:15.719 2 DEBUG oslo_concurrency.processutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:15.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:15.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1352680470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.163 2 DEBUG oslo_concurrency.processutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.170 2 DEBUG nova.compute.provider_tree [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.212 2 DEBUG nova.scheduler.client.report [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.305 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.342 2 INFO nova.scheduler.client.report [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 4fe12372-ed4b-40ab-9cf2-dcf304f21c31
Oct 02 13:29:16 compute-1 nova_compute[230518]: 2025-10-02 13:29:16.417 2 DEBUG oslo_concurrency.lockutils [None req-103e697e-d1b8-4a18-9e65-2c03fd4d83af 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "4fe12372-ed4b-40ab-9cf2-dcf304f21c31" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1352680470' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:29:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:29:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:17 compute-1 nova_compute[230518]: 2025-10-02 13:29:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:18.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:18 compute-1 ceph-mon[80926]: pgmap v3559: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 190 KiB/s wr, 17 op/s
Oct 02 13:29:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/950874610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:19.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:19 compute-1 nova_compute[230518]: 2025-10-02 13:29:19.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:20.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:20 compute-1 ceph-mon[80926]: pgmap v3560: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 852 B/s wr, 14 op/s
Oct 02 13:29:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e424 e424: 3 total, 3 up, 3 in
Oct 02 13:29:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:22.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:22 compute-1 ceph-mon[80926]: pgmap v3561: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Oct 02 13:29:22 compute-1 nova_compute[230518]: 2025-10-02 13:29:22.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:23 compute-1 ceph-mon[80926]: osdmap e424: 3 total, 3 up, 3 in
Oct 02 13:29:23 compute-1 ceph-mon[80926]: pgmap v3563: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Oct 02 13:29:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:29:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:24.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:29:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:24 compute-1 nova_compute[230518]: 2025-10-02 13:29:24.944 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411749.941914, 4fe12372-ed4b-40ab-9cf2-dcf304f21c31 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:29:24 compute-1 nova_compute[230518]: 2025-10-02 13:29:24.944 2 INFO nova.compute.manager [-] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] VM Stopped (Lifecycle Event)
Oct 02 13:29:24 compute-1 nova_compute[230518]: 2025-10-02 13:29:24.964 2 DEBUG nova.compute.manager [None req-0ffb723d-cee7-4837-b18d-312f022b85e8 - - - - - -] [instance: 4fe12372-ed4b-40ab-9cf2-dcf304f21c31] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:29:24 compute-1 nova_compute[230518]: 2025-10-02 13:29:24.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:25 compute-1 ceph-mon[80926]: pgmap v3564: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Oct 02 13:29:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.989 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:29:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:26.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:27 compute-1 nova_compute[230518]: 2025-10-02 13:29:27.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:27 compute-1 ceph-mon[80926]: pgmap v3565: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.6 KiB/s wr, 47 op/s
Oct 02 13:29:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:27.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 e425: 3 total, 3 up, 3 in
Oct 02 13:29:27 compute-1 podman[319111]: 2025-10-02 13:29:27.80981253 +0000 UTC m=+0.053190550 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:29:27 compute-1 podman[319110]: 2025-10-02 13:29:27.839077228 +0000 UTC m=+0.086222346 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:29:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:29 compute-1 ceph-mon[80926]: osdmap e425: 3 total, 3 up, 3 in
Oct 02 13:29:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:29.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:29 compute-1 nova_compute[230518]: 2025-10-02 13:29:29.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:30.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:30 compute-1 ceph-mon[80926]: pgmap v3567: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 639 B/s wr, 19 op/s
Oct 02 13:29:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:31.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1239937293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:31 compute-1 ceph-mon[80926]: pgmap v3568: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 911 B/s wr, 23 op/s
Oct 02 13:29:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:32 compute-1 nova_compute[230518]: 2025-10-02 13:29:32.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:33 compute-1 ceph-mon[80926]: pgmap v3569: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 21 op/s
Oct 02 13:29:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:34 compute-1 nova_compute[230518]: 2025-10-02 13:29:34.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:35 compute-1 ceph-mon[80926]: pgmap v3570: 305 pgs: 305 active+clean; 166 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 818 B/s wr, 22 op/s
Oct 02 13:29:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:37 compute-1 nova_compute[230518]: 2025-10-02 13:29:37.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:29:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/155816079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:29:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:37.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:39 compute-1 ceph-mon[80926]: pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Oct 02 13:29:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1826604766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/112464615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:39 compute-1 podman[319154]: 2025-10-02 13:29:39.792865415 +0000 UTC m=+0.049832174 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:29:39 compute-1 podman[319155]: 2025-10-02 13:29:39.803152667 +0000 UTC m=+0.056460042 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:29:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:40.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:40 compute-1 nova_compute[230518]: 2025-10-02 13:29:40.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e426 e426: 3 total, 3 up, 3 in
Oct 02 13:29:40 compute-1 ceph-mon[80926]: pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1020 B/s wr, 21 op/s
Oct 02 13:29:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:42 compute-1 nova_compute[230518]: 2025-10-02 13:29:42.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:42 compute-1 ceph-mon[80926]: osdmap e426: 3 total, 3 up, 3 in
Oct 02 13:29:42 compute-1 ceph-mon[80926]: pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct 02 13:29:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:43.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:43 compute-1 ceph-mon[80926]: pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 02 13:29:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:44 compute-1 nova_compute[230518]: 2025-10-02 13:29:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2026244130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.112 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.113 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4132422905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.542 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.694 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.695 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4207MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:29:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:29:45 compute-1 nova_compute[230518]: 2025-10-02 13:29:45.833 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:29:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:29:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2373861693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:46 compute-1 ceph-mon[80926]: pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 36 op/s
Oct 02 13:29:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/997606669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4132422905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:46 compute-1 nova_compute[230518]: 2025-10-02 13:29:46.252 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:29:46 compute-1 nova_compute[230518]: 2025-10-02 13:29:46.259 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:29:46 compute-1 nova_compute[230518]: 2025-10-02 13:29:46.309 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:29:46 compute-1 nova_compute[230518]: 2025-10-02 13:29:46.346 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:29:46 compute-1 nova_compute[230518]: 2025-10-02 13:29:46.346 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:29:47 compute-1 nova_compute[230518]: 2025-10-02 13:29:47.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2373861693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:29:47 compute-1 ceph-mon[80926]: pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 23 op/s
Oct 02 13:29:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:47.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:48.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 e427: 3 total, 3 up, 3 in
Oct 02 13:29:49 compute-1 nova_compute[230518]: 2025-10-02 13:29:49.347 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:49 compute-1 ceph-mon[80926]: osdmap e427: 3 total, 3 up, 3 in
Oct 02 13:29:49 compute-1 ceph-mon[80926]: pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 485 B/s wr, 18 op/s
Oct 02 13:29:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:49.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:50.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:50 compute-1 nova_compute[230518]: 2025-10-02 13:29:50.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:51 compute-1 ceph-mon[80926]: pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 409 B/s wr, 15 op/s
Oct 02 13:29:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:52.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:52 compute-1 nova_compute[230518]: 2025-10-02 13:29:52.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:53 compute-1 nova_compute[230518]: 2025-10-02 13:29:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:53 compute-1 ceph-mon[80926]: pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.8 KiB/s rd, 409 B/s wr, 12 op/s
Oct 02 13:29:53 compute-1 sudo[319238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:53 compute-1 sudo[319238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-1 sudo[319238]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-1 sudo[319263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:29:53 compute-1 sudo[319263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-1 sudo[319263]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-1 sudo[319288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:29:53 compute-1 sudo[319288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-1 sudo[319288]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:53 compute-1 sudo[319313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:29:53 compute-1 sudo[319313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:29:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:29:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:54.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:29:54 compute-1 sudo[319313]: pam_unix(sudo:session): session closed for user root
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:29:54 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:29:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:55 compute-1 nova_compute[230518]: 2025-10-02 13:29:55.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:55 compute-1 nova_compute[230518]: 2025-10-02 13:29:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:55 compute-1 nova_compute[230518]: 2025-10-02 13:29:55.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:55 compute-1 ceph-mon[80926]: pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 307 B/s wr, 2 op/s
Oct 02 13:29:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:57 compute-1 nova_compute[230518]: 2025-10-02 13:29:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:57 compute-1 nova_compute[230518]: 2025-10-02 13:29:57.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:29:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:57 compute-1 ceph-mon[80926]: pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct 02 13:29:58 compute-1 nova_compute[230518]: 2025-10-02 13:29:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:29:58 compute-1 nova_compute[230518]: 2025-10-02 13:29:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:29:58 compute-1 nova_compute[230518]: 2025-10-02 13:29:58.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:29:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:29:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:29:58 compute-1 nova_compute[230518]: 2025-10-02 13:29:58.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:29:58 compute-1 podman[319370]: 2025-10-02 13:29:58.809804254 +0000 UTC m=+0.055280585 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:29:58 compute-1 podman[319369]: 2025-10-02 13:29:58.849336113 +0000 UTC m=+0.099675218 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:29:59 compute-1 ovn_controller[129257]: 2025-10-02T13:29:59Z|00895|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 02 13:29:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:29:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:29:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:29:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:29:59.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:00 compute-1 ceph-mon[80926]: pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 98 B/s wr, 7 op/s
Oct 02 13:30:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:00.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:00 compute-1 nova_compute[230518]: 2025-10-02 13:30:00.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:00 compute-1 sudo[319415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:30:00 compute-1 sudo[319415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:00 compute-1 sudo[319415]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:00 compute-1 sudo[319440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:30:00 compute-1 sudo[319440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:30:00 compute-1 sudo[319440]: pam_unix(sudo:session): session closed for user root
Oct 02 13:30:01 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 13:30:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:30:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:02 compute-1 ceph-mon[80926]: pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 16 op/s
Oct 02 13:30:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:02 compute-1 nova_compute[230518]: 2025-10-02 13:30:02.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:03 compute-1 ceph-mon[80926]: pgmap v3586: 305 pgs: 305 active+clean; 134 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 458 KiB/s wr, 35 op/s
Oct 02 13:30:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:03.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:04.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:05 compute-1 nova_compute[230518]: 2025-10-02 13:30:05.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:06.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:06 compute-1 ceph-mon[80926]: pgmap v3587: 305 pgs: 305 active+clean; 147 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 707 KiB/s wr, 40 op/s
Oct 02 13:30:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:30:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1085208703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:30:07 compute-1 ceph-mon[80926]: pgmap v3588: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Oct 02 13:30:07 compute-1 nova_compute[230518]: 2025-10-02 13:30:07.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:08.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3192368789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:09 compute-1 ceph-mon[80926]: pgmap v3589: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:09.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:10 compute-1 nova_compute[230518]: 2025-10-02 13:30:10.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:10 compute-1 podman[319467]: 2025-10-02 13:30:10.796312286 +0000 UTC m=+0.050383931 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:30:10 compute-1 podman[319466]: 2025-10-02 13:30:10.796380079 +0000 UTC m=+0.052664424 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:30:11 compute-1 ceph-mon[80926]: pgmap v3590: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 02 13:30:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:12.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:12 compute-1 nova_compute[230518]: 2025-10-02 13:30:12.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:30:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1655923150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1655923150' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.331564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813331634, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1877, "num_deletes": 257, "total_data_size": 4382210, "memory_usage": 4457920, "flush_reason": "Manual Compaction"}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813389058, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 2871132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84349, "largest_seqno": 86221, "table_properties": {"data_size": 2863316, "index_size": 4693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16490, "raw_average_key_size": 20, "raw_value_size": 2847513, "raw_average_value_size": 3481, "num_data_blocks": 206, "num_entries": 818, "num_filter_entries": 818, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411651, "oldest_key_time": 1759411651, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 57570 microseconds, and 5580 cpu microseconds.
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.389142) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 2871132 bytes OK
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.389161) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401764) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401808) EVENT_LOG_v1 {"time_micros": 1759411813401798, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.401829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4373626, prev total WAL file size 4373626, number of live WAL files 2.
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.403400) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353231' seq:0, type:0; will stop at (end)
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(2803KB)], [174(10MB)]
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813403446, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 14378886, "oldest_snapshot_seqno": -1}
Oct 02 13:30:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 10655 keys, 14241179 bytes, temperature: kUnknown
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813843736, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 14241179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14171699, "index_size": 41719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26693, "raw_key_size": 281101, "raw_average_key_size": 26, "raw_value_size": 13984813, "raw_average_value_size": 1312, "num_data_blocks": 1592, "num_entries": 10655, "num_filter_entries": 10655, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.843983) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14241179 bytes
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.945330) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.7 rd, 32.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.0 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(10.0) write-amplify(5.0) OK, records in: 11186, records dropped: 531 output_compression: NoCompression
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.945388) EVENT_LOG_v1 {"time_micros": 1759411813945375, "job": 112, "event": "compaction_finished", "compaction_time_micros": 440362, "compaction_time_cpu_micros": 31450, "output_level": 6, "num_output_files": 1, "total_output_size": 14241179, "num_input_records": 11186, "num_output_records": 10655, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813946115, "job": 112, "event": "table_file_deletion", "file_number": 176}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411813948172, "job": 112, "event": "table_file_deletion", "file_number": 174}
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.403269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:13 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:30:13.948301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:30:14 compute-1 ceph-mon[80926]: pgmap v3591: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct 02 13:30:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:14.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:15 compute-1 nova_compute[230518]: 2025-10-02 13:30:15.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:15 compute-1 ceph-mon[80926]: pgmap v3592: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 1.3 MiB/s wr, 5 op/s
Oct 02 13:30:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:16.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:17 compute-1 nova_compute[230518]: 2025-10-02 13:30:17.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:18 compute-1 ceph-mon[80926]: pgmap v3593: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct 02 13:30:19 compute-1 ceph-mon[80926]: pgmap v3594: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:20 compute-1 nova_compute[230518]: 2025-10-02 13:30:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3095014498' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:21.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:22.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:22 compute-1 nova_compute[230518]: 2025-10-02 13:30:22.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:22 compute-1 ceph-mon[80926]: pgmap v3595: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:30:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:23.136 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:30:23 compute-1 nova_compute[230518]: 2025-10-02 13:30:23.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:23 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:23.138 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:30:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:23.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:24 compute-1 ceph-mon[80926]: pgmap v3596: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:30:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.141 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:30:25 compute-1 nova_compute[230518]: 2025-10-02 13:30:25.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:25.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:25 compute-1 ceph-mon[80926]: pgmap v3597: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 6 op/s
Oct 02 13:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.990 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:30:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:26.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:27 compute-1 nova_compute[230518]: 2025-10-02 13:30:27.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:27.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:28.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:28 compute-1 ceph-mon[80926]: pgmap v3598: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:29 compute-1 ceph-mon[80926]: pgmap v3599: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 67 op/s
Oct 02 13:30:29 compute-1 podman[319504]: 2025-10-02 13:30:29.803116537 +0000 UTC m=+0.054437539 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:30:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:30:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:30:29 compute-1 podman[319503]: 2025-10-02 13:30:29.867349841 +0000 UTC m=+0.121806312 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:30:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:30.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:30 compute-1 nova_compute[230518]: 2025-10-02 13:30:30.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:32 compute-1 ceph-mon[80926]: pgmap v3600: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:32 compute-1 nova_compute[230518]: 2025-10-02 13:30:32.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:33 compute-1 ceph-mon[80926]: pgmap v3601: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:33.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:34.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:35 compute-1 nova_compute[230518]: 2025-10-02 13:30:35.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:36 compute-1 ceph-mon[80926]: pgmap v3602: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 02 13:30:37 compute-1 ceph-mon[80926]: pgmap v3603: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 290 KiB/s wr, 78 op/s
Oct 02 13:30:37 compute-1 nova_compute[230518]: 2025-10-02 13:30:37.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:37.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:38.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:39.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:40 compute-1 ceph-mon[80926]: pgmap v3604: 305 pgs: 305 active+clean; 170 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 205 KiB/s rd, 290 KiB/s wr, 17 op/s
Oct 02 13:30:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3749824698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1847762052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:40 compute-1 nova_compute[230518]: 2025-10-02 13:30:40.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:41 compute-1 ceph-mon[80926]: pgmap v3605: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Oct 02 13:30:41 compute-1 podman[319548]: 2025-10-02 13:30:41.798870788 +0000 UTC m=+0.052662003 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:30:41 compute-1 podman[319550]: 2025-10-02 13:30:41.804390291 +0000 UTC m=+0.053606373 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:30:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:42.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:42 compute-1 nova_compute[230518]: 2025-10-02 13:30:42.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:43.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:43 compute-1 ceph-mon[80926]: pgmap v3606: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:43 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1511948820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:44 compute-1 nova_compute[230518]: 2025-10-02 13:30:44.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:44.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1749386317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2790356428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.513 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.693 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.695 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4216MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:30:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.894 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:30:45 compute-1 nova_compute[230518]: 2025-10-02 13:30:45.951 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:30:46 compute-1 ceph-mon[80926]: pgmap v3607: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 02 13:30:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2790356428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:46.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:30:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1118045396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:46 compute-1 nova_compute[230518]: 2025-10-02 13:30:46.463 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:30:46 compute-1 nova_compute[230518]: 2025-10-02 13:30:46.469 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:30:46 compute-1 nova_compute[230518]: 2025-10-02 13:30:46.490 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:30:46 compute-1 nova_compute[230518]: 2025-10-02 13:30:46.492 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:30:46 compute-1 nova_compute[230518]: 2025-10-02 13:30:46.493 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:30:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1118045396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:47 compute-1 nova_compute[230518]: 2025-10-02 13:30:47.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:47.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:48 compute-1 ceph-mon[80926]: pgmap v3608: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Oct 02 13:30:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:48.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:48 compute-1 nova_compute[230518]: 2025-10-02 13:30:48.494 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:48 compute-1 nova_compute[230518]: 2025-10-02 13:30:48.495 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:30:49 compute-1 nova_compute[230518]: 2025-10-02 13:30:49.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3144257203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:49.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:50 compute-1 ceph-mon[80926]: pgmap v3609: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 283 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Oct 02 13:30:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:50 compute-1 nova_compute[230518]: 2025-10-02 13:30:50.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:51 compute-1 ceph-mon[80926]: pgmap v3610: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 290 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Oct 02 13:30:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:51.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:52.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:52 compute-1 nova_compute[230518]: 2025-10-02 13:30:52.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2550152388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:30:53 compute-1 nova_compute[230518]: 2025-10-02 13:30:53.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:30:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104312888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:53.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:54 compute-1 ceph-mon[80926]: pgmap v3611: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 77 KiB/s wr, 25 op/s
Oct 02 13:30:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1104312888' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:30:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:30:55 compute-1 nova_compute[230518]: 2025-10-02 13:30:55.412 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:55.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:56 compute-1 nova_compute[230518]: 2025-10-02 13:30:56.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:56 compute-1 nova_compute[230518]: 2025-10-02 13:30:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:56 compute-1 ceph-mon[80926]: pgmap v3612: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 30 KiB/s wr, 16 op/s
Oct 02 13:30:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:30:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:30:57 compute-1 nova_compute[230518]: 2025-10-02 13:30:57.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:30:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:57.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:58 compute-1 ceph-mon[80926]: pgmap v3613: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 29 KiB/s wr, 15 op/s
Oct 02 13:30:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:30:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:59 compute-1 nova_compute[230518]: 2025-10-02 13:30:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:30:59 compute-1 ceph-mon[80926]: pgmap v3614: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:30:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:30:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:30:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:30:59.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:30:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.083 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:31:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:00.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:00 compute-1 nova_compute[230518]: 2025-10-02 13:31:00.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/971268917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:00 compute-1 podman[319634]: 2025-10-02 13:31:00.830489354 +0000 UTC m=+0.063825592 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 02 13:31:00 compute-1 podman[319633]: 2025-10-02 13:31:00.890991722 +0000 UTC m=+0.134942754 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:31:01 compute-1 sudo[319678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-1 sudo[319678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-1 sudo[319678]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-1 sudo[319703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:01 compute-1 sudo[319703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-1 sudo[319703]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-1 sudo[319728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-1 sudo[319728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-1 sudo[319728]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-1 sudo[319753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:31:01 compute-1 sudo[319753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-1 ceph-mon[80926]: pgmap v3615: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.3 KiB/s wr, 8 op/s
Oct 02 13:31:01 compute-1 sudo[319753]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.947404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861947432, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 749, "num_deletes": 251, "total_data_size": 1374628, "memory_usage": 1395280, "flush_reason": "Manual Compaction"}
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Oct 02 13:31:01 compute-1 sudo[319799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:01 compute-1 sudo[319799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:01 compute-1 sudo[319799]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861966047, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 907628, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86226, "largest_seqno": 86970, "table_properties": {"data_size": 903979, "index_size": 1492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8226, "raw_average_key_size": 19, "raw_value_size": 896732, "raw_average_value_size": 2119, "num_data_blocks": 65, "num_entries": 423, "num_filter_entries": 423, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411813, "oldest_key_time": 1759411813, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18690 microseconds, and 3354 cpu microseconds.
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.966090) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 907628 bytes OK
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.966109) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968585) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968595) EVENT_LOG_v1 {"time_micros": 1759411861968592, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.968610) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1370658, prev total WAL file size 1370658, number of live WAL files 2.
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.969225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(886KB)], [177(13MB)]
Oct 02 13:31:01 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411861969256, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 15148807, "oldest_snapshot_seqno": -1}
Oct 02 13:31:02 compute-1 sudo[319824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:31:02 compute-1 sudo[319824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-1 sudo[319824]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-1 sudo[319849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 10562 keys, 13195834 bytes, temperature: kUnknown
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862078440, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 13195834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13127875, "index_size": 40454, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 279808, "raw_average_key_size": 26, "raw_value_size": 12943411, "raw_average_value_size": 1225, "num_data_blocks": 1532, "num_entries": 10562, "num_filter_entries": 10562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759411861, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:31:02 compute-1 sudo[319849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.078700) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13195834 bytes
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.079846) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.6 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(31.2) write-amplify(14.5) OK, records in: 11078, records dropped: 516 output_compression: NoCompression
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.079867) EVENT_LOG_v1 {"time_micros": 1759411862079857, "job": 114, "event": "compaction_finished", "compaction_time_micros": 109252, "compaction_time_cpu_micros": 29832, "output_level": 6, "num_output_files": 1, "total_output_size": 13195834, "num_input_records": 11078, "num_output_records": 10562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862080261, "job": 114, "event": "table_file_deletion", "file_number": 179}
Oct 02 13:31:02 compute-1 sudo[319849]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759411862083482, "job": 114, "event": "table_file_deletion", "file_number": 177}
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:01.969146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:31:02.083554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:31:02 compute-1 sudo[319874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:31:02 compute-1 sudo[319874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:02.357 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:31:02 compute-1 nova_compute[230518]: 2025-10-02 13:31:02.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:02.359 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:31:02 compute-1 sudo[319874]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:02 compute-1 nova_compute[230518]: 2025-10-02 13:31:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:02 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:31:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:31:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:31:04 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:31:04 compute-1 ceph-mon[80926]: pgmap v3616: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 02 13:31:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:05 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:05.364 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:05 compute-1 nova_compute[230518]: 2025-10-02 13:31:05.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:05 compute-1 ceph-mon[80926]: pgmap v3617: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 12 KiB/s wr, 8 op/s
Oct 02 13:31:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:31:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1995901434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:31:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:05.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:07 compute-1 ceph-mon[80926]: pgmap v3618: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:07 compute-1 nova_compute[230518]: 2025-10-02 13:31:07.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:08.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:09 compute-1 ceph-mon[80926]: pgmap v3619: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:10.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:10 compute-1 nova_compute[230518]: 2025-10-02 13:31:10.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:11 compute-1 sudo[319930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:31:11 compute-1 sudo[319930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:11 compute-1 sudo[319930]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:11 compute-1 sudo[319955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:31:11 compute-1 sudo[319955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:31:11 compute-1 sudo[319955]: pam_unix(sudo:session): session closed for user root
Oct 02 13:31:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:11.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:12 compute-1 ceph-mon[80926]: pgmap v3620: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:12 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:31:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:12 compute-1 nova_compute[230518]: 2025-10-02 13:31:12.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:12 compute-1 podman[319981]: 2025-10-02 13:31:12.8021991 +0000 UTC m=+0.056310337 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:31:12 compute-1 podman[319980]: 2025-10-02 13:31:12.83022821 +0000 UTC m=+0.083499431 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 02 13:31:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:13.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:13 compute-1 ceph-mon[80926]: pgmap v3621: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct 02 13:31:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:15 compute-1 nova_compute[230518]: 2025-10-02 13:31:15.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:15 compute-1 ceph-mon[80926]: pgmap v3622: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 02 13:31:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:31:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:15.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:31:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:16.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:17 compute-1 nova_compute[230518]: 2025-10-02 13:31:17.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:17 compute-1 ceph-mon[80926]: pgmap v3623: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 88 op/s
Oct 02 13:31:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:18.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:19 compute-1 ceph-mon[80926]: pgmap v3624: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 12 KiB/s wr, 22 op/s
Oct 02 13:31:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:19.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:20.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:20 compute-1 nova_compute[230518]: 2025-10-02 13:31:20.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:21 compute-1 ceph-mon[80926]: pgmap v3625: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:21.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:22 compute-1 nova_compute[230518]: 2025-10-02 13:31:22.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:23.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:23 compute-1 ceph-mon[80926]: pgmap v3626: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:24.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:25 compute-1 nova_compute[230518]: 2025-10-02 13:31:25.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:25.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:25.991 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:26 compute-1 ceph-mon[80926]: pgmap v3627: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 26 KiB/s wr, 45 op/s
Oct 02 13:31:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:27 compute-1 nova_compute[230518]: 2025-10-02 13:31:27.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:27.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 e428: 3 total, 3 up, 3 in
Oct 02 13:31:28 compute-1 ceph-mon[80926]: pgmap v3628: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 518 KiB/s rd, 25 KiB/s wr, 42 op/s
Oct 02 13:31:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:29 compute-1 ceph-mon[80926]: osdmap e428: 3 total, 3 up, 3 in
Oct 02 13:31:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:30 compute-1 ceph-mon[80926]: pgmap v3630: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 16 KiB/s wr, 27 op/s
Oct 02 13:31:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:30 compute-1 nova_compute[230518]: 2025-10-02 13:31:30.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:31 compute-1 ceph-mon[80926]: pgmap v3631: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 236 KiB/s rd, 8.9 KiB/s wr, 7 op/s
Oct 02 13:31:31 compute-1 podman[320022]: 2025-10-02 13:31:31.825507233 +0000 UTC m=+0.076302024 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:31:31 compute-1 podman[320021]: 2025-10-02 13:31:31.854016578 +0000 UTC m=+0.110229529 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 02 13:31:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:31.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:32 compute-1 nova_compute[230518]: 2025-10-02 13:31:32.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:33.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:33 compute-1 ceph-mon[80926]: pgmap v3632: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 272 KiB/s rd, 15 KiB/s wr, 33 op/s
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.029 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.029 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.045 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.121 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.122 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.130 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.130 2 INFO nova.compute.claims [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Claim successful on node compute-1.ctlplane.example.com
Oct 02 13:31:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:34.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.247 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:34 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/541340958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.687 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.693 2 DEBUG nova.compute.provider_tree [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.706 2 DEBUG nova.scheduler.client.report [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.725 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.725 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.775 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.775 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.792 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.810 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.848 2 INFO nova.virt.block_device [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Booting with volume b03ecca0-5e5d-47a6-a97b-d3273a126768 at /dev/vda
Oct 02 13:31:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.983 2 DEBUG os_brick.utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.101', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-1.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.985 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.995 2727 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.995 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[ae80cf8b-7c12-4e43-9935-dfb11630735d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:34 compute-1 nova_compute[230518]: 2025-10-02 13:31:34.997 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.004 2727 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.005 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[3c6fbdb0-fb32-40e4-a9a3-3fa9e7c1f623]: (4, ('InitiatorName=iqn.1994-05.com.redhat:d783e47ecf', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.006 2727 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.014 2727 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.014 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[0c68c66c-a4d1-4714-bf2b-37a5e36f11f5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.015 2727 DEBUG oslo.privsep.daemon [-] privsep: reply[945defd0-a019-4ff6-b0c8-e1deb6c97c96]: (4, '5d5cabb1-2c53-462b-89f3-16d4280c3e4c') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.016 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/541340958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.051 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.053 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.initiator.connectors.lightos [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG os_brick.utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.101', 'host': 'compute-1.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:d783e47ecf', 'do_local_attach': False, 'nvme_hostid': '2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'system uuid': '5d5cabb1-2c53-462b-89f3-16d4280c3e4c', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:2f7d2450-18ac-43a6-80ee-9caa4a7736e0', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.054 2 DEBUG nova.virt.block_device [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating existing volume attachment record: a5d24a42-20c2-4952-8e71-9fac1f9674e2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.091 2 DEBUG nova.policy [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cb47684d0b34c729e9611e7b3943bed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '18799a1c93354809911705bb424e673f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:31:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3826796133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:35 compute-1 nova_compute[230518]: 2025-10-02 13:31:35.888 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Successfully created port: bdd118f1-3b0d-4709-847a-90adbb7b95f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 02 13:31:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:35.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.017 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.018 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.019 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating image(s)
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.019 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Ensure instance console log exists: /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.020 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:36 compute-1 ceph-mon[80926]: pgmap v3633: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 15 KiB/s wr, 34 op/s
Oct 02 13:31:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3826796133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.662 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Successfully updated port: bdd118f1-3b0d-4709-847a-90adbb7b95f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.695 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.696 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.697 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.775 2 DEBUG nova.compute.manager [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.776 2 DEBUG nova.compute.manager [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:31:36 compute-1 nova_compute[230518]: 2025-10-02 13:31:36.777 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.069 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 02 13:31:37 compute-1 ceph-mon[80926]: pgmap v3634: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 273 KiB/s rd, 20 KiB/s wr, 36 op/s
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.793 2 DEBUG nova.network.neutron [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.839 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance network_info: |[{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.840 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.843 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start _get_guest_xml network_info=[{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 
'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'delete_on_termination': False, 'disk_bus': 'virtio', 'device_type': 'disk', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b03ecca0-5e5d-47a6-a97b-d3273a126768', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b03ecca0-5e5d-47a6-a97b-d3273a126768', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'attached_at': '', 'detached_at': '', 'volume_id': 'b03ecca0-5e5d-47a6-a97b-d3273a126768', 'serial': 'b03ecca0-5e5d-47a6-a97b-d3273a126768'}, 'boot_index': 0, 'attachment_id': 'a5d24a42-20c2-4952-8e71-9fac1f9674e2', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.848 2 WARNING nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.853 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.853 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.859 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.860 2 DEBUG nova.virt.libvirt.host [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-02T12:08:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='99c52872-4e37-4be3-86cc-757b8f375aa8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.861 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.862 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.863 2 DEBUG nova.virt.hardware [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.895 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:31:37 compute-1 nova_compute[230518]: 2025-10-02 13:31:37.899 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:37.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct 02 13:31:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281413989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.319 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.355 2 DEBUG nova.virt.libvirt.vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:31:34Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.356 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.357 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.359 2 DEBUG nova.objects.instance [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.374 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] End _get_guest_xml xml=<domain type="kvm">
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <uuid>6226ed9a-8df2-43ad-b76c-e27e22f8199c</uuid>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <name>instance-000000e0</name>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <memory>131072</memory>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <vcpu>1</vcpu>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <metadata>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:name>tempest-TestVolumeBootPattern-server-1205762778</nova:name>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:creationTime>2025-10-02 13:31:37</nova:creationTime>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:flavor name="m1.nano">
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:memory>128</nova:memory>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:disk>1</nova:disk>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:swap>0</nova:swap>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:ephemeral>0</nova:ephemeral>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:vcpus>1</nova:vcpus>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </nova:flavor>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:owner>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:user uuid="2cb47684d0b34c729e9611e7b3943bed">tempest-TestVolumeBootPattern-1344814684-project-member</nova:user>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:project uuid="18799a1c93354809911705bb424e673f">tempest-TestVolumeBootPattern-1344814684</nova:project>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </nova:owner>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <nova:ports>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <nova:port uuid="bdd118f1-3b0d-4709-847a-90adbb7b95f6">
Oct 02 13:31:38 compute-1 nova_compute[230518]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         </nova:port>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </nova:ports>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </nova:instance>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </metadata>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <sysinfo type="smbios">
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <system>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="manufacturer">RDO</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="product">OpenStack Compute</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="serial">6226ed9a-8df2-43ad-b76c-e27e22f8199c</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="uuid">6226ed9a-8df2-43ad-b76c-e27e22f8199c</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <entry name="family">Virtual Machine</entry>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </system>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </sysinfo>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <os>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <boot dev="hd"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <smbios mode="sysinfo"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </os>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <features>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <acpi/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <apic/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <vmcoreinfo/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </features>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <clock offset="utc">
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <timer name="pit" tickpolicy="delay"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <timer name="hpet" present="no"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </clock>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <cpu mode="custom" match="exact">
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <model>Nehalem</model>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <topology sockets="1" cores="1" threads="1"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </cpu>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   <devices>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <disk type="network" device="cdrom">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <driver type="raw" cache="none"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <source protocol="rbd" name="vms/6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config">
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </source>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <target dev="sda" bus="sata"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <disk type="network" device="disk">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <source protocol="rbd" name="volumes/volume-b03ecca0-5e5d-47a6-a97b-d3273a126768">
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.100" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.102" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <host name="192.168.122.101" port="6789"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </source>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <auth username="openstack">
Oct 02 13:31:38 compute-1 nova_compute[230518]:         <secret type="ceph" uuid="20fdc58c-b037-5094-a8ef-d490aa7c36f3"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       </auth>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <target dev="vda" bus="virtio"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <serial>b03ecca0-5e5d-47a6-a97b-d3273a126768</serial>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </disk>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <interface type="ethernet">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <mac address="fa:16:3e:8a:2e:41"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <driver name="vhost" rx_queue_size="512"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <mtu size="1442"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <target dev="tapbdd118f1-3b"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </interface>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <serial type="pty">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <log file="/var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/console.log" append="off"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </serial>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <video>
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <model type="virtio"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </video>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <input type="tablet" bus="usb"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <rng model="virtio">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <backend model="random">/dev/urandom</backend>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </rng>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="pci" model="pcie-root-port"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <controller type="usb" index="0"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     <memballoon model="virtio">
Oct 02 13:31:38 compute-1 nova_compute[230518]:       <stats period="10"/>
Oct 02 13:31:38 compute-1 nova_compute[230518]:     </memballoon>
Oct 02 13:31:38 compute-1 nova_compute[230518]:   </devices>
Oct 02 13:31:38 compute-1 nova_compute[230518]: </domain>
Oct 02 13:31:38 compute-1 nova_compute[230518]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.376 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Preparing to wait for external event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.376 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.377 2 DEBUG nova.virt.libvirt.vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-02T13:31:34Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.378 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.378 2 DEBUG nova.network.os_vif_util [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.379 2 DEBUG os_vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.385 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdd118f1-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdd118f1-3b, col_values=(('external_ids', {'iface-id': 'bdd118f1-3b0d-4709-847a-90adbb7b95f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:2e:41', 'vm-uuid': '6226ed9a-8df2-43ad-b76c-e27e22f8199c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:38 compute-1 NetworkManager[44960]: <info>  [1759411898.3920] manager: (tapbdd118f1-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.399 2 INFO os_vif [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b')
Oct 02 13:31:38 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3281413989' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.447 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.448 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.448 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] No VIF found with MAC fa:16:3e:8a:2e:41, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.448 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Using config drive
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.475 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.845 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Creating config drive at /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.853 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobx38oc7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:38 compute-1 nova_compute[230518]: 2025-10-02 13:31:38.994 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpobx38oc7" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.025 2 DEBUG nova.storage.rbd_utils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] rbd image 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.032 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.093 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.095 2 DEBUG nova.network.neutron [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.112 2 DEBUG oslo_concurrency.lockutils [req-8330c615-a35f-4241-bfa0-42d1543cb572 req-3c1f00f1-d5e6-4bd8-ade5-5025e981fe1e 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.266 2 DEBUG oslo_concurrency.processutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config 6226ed9a-8df2-43ad-b76c-e27e22f8199c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.267 2 INFO nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deleting local config drive /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c/disk.config because it was imported into RBD.
Oct 02 13:31:39 compute-1 kernel: tapbdd118f1-3b: entered promiscuous mode
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.3176] manager: (tapbdd118f1-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Oct 02 13:31:39 compute-1 ovn_controller[129257]: 2025-10-02T13:31:39Z|00896|binding|INFO|Claiming lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 for this chassis.
Oct 02 13:31:39 compute-1 ovn_controller[129257]: 2025-10-02T13:31:39Z|00897|binding|INFO|bdd118f1-3b0d-4709-847a-90adbb7b95f6: Claiming fa:16:3e:8a:2e:41 10.100.0.8
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.329 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.330 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 bound to our chassis
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.332 138374 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:39 compute-1 ovn_controller[129257]: 2025-10-02T13:31:39Z|00898|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 ovn-installed in OVS
Oct 02 13:31:39 compute-1 ovn_controller[129257]: 2025-10-02T13:31:39Z|00899|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 up in Southbound
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 systemd-udevd[320207]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.348 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[c199a152-8911-4a58-8ef6-8d86e1ef8a5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.349 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap858f2b6f-81 in ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap858f2b6f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0924dd-f877-44c4-8104-87924ef103b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.351 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[56b4fe11-2236-402f-ba31-ff49c304ac70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 systemd-machined[188247]: New machine qemu-101-instance-000000e0.
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.3622] device (tapbdd118f1-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.3633] device (tapbdd118f1-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.364 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7e2856-6f8a-489a-899b-b8550a8d2e27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 systemd[1]: Started Virtual Machine qemu-101-instance-000000e0.
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.389 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[95b5ecc9-d23c-4ed8-a1c2-77a5843ab35e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ceph-mon[80926]: pgmap v3635: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 18 KiB/s wr, 33 op/s
Oct 02 13:31:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3871989529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.418 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[772eae87-263d-4cbe-9e91-fb762431d56b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.423 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e5586408-ef27-4f5a-b2b8-97eb4cc5cd71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.4249] manager: (tap858f2b6f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.454 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[a695ae2a-939b-470f-8c5b-3fb25d88d31c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.457 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[e77f2b6b-5425-40f1-b1de-1c72254a4deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.4779] device (tap858f2b6f-80): carrier: link connected
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.487 233568 DEBUG oslo.privsep.daemon [-] privsep: reply[d65b6154-b02e-43d7-9504-44df415e8dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.504 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[68acdd18-4827-4350-8256-4091917b3bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976303, 'reachable_time': 15731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320240, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.519 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[db1e94a4-6765-4cdb-bedd-876350ce1778]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:29ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 976303, 'tstamp': 976303}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320241, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.536 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[91b6aee5-a403-4641-b8d8-d22680cd0b1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap858f2b6f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:29:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976303, 'reachable_time': 15731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320242, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.566 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[64e86ea8-dd81-41e0-a449-780d5f4a8b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.615 2 DEBUG nova.compute.manager [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG oslo_concurrency.lockutils [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.616 2 DEBUG nova.compute.manager [req-305f21d6-bc51-4cbc-b1d3-6ac6f7dea937 req-effded93-e31e-47fb-b0f5-93697ca83752 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Processing event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.623 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[bafc3bb8-d1e1-40ef-ad22-6710af6d3dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.624 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.625 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.625 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap858f2b6f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:39 compute-1 NetworkManager[44960]: <info>  [1759411899.6277] manager: (tap858f2b6f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Oct 02 13:31:39 compute-1 kernel: tap858f2b6f-80: entered promiscuous mode
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.636 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap858f2b6f-80, col_values=(('external_ids', {'iface-id': 'cd468d5a-0c73-498a-8776-3dc2ab63d9cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 ovn_controller[129257]: 2025-10-02T13:31:39Z|00900|binding|INFO|Releasing lport cd468d5a-0c73-498a-8776-3dc2ab63d9cf from this chassis (sb_readonly=0)
Oct 02 13:31:39 compute-1 nova_compute[230518]: 2025-10-02 13:31:39.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.652 138374 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.653 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc40855-54bf-4c76-b21f-375f99ace0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.654 138374 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: global
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     log         /dev/log local0 debug
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     log-tag     haproxy-metadata-proxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     user        root
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     group       root
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     maxconn     1024
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     pidfile     /var/lib/neutron/external/pids/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.pid.haproxy
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     daemon
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: defaults
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     log global
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     mode http
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     option httplog
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     option dontlognull
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     option http-server-close
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     option forwardfor
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     retries                 3
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     timeout http-request    30s
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     timeout connect         30s
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     timeout client          32s
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     timeout server          32s
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     timeout http-keep-alive 30s
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: listen listener
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     bind 169.254.169.254:80
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     server metadata /var/lib/neutron/metadata_proxy
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:     http-request add-header X-OVN-Network-ID 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 02 13:31:39 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:31:39.654 138374 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'env', 'PROCESS_TAG=haproxy-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/858f2b6f-8fe4-471b-981e-5d0b08d2f4c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 02 13:31:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:40 compute-1 podman[320317]: 2025-10-02 13:31:40.064163687 +0000 UTC m=+0.083754158 container create 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:31:40 compute-1 podman[320317]: 2025-10-02 13:31:40.005063263 +0000 UTC m=+0.024653764 image pull df4949fbbe269ec91c503c0c2a01f0407aa671cfac804c078bc791d1efed5574 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Oct 02 13:31:40 compute-1 systemd[1]: Started libpod-conmon-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope.
Oct 02 13:31:40 compute-1 systemd[1]: Started libcrun container.
Oct 02 13:31:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca3a05d9c7d9e930b8b6730f8fdbb361ef758594c9818368b42030a951cdf9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 02 13:31:40 compute-1 podman[320317]: 2025-10-02 13:31:40.155761209 +0000 UTC m=+0.175351710 container init 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:31:40 compute-1 podman[320317]: 2025-10-02 13:31:40.161715067 +0000 UTC m=+0.181305538 container start 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3)
Oct 02 13:31:40 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : New worker (320338) forked
Oct 02 13:31:40 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : Loading success.
Oct 02 13:31:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.294 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.29387, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.295 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Started (Lifecycle Event)
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.297 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.300 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.304 2 INFO nova.virt.libvirt.driver [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance spawned successfully.
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.304 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.317 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.319 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.327 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.327 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.328 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.328 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.329 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.329 2 DEBUG nova.virt.libvirt.driver [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.340 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.340 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.2949874, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.340 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Paused (Lifecycle Event)
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.364 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.368 2 DEBUG nova.virt.driver [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] Emitting event <LifecycleEvent: 1759411900.299782, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.368 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Resumed (Lifecycle Event)
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.390 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.394 2 DEBUG nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.406 2 INFO nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 4.39 seconds to spawn the instance on the hypervisor.
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.407 2 DEBUG nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.416 2 INFO nova.compute.manager [None req-c4e49ce3-a7ad-48b9-84ea-c89aef142b3c - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.466 2 INFO nova.compute.manager [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 6.37 seconds to build instance.
Oct 02 13:31:40 compute-1 nova_compute[230518]: 2025-10-02 13:31:40.486 2 DEBUG oslo_concurrency.lockutils [None req-1fa95719-d11c-46bf-8c87-0bca335d90f4 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:40 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1750166120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:41 compute-1 ceph-mon[80926]: pgmap v3636: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 17 KiB/s wr, 34 op/s
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.738 2 DEBUG nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.738 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG oslo_concurrency.lockutils [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.739 2 DEBUG nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:31:41 compute-1 nova_compute[230518]: 2025-10-02 13:31:41.740 2 WARNING nova.compute.manager [req-19f8b0e3-2ff5-406b-9720-5a3404df7d4c req-4937d782-58a0-4f4d-8fc6-22b3a140c116 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state active and task_state None.
Oct 02 13:31:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:42.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:42 compute-1 nova_compute[230518]: 2025-10-02 13:31:42.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:43 compute-1 podman[320348]: 2025-10-02 13:31:43.810629092 +0000 UTC m=+0.057360501 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:31:43 compute-1 podman[320349]: 2025-10-02 13:31:43.843533704 +0000 UTC m=+0.089387195 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG nova.compute.manager [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG nova.compute.manager [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.864 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:31:43 compute-1 nova_compute[230518]: 2025-10-02 13:31:43.865 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:31:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:43 compute-1 ceph-mon[80926]: pgmap v3637: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 22 KiB/s wr, 41 op/s
Oct 02 13:31:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:44.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:44 compute-1 nova_compute[230518]: 2025-10-02 13:31:44.850 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:31:44 compute-1 nova_compute[230518]: 2025-10-02 13:31:44.851 2 DEBUG nova.network.neutron [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:31:44 compute-1 nova_compute[230518]: 2025-10-02 13:31:44.871 2 DEBUG oslo_concurrency.lockutils [req-63d4ca32-9e8a-4080-bc0a-99cdf0c5fd0d req-9a91c460-9d7e-46a2-ac08-7fac23fc0f57 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:31:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3774053728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.083 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.084 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.085 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.086 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:45 compute-1 ceph-mgr[81282]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3443433125
Oct 02 13:31:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/349842219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.528 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.819 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.820 2 DEBUG nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] skipping disk for instance-000000e0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 02 13:31:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:31:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:45.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.965 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.966 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4046MB free_disk=20.987987518310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.966 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:31:45 compute-1 nova_compute[230518]: 2025-10-02 13:31:45.967 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:31:46 compute-1 ceph-mon[80926]: pgmap v3638: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 385 KiB/s rd, 16 KiB/s wr, 28 op/s
Oct 02 13:31:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/349842219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3351703209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.038 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.039 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.039 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.107 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.120 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.121 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.133 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.154 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.199 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:31:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:46.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:31:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3626880089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.647 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.652 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.675 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.696 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:31:46 compute-1 nova_compute[230518]: 2025-10-02 13:31:46.696 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:31:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3626880089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:31:47 compute-1 nova_compute[230518]: 2025-10-02 13:31:47.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:48 compute-1 ceph-mon[80926]: pgmap v3639: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Oct 02 13:31:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:48 compute-1 nova_compute[230518]: 2025-10-02 13:31:48.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:49 compute-1 nova_compute[230518]: 2025-10-02 13:31:49.696 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:49 compute-1 nova_compute[230518]: 2025-10-02 13:31:49.697 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:31:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:50 compute-1 ceph-mon[80926]: pgmap v3640: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:51 compute-1 nova_compute[230518]: 2025-10-02 13:31:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:51.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:52 compute-1 ceph-mon[80926]: pgmap v3641: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 02 13:31:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:52 compute-1 nova_compute[230518]: 2025-10-02 13:31:52.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:52 compute-1 ovn_controller[129257]: 2025-10-02T13:31:52Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:2e:41 10.100.0.8
Oct 02 13:31:52 compute-1 ovn_controller[129257]: 2025-10-02T13:31:52Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:2e:41 10.100.0.8
Oct 02 13:31:53 compute-1 ceph-mon[80926]: pgmap v3642: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct 02 13:31:53 compute-1 nova_compute[230518]: 2025-10-02 13:31:53.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:53.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:54 compute-1 nova_compute[230518]: 2025-10-02 13:31:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:55 compute-1 ceph-mon[80926]: pgmap v3643: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 211 KiB/s wr, 70 op/s
Oct 02 13:31:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:57 compute-1 nova_compute[230518]: 2025-10-02 13:31:57.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:57 compute-1 nova_compute[230518]: 2025-10-02 13:31:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:31:57 compute-1 nova_compute[230518]: 2025-10-02 13:31:57.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:57.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:31:58 compute-1 ceph-mon[80926]: pgmap v3644: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 499 KiB/s wr, 105 op/s
Oct 02 13:31:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:31:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:31:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:31:58 compute-1 nova_compute[230518]: 2025-10-02 13:31:58.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:31:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:31:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:31:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:31:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:31:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:00 compute-1 nova_compute[230518]: 2025-10-02 13:32:00.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:00 compute-1 ceph-mon[80926]: pgmap v3645: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 54 op/s
Oct 02 13:32:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:01.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:32:02 compute-1 ceph-mon[80926]: pgmap v3646: 305 pgs: 305 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 56 op/s
Oct 02 13:32:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:02.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.436 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.437 2 DEBUG nova.objects.instance [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:32:02 compute-1 nova_compute[230518]: 2025-10-02 13:32:02.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:02 compute-1 podman[320433]: 2025-10-02 13:32:02.813134901 +0000 UTC m=+0.057680550 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:32:02 compute-1 podman[320432]: 2025-10-02 13:32:02.839612702 +0000 UTC m=+0.087749824 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 02 13:32:03 compute-1 ceph-mon[80926]: pgmap v3647: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:03 compute-1 nova_compute[230518]: 2025-10-02 13:32:03.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:04.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:05 compute-1 nova_compute[230518]: 2025-10-02 13:32:05.157 2 DEBUG nova.network.neutron [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:05 compute-1 nova_compute[230518]: 2025-10-02 13:32:05.206 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:32:05 compute-1 nova_compute[230518]: 2025-10-02 13:32:05.206 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 02 13:32:05 compute-1 nova_compute[230518]: 2025-10-02 13:32:05.207 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:05 compute-1 nova_compute[230518]: 2025-10-02 13:32:05.207 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:32:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:32:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:32:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:05.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:06 compute-1 ceph-mon[80926]: pgmap v3648: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 580 KiB/s wr, 57 op/s
Oct 02 13:32:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3625891900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:07 compute-1 nova_compute[230518]: 2025-10-02 13:32:07.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:07.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:08 compute-1 ceph-mon[80926]: pgmap v3649: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 371 KiB/s wr, 46 op/s
Oct 02 13:32:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:08.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:08 compute-1 nova_compute[230518]: 2025-10-02 13:32:08.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:09 compute-1 nova_compute[230518]: 2025-10-02 13:32:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:10 compute-1 ceph-mon[80926]: pgmap v3650: 305 pgs: 305 active+clean; 220 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 83 KiB/s wr, 3 op/s
Oct 02 13:32:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:11 compute-1 ceph-mon[80926]: pgmap v3651: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 125 KiB/s wr, 6 op/s
Oct 02 13:32:11 compute-1 sudo[320477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:11 compute-1 sudo[320477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-1 sudo[320477]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-1 sudo[320502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:32:11 compute-1 sudo[320502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-1 sudo[320502]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-1 sudo[320527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:11 compute-1 sudo[320527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-1 sudo[320527]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:11 compute-1 sudo[320552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:32:11 compute-1 sudo[320552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:11.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:12 compute-1 sudo[320552]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:12 compute-1 nova_compute[230518]: 2025-10-02 13:32:12.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:12 compute-1 ovn_controller[129257]: 2025-10-02T13:32:12Z|00901|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:32:13 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:32:13 compute-1 nova_compute[230518]: 2025-10-02 13:32:13.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:13.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:14 compute-1 ceph-mon[80926]: pgmap v3652: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 119 KiB/s wr, 5 op/s
Oct 02 13:32:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:14 compute-1 podman[320608]: 2025-10-02 13:32:14.830127859 +0000 UTC m=+0.063886056 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 13:32:14 compute-1 podman[320609]: 2025-10-02 13:32:14.830247832 +0000 UTC m=+0.063205133 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:15.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:16 compute-1 ceph-mon[80926]: pgmap v3653: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 49 KiB/s wr, 4 op/s
Oct 02 13:32:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:17 compute-1 nova_compute[230518]: 2025-10-02 13:32:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:17.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:18 compute-1 ceph-mon[80926]: pgmap v3654: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 49 KiB/s wr, 8 op/s
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.107 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.108 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.306 2 DEBUG nova.compute.manager [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.307 2 DEBUG nova.compute.manager [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing instance network info cache due to event network-changed-bdd118f1-3b0d-4709-847a-90adbb7b95f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.307 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 02 13:32:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.308 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquired lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 02 13:32:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.308 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Refreshing network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.373 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.374 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.374 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.375 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Terminating instance
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.376 2 DEBUG nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 02 13:32:18 compute-1 kernel: tapbdd118f1-3b (unregistering): left promiscuous mode
Oct 02 13:32:18 compute-1 NetworkManager[44960]: <info>  [1759411938.4451] device (tapbdd118f1-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00902|binding|INFO|Releasing lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 from this chassis (sb_readonly=0)
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00903|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 down in Southbound
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00904|binding|INFO|Removing iface tapbdd118f1-3b ovn-installed in OVS
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.461 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.463 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.464 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.466 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[e796ecfe-db22-403d-b57b-de3b71681ffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.466 138374 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 namespace which is not needed anymore
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000e0.scope: Deactivated successfully.
Oct 02 13:32:18 compute-1 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000e0.scope: Consumed 14.845s CPU time.
Oct 02 13:32:18 compute-1 systemd-machined[188247]: Machine qemu-101-instance-000000e0 terminated.
Oct 02 13:32:18 compute-1 kernel: tapbdd118f1-3b: entered promiscuous mode
Oct 02 13:32:18 compute-1 NetworkManager[44960]: <info>  [1759411938.6023] manager: (tapbdd118f1-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Oct 02 13:32:18 compute-1 systemd-udevd[320655]: Network interface NamePolicy= disabled on kernel command line.
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00905|binding|INFO|Claiming lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 for this chassis.
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00906|binding|INFO|bdd118f1-3b0d-4709-847a-90adbb7b95f6: Claiming fa:16:3e:8a:2e:41 10.100.0.8
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.616 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:18 compute-1 kernel: tapbdd118f1-3b (unregistering): left promiscuous mode
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00907|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 ovn-installed in OVS
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00908|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 up in Southbound
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : haproxy version is 2.8.14-c23fe91
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [NOTICE]   (320336) : path to executable is /usr/sbin/haproxy
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : Exiting Master process...
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : Exiting Master process...
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [ALERT]    (320336) : Current worker (320338) exited with code 143 (Terminated)
Oct 02 13:32:18 compute-1 neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5[320332]: [WARNING]  (320336) : All workers exited. Exiting... (0)
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00909|binding|INFO|Releasing lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 from this chassis (sb_readonly=0)
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00910|binding|INFO|Setting lport bdd118f1-3b0d-4709-847a-90adbb7b95f6 down in Southbound
Oct 02 13:32:18 compute-1 ovn_controller[129257]: 2025-10-02T13:32:18Z|00911|binding|INFO|Removing iface tapbdd118f1-3b ovn-installed in OVS
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 systemd[1]: libpod-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope: Deactivated successfully.
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 podman[320672]: 2025-10-02 13:32:18.643623017 +0000 UTC m=+0.067279622 container died 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.645 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:2e:41 10.100.0.8'], port_security=['fa:16:3e:8a:2e:41 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '6226ed9a-8df2-43ad-b76c-e27e22f8199c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '18799a1c93354809911705bb424e673f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '76b0b52e-400a-4f72-824a-095cd74b612b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=910cabf2-c1de-4576-8ee2-c8f223a58a1c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>], logical_port=bdd118f1-3b0d-4709-847a-90adbb7b95f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f23cc4478b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.647 2 INFO nova.virt.libvirt.driver [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance destroyed successfully.
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.648 2 DEBUG nova.objects.instance [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lazy-loading 'resources' on Instance uuid 6226ed9a-8df2-43ad-b76c-e27e22f8199c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.665 2 DEBUG nova.virt.libvirt.vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-02T13:31:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1205762778',display_name='tempest-TestVolumeBootPattern-server-1205762778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1205762778',id=224,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXlZ173v52AK5bvxZSCswZD+xa0FluYk6PRSfhpRbnZm8bOdlvZU5KBRnl3O9hs6ON23ziU7Z/FpjnMU4tf7Jp1qDf229EeHe6BdU98WhCvbuPXicABUQh5j2lZgRmPLw==',key_name='tempest-TestVolumeBootPattern-1422258886',keypairs=<?>,launch_index=0,launched_at=2025-10-02T13:31:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='18799a1c93354809911705bb424e673f',ramdisk_id='',reservation_id='r-00z9qk6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1344814684',owner_user_name='tempest-TestVolumeBootPattern-1344814684-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-02T13:31:40Z,user_data=None,user_id='2cb47684d0b34c729e9611e7b3943bed',uuid=6226ed9a-8df2-43ad-b76c-e27e22f8199c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.665 2 DEBUG nova.network.os_vif_util [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converting VIF {"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.666 2 DEBUG nova.network.os_vif_util [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.666 2 DEBUG os_vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 23 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdd118f1-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.673 2 INFO os_vif [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:2e:41,bridge_name='br-int',has_traffic_filtering=True,id=bdd118f1-3b0d-4709-847a-90adbb7b95f6,network=Network(858f2b6f-8fe4-471b-981e-5d0b08d2f4c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdd118f1-3b')
Oct 02 13:32:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38-userdata-shm.mount: Deactivated successfully.
Oct 02 13:32:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-2ca3a05d9c7d9e930b8b6730f8fdbb361ef758594c9818368b42030a951cdf9e-merged.mount: Deactivated successfully.
Oct 02 13:32:18 compute-1 podman[320672]: 2025-10-02 13:32:18.692073167 +0000 UTC m=+0.115729762 container cleanup 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:32:18 compute-1 systemd[1]: libpod-conmon-87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38.scope: Deactivated successfully.
Oct 02 13:32:18 compute-1 podman[320717]: 2025-10-02 13:32:18.781437859 +0000 UTC m=+0.056450231 container remove 87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.790 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[588c42a1-476a-4ae1-993a-83dc859fc4e4]: (4, ('Thu Oct  2 01:32:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38)\n87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38\nThu Oct  2 01:32:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 (87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38)\n87757b8a0e34fe736a3c71060d17e023550b9658c7e7a944871e9b19c6258b38\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.794 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[323267e5-ac8c-4931-86a9-842042f9b1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.796 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap858f2b6f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 kernel: tap858f2b6f-80: left promiscuous mode
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.810 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[35a84186-1729-495a-bfb7-555094e0b441]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.848 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[324a0839-ee22-43a2-bb5d-36afe7614ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.850 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[537368a6-d8c8-4d4c-b6db-dd2091320a0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.872 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[90597b1d-584c-4c4e-833f-9fdfa323f0b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 976297, 'reachable_time': 19626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320734, 'error': None, 'target': 'ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d858f2b6f\x2d8fe4\x2d471b\x2d981e\x2d5d0b08d2f4c5.mount: Deactivated successfully.
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.880 138533 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.881 138533 DEBUG oslo.privsep.daemon [-] privsep: reply[f04c812d-5ea0-4e89-a71e-af4bc33f2e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.882 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.883 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.884 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[ac0c2fc5-2f25-4e68-8525-f7db97a5a5cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.885 138374 INFO neutron.agent.ovn.metadata.agent [-] Port bdd118f1-3b0d-4709-847a-90adbb7b95f6 in datapath 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5 unbound from our chassis
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.886 138374 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 858f2b6f-8fe4-471b-981e-5d0b08d2f4c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 02 13:32:18 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:18.887 233418 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9c0f28-33db-4936-9bb4-b6b51173cce3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.904 2 INFO nova.virt.libvirt.driver [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deleting instance files /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c_del
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.905 2 INFO nova.virt.libvirt.driver [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deletion of /var/lib/nova/instances/6226ed9a-8df2-43ad-b76c-e27e22f8199c_del complete
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.975 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 0.60 seconds to destroy the instance on the hypervisor.
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.975 2 DEBUG oslo.service.loopingcall [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.976 2 DEBUG nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 02 13:32:18 compute-1 nova_compute[230518]: 2025-10-02 13:32:18.976 2 DEBUG nova.network.neutron [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 02 13:32:19 compute-1 sudo[320736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:32:19 compute-1 sudo[320736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:19 compute-1 sudo[320736]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:19 compute-1 sudo[320761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:32:19 compute-1 sudo[320761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:32:19 compute-1 sudo[320761]: pam_unix(sudo:session): session closed for user root
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.324 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.325 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.325 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.327 2 DEBUG oslo_concurrency.lockutils [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.327 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:32:19 compute-1 nova_compute[230518]: 2025-10-02 13:32:19.328 2 DEBUG nova.compute.manager [req-6ee5a586-0f65-4154-9735-fc8e7a679ede req-504cca75-2da4-4e07-8438-e33fc3cbb8c6 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-unplugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 02 13:32:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:32:19 compute-1 ceph-mon[80926]: pgmap v3655: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 47 KiB/s wr, 7 op/s
Oct 02 13:32:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:19.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.338 2 DEBUG nova.network.neutron [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.356 2 INFO nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 1.38 seconds to deallocate network for instance.
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.576 2 INFO nova.compute.manager [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Took 0.22 seconds to detach 1 volumes for instance.
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.614 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.615 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:20 compute-1 nova_compute[230518]: 2025-10-02 13:32:20.656 2 DEBUG oslo_concurrency.processutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910490391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.097 2 DEBUG oslo_concurrency.processutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.106 2 DEBUG nova.compute.provider_tree [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.136 2 DEBUG nova.scheduler.client.report [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.172 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.193 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updated VIF entry in instance network info cache for port bdd118f1-3b0d-4709-847a-90adbb7b95f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.194 2 DEBUG nova.network.neutron [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Updating instance_info_cache with network_info: [{"id": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "address": "fa:16:3e:8a:2e:41", "network": {"id": "858f2b6f-8fe4-471b-981e-5d0b08d2f4c5", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1723354448-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "18799a1c93354809911705bb424e673f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdd118f1-3b", "ovs_interfaceid": "bdd118f1-3b0d-4709-847a-90adbb7b95f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.196 2 INFO nova.scheduler.client.report [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Deleted allocations for instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.212 2 DEBUG oslo_concurrency.lockutils [req-a3ee0081-78cd-4e5b-bccc-fdff12600619 req-5a984056-296a-4f24-8042-871b2bbc3048 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Releasing lock "refresh_cache-6226ed9a-8df2-43ad-b76c-e27e22f8199c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.257 2 DEBUG oslo_concurrency.lockutils [None req-41a0c15e-6a7c-41a1-8244-d2a48ecbade7 2cb47684d0b34c729e9611e7b3943bed 18799a1c93354809911705bb424e673f - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.395 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.396 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.397 2 WARNING nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state deleted and task_state None.
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Acquiring lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.397 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG oslo_concurrency.lockutils [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] Lock "6226ed9a-8df2-43ad-b76c-e27e22f8199c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] No waiting events found dispatching network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.398 2 WARNING nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received unexpected event network-vif-plugged-bdd118f1-3b0d-4709-847a-90adbb7b95f6 for instance with vm_state deleted and task_state None.
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.398 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Received event network-vif-deleted-bdd118f1-3b0d-4709-847a-90adbb7b95f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.398 2 INFO nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Neutron deleted interface bdd118f1-3b0d-4709-847a-90adbb7b95f6; detaching it from the instance and deleting it from the info cache
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.399 2 DEBUG nova.network.neutron [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct 02 13:32:21 compute-1 nova_compute[230518]: 2025-10-02 13:32:21.401 2 DEBUG nova.compute.manager [req-ef56f0ad-1837-487c-acd4-b0db7601531d req-37f63b6e-be9f-417a-bf0b-4dc2e129e6a2 6005b6c7177949febac5e5a925c2f87b 516d4d3bc591448c8a9ac3484e15b579 - - default default] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Detach interface failed, port_id=bdd118f1-3b0d-4709-847a-90adbb7b95f6, reason: Instance 6226ed9a-8df2-43ad-b76c-e27e22f8199c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 02 13:32:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:21.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:22 compute-1 ceph-mon[80926]: pgmap v3656: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 254 KiB/s rd, 47 KiB/s wr, 18 op/s
Oct 02 13:32:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3910490391' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:22.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:22 compute-1 nova_compute[230518]: 2025-10-02 13:32:22.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:23 compute-1 nova_compute[230518]: 2025-10-02 13:32:23.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.002999973s ======
Oct 02 13:32:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:23.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002999973s
Oct 02 13:32:24 compute-1 ceph-mon[80926]: pgmap v3657: 305 pgs: 305 active+clean; 223 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 6.0 KiB/s wr, 19 op/s
Oct 02 13:32:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2315824707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:24 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:24.110 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:32:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:24.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e429 e429: 3 total, 3 up, 3 in
Oct 02 13:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:32:25.992 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:26.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:26 compute-1 ceph-mon[80926]: pgmap v3658: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 1.6 KiB/s wr, 20 op/s
Oct 02 13:32:26 compute-1 ceph-mon[80926]: osdmap e429: 3 total, 3 up, 3 in
Oct 02 13:32:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:26.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:27 compute-1 ceph-mon[80926]: pgmap v3660: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:27 compute-1 nova_compute[230518]: 2025-10-02 13:32:27.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:28.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:28 compute-1 nova_compute[230518]: 2025-10-02 13:32:28.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:29 compute-1 ceph-mon[80926]: pgmap v3661: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Oct 02 13:32:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:31 compute-1 ceph-mon[80926]: pgmap v3662: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 44 op/s
Oct 02 13:32:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:32.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:32 compute-1 nova_compute[230518]: 2025-10-02 13:32:32.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 e430: 3 total, 3 up, 3 in
Oct 02 13:32:33 compute-1 nova_compute[230518]: 2025-10-02 13:32:33.646 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759411938.6447265, 6226ed9a-8df2-43ad-b76c-e27e22f8199c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 02 13:32:33 compute-1 nova_compute[230518]: 2025-10-02 13:32:33.647 2 INFO nova.compute.manager [-] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] VM Stopped (Lifecycle Event)
Oct 02 13:32:33 compute-1 nova_compute[230518]: 2025-10-02 13:32:33.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:33 compute-1 nova_compute[230518]: 2025-10-02 13:32:33.697 2 DEBUG nova.compute.manager [None req-5fab88c1-4008-4bfd-8fac-4a7152f847dd - - - - - -] [instance: 6226ed9a-8df2-43ad-b76c-e27e22f8199c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 02 13:32:33 compute-1 podman[320809]: 2025-10-02 13:32:33.803708978 +0000 UTC m=+0.048184822 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 02 13:32:33 compute-1 podman[320808]: 2025-10-02 13:32:33.859761605 +0000 UTC m=+0.105858109 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 02 13:32:34 compute-1 ceph-mon[80926]: pgmap v3663: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Oct 02 13:32:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/731007096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:34 compute-1 ceph-mon[80926]: osdmap e430: 3 total, 3 up, 3 in
Oct 02 13:32:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:36.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:36 compute-1 ceph-mon[80926]: pgmap v3665: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Oct 02 13:32:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:32:36 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2730303382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:32:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:36.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:37 compute-1 nova_compute[230518]: 2025-10-02 13:32:37.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:38.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:38 compute-1 ceph-mon[80926]: pgmap v3666: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:38.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:38 compute-1 nova_compute[230518]: 2025-10-02 13:32:38.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:39 compute-1 ceph-mon[80926]: pgmap v3667: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.4 KiB/s wr, 36 op/s
Oct 02 13:32:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:40.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:40.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:41 compute-1 nova_compute[230518]: 2025-10-02 13:32:41.066 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:41 compute-1 nova_compute[230518]: 2025-10-02 13:32:41.066 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:32:41 compute-1 nova_compute[230518]: 2025-10-02 13:32:41.082 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:32:42 compute-1 ceph-mon[80926]: pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1023 B/s wr, 33 op/s
Oct 02 13:32:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3679501352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/556147337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:42.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:42 compute-1 nova_compute[230518]: 2025-10-02 13:32:42.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:43 compute-1 nova_compute[230518]: 2025-10-02 13:32:43.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:44 compute-1 ceph-mon[80926]: pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 716 B/s wr, 21 op/s
Oct 02 13:32:44 compute-1 nova_compute[230518]: 2025-10-02 13:32:44.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-1 nova_compute[230518]: 2025-10-02 13:32:44.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2663759352' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:45 compute-1 nova_compute[230518]: 2025-10-02 13:32:45.068 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:45 compute-1 podman[320855]: 2025-10-02 13:32:45.798548603 +0000 UTC m=+0.053098705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct 02 13:32:45 compute-1 podman[320856]: 2025-10-02 13:32:45.800639439 +0000 UTC m=+0.053743335 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:32:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:46.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:46 compute-1 ceph-mon[80926]: pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 616 B/s wr, 18 op/s
Oct 02 13:32:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/863069546' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:32:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.094 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.095 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.095 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3673655604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.526 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.690 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.691 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4177MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.691 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.692 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.824 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.824 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:32:47 compute-1 nova_compute[230518]: 2025-10-02 13:32:47.874 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:32:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:48.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:48 compute-1 ceph-mon[80926]: pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 17 op/s
Oct 02 13:32:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3673655604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:32:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3453873877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.316 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.321 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:32:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.381 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.494 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.495 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:32:48 compute-1 nova_compute[230518]: 2025-10-02 13:32:48.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3453873877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:32:49 compute-1 nova_compute[230518]: 2025-10-02 13:32:49.495 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:49 compute-1 nova_compute[230518]: 2025-10-02 13:32:49.496 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:32:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:50.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:50 compute-1 ceph-mon[80926]: pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:52.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-1 ceph-mon[80926]: pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Oct 02 13:32:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:52 compute-1 nova_compute[230518]: 2025-10-02 13:32:52.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:53 compute-1 nova_compute[230518]: 2025-10-02 13:32:53.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:53 compute-1 ceph-mon[80926]: pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:53 compute-1 nova_compute[230518]: 2025-10-02 13:32:53.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:32:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:54.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:32:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:54.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:32:56 compute-1 ceph-mon[80926]: pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:56.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:56 compute-1 nova_compute[230518]: 2025-10-02 13:32:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:56.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:57 compute-1 nova_compute[230518]: 2025-10-02 13:32:57.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:57 compute-1 nova_compute[230518]: 2025-10-02 13:32:57.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:58 compute-1 ceph-mon[80926]: pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:32:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:32:58.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:58 compute-1 nova_compute[230518]: 2025-10-02 13:32:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:32:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:32:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:32:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:32:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:32:58 compute-1 nova_compute[230518]: 2025-10-02 13:32:58.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:32:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:00 compute-1 ceph-mon[80926]: pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:00.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:01 compute-1 nova_compute[230518]: 2025-10-02 13:33:01.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:02 compute-1 nova_compute[230518]: 2025-10-02 13:33:02.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:02.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:02 compute-1 ceph-mon[80926]: pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:02 compute-1 nova_compute[230518]: 2025-10-02 13:33:02.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:03 compute-1 ceph-mon[80926]: pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:03 compute-1 nova_compute[230518]: 2025-10-02 13:33:03.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:04 compute-1 nova_compute[230518]: 2025-10-02 13:33:04.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:04 compute-1 nova_compute[230518]: 2025-10-02 13:33:04.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:33:04 compute-1 nova_compute[230518]: 2025-10-02 13:33:04.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:33:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:04.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:04 compute-1 nova_compute[230518]: 2025-10-02 13:33:04.093 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:33:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:04.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:04 compute-1 podman[320941]: 2025-10-02 13:33:04.79802059 +0000 UTC m=+0.049619376 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:33:04 compute-1 podman[320940]: 2025-10-02 13:33:04.833258654 +0000 UTC m=+0.086696029 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 02 13:33:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:06 compute-1 ceph-mon[80926]: pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:33:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/487181521' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:33:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:06.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:07 compute-1 nova_compute[230518]: 2025-10-02 13:33:07.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:08 compute-1 ceph-mon[80926]: pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:08.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:08 compute-1 nova_compute[230518]: 2025-10-02 13:33:08.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:10 compute-1 ceph-mon[80926]: pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:11 compute-1 ceph-mon[80926]: pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:12.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:12 compute-1 nova_compute[230518]: 2025-10-02 13:33:12.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:13 compute-1 nova_compute[230518]: 2025-10-02 13:33:13.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:14.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:14 compute-1 ceph-mon[80926]: pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:16.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:16 compute-1 ceph-mon[80926]: pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:16 compute-1 podman[320984]: 2025-10-02 13:33:16.824187107 +0000 UTC m=+0.069103797 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:33:16 compute-1 podman[320983]: 2025-10-02 13:33:16.843236025 +0000 UTC m=+0.092191182 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:17 compute-1 nova_compute[230518]: 2025-10-02 13:33:17.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:18.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:18 compute-1 ceph-mon[80926]: pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:18.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:18 compute-1 nova_compute[230518]: 2025-10-02 13:33:18.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:19 compute-1 ceph-mon[80926]: pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:19 compute-1 sudo[321021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:19 compute-1 sudo[321021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-1 sudo[321021]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-1 sudo[321046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:33:19 compute-1 sudo[321046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-1 sudo[321046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-1 sudo[321071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:19 compute-1 sudo[321071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-1 sudo[321071]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:19 compute-1 sudo[321096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:33:19 compute-1 sudo[321096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:19 compute-1 ovn_controller[129257]: 2025-10-02T13:33:19Z|00912|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Oct 02 13:33:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:20.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:20 compute-1 sudo[321096]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:33:20 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:33:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:21 compute-1 ceph-mon[80926]: pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:22 compute-1 nova_compute[230518]: 2025-10-02 13:33:22.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:23 compute-1 nova_compute[230518]: 2025-10-02 13:33:23.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:24 compute-1 ceph-mon[80926]: pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:25 compute-1 ceph-mon[80926]: pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.994 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:33:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:26 compute-1 sudo[321151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:33:26 compute-1 sudo[321151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:26 compute-1 sudo[321151]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:26 compute-1 sudo[321176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:33:26 compute-1 sudo[321176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:33:26 compute-1 sudo[321176]: pam_unix(sudo:session): session closed for user root
Oct 02 13:33:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:27 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:33:27 compute-1 ceph-mon[80926]: pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:27 compute-1 nova_compute[230518]: 2025-10-02 13:33:27.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:28 compute-1 nova_compute[230518]: 2025-10-02 13:33:28.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:30.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:30 compute-1 ceph-mon[80926]: pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:30.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:31 compute-1 ceph-mon[80926]: pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:32.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:32 compute-1 nova_compute[230518]: 2025-10-02 13:33:32.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:33 compute-1 nova_compute[230518]: 2025-10-02 13:33:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:34 compute-1 ceph-mon[80926]: pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:34.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:35 compute-1 podman[321202]: 2025-10-02 13:33:35.806145498 +0000 UTC m=+0.044267149 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 02 13:33:35 compute-1 podman[321201]: 2025-10-02 13:33:35.835773717 +0000 UTC m=+0.084921244 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:33:36 compute-1 ceph-mon[80926]: pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:37 compute-1 nova_compute[230518]: 2025-10-02 13:33:37.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:38 compute-1 ceph-mon[80926]: pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:38 compute-1 nova_compute[230518]: 2025-10-02 13:33:38.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:39 compute-1 ceph-mon[80926]: pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:40.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:42.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:42 compute-1 ceph-mon[80926]: pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:42 compute-1 nova_compute[230518]: 2025-10-02 13:33:42.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:43 compute-1 ceph-mon[80926]: pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:43 compute-1 nova_compute[230518]: 2025-10-02 13:33:43.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2390697569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3466644390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:44.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:45 compute-1 nova_compute[230518]: 2025-10-02 13:33:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:45 compute-1 ceph-mon[80926]: pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:46.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3699282246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:47 compute-1 nova_compute[230518]: 2025-10-02 13:33:47.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:47 compute-1 podman[321245]: 2025-10-02 13:33:47.791066719 +0000 UTC m=+0.048083088 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:33:47 compute-1 podman[321246]: 2025-10-02 13:33:47.797054167 +0000 UTC m=+0.050878457 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:33:47 compute-1 ceph-mon[80926]: pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2823414242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:48 compute-1 nova_compute[230518]: 2025-10-02 13:33:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:48 compute-1 nova_compute[230518]: 2025-10-02 13:33:48.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:33:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:48 compute-1 nova_compute[230518]: 2025-10-02 13:33:48.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.088 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3420975852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.556 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.748 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4175MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:33:49 compute-1 nova_compute[230518]: 2025-10-02 13:33:49.750 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:33:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:50 compute-1 ceph-mon[80926]: pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3420975852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:50 compute-1 nova_compute[230518]: 2025-10-02 13:33:50.724 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:33:50 compute-1 nova_compute[230518]: 2025-10-02 13:33:50.725 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:33:50 compute-1 nova_compute[230518]: 2025-10-02 13:33:50.784 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:33:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:33:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/796460566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:51 compute-1 nova_compute[230518]: 2025-10-02 13:33:51.237 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:33:51 compute-1 nova_compute[230518]: 2025-10-02 13:33:51.243 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:33:51 compute-1 nova_compute[230518]: 2025-10-02 13:33:51.269 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:33:51 compute-1 nova_compute[230518]: 2025-10-02 13:33:51.271 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:33:51 compute-1 nova_compute[230518]: 2025-10-02 13:33:51.272 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:33:51 compute-1 ceph-mon[80926]: pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/796460566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:33:52 compute-1 nova_compute[230518]: 2025-10-02 13:33:52.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:53 compute-1 ceph-mon[80926]: pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:53 compute-1 nova_compute[230518]: 2025-10-02 13:33:53.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:54.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:33:56 compute-1 ceph-mon[80926]: pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:56 compute-1 nova_compute[230518]: 2025-10-02 13:33:56.272 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:56.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:57 compute-1 nova_compute[230518]: 2025-10-02 13:33:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:57 compute-1 nova_compute[230518]: 2025-10-02 13:33:57.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:58 compute-1 nova_compute[230518]: 2025-10-02 13:33:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:33:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:33:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:33:58 compute-1 ceph-mon[80926]: pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.334977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038335011, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1978, "num_deletes": 252, "total_data_size": 4780865, "memory_usage": 4836856, "flush_reason": "Manual Compaction"}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038344773, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1865443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86975, "largest_seqno": 88948, "table_properties": {"data_size": 1859405, "index_size": 3048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16082, "raw_average_key_size": 21, "raw_value_size": 1845919, "raw_average_value_size": 2422, "num_data_blocks": 136, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759411862, "oldest_key_time": 1759411862, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 9843 microseconds, and 4910 cpu microseconds.
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344819) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1865443 bytes OK
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.344836) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346122) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346134) EVENT_LOG_v1 {"time_micros": 1759412038346131, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.346148) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 4771927, prev total WAL file size 4771927, number of live WAL files 2.
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1821KB)], [180(12MB)]
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038347065, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 15061277, "oldest_snapshot_seqno": -1}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 10883 keys, 12449823 bytes, temperature: kUnknown
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038405154, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 12449823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12381988, "index_size": 39498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 286752, "raw_average_key_size": 26, "raw_value_size": 12194370, "raw_average_value_size": 1120, "num_data_blocks": 1496, "num_entries": 10883, "num_filter_entries": 10883, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.405445) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12449823 bytes
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.406561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 259.0 rd, 214.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.7) write-amplify(6.7) OK, records in: 11324, records dropped: 441 output_compression: NoCompression
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.406715) EVENT_LOG_v1 {"time_micros": 1759412038406699, "job": 116, "event": "compaction_finished", "compaction_time_micros": 58153, "compaction_time_cpu_micros": 27565, "output_level": 6, "num_output_files": 1, "total_output_size": 12449823, "num_input_records": 11324, "num_output_records": 10883, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038407289, "job": 116, "event": "table_file_deletion", "file_number": 182}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412038410119, "job": 116, "event": "table_file_deletion", "file_number": 180}
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.347006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:33:58.410201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:33:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:33:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:33:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:33:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:33:58 compute-1 nova_compute[230518]: 2025-10-02 13:33:58.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:33:59 compute-1 nova_compute[230518]: 2025-10-02 13:33:59.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:33:59 compute-1 ceph-mon[80926]: pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:33:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:01 compute-1 nova_compute[230518]: 2025-10-02 13:34:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:02 compute-1 sshd-session[321326]: Accepted publickey for zuul from 192.168.122.10 port 56036 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:34:02 compute-1 systemd-logind[795]: New session 62 of user zuul.
Oct 02 13:34:02 compute-1 systemd[1]: Started Session 62 of User zuul.
Oct 02 13:34:02 compute-1 sshd-session[321326]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:34:02 compute-1 ceph-mon[80926]: pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:02 compute-1 sudo[321330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:34:02 compute-1 sudo[321330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:34:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:34:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:34:02 compute-1 nova_compute[230518]: 2025-10-02 13:34:02.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:03 compute-1 nova_compute[230518]: 2025-10-02 13:34:03.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:04.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:04 compute-1 ceph-mon[80926]: pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:05 compute-1 ceph-mon[80926]: from='client.36975 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:05 compute-1 ceph-mon[80926]: pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:06 compute-1 nova_compute[230518]: 2025-10-02 13:34:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:06 compute-1 nova_compute[230518]: 2025-10-02 13:34:06.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:34:06 compute-1 nova_compute[230518]: 2025-10-02 13:34:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:34:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:06.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:06 compute-1 nova_compute[230518]: 2025-10-02 13:34:06.146 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:34:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:34:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4257618237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:34:06 compute-1 ceph-mon[80926]: from='client.36981 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2772241408' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:06 compute-1 ceph-mon[80926]: from='client.45911 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:06 compute-1 podman[321558]: 2025-10-02 13:34:06.866726665 +0000 UTC m=+0.104392603 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:34:06 compute-1 podman[321557]: 2025-10-02 13:34:06.886710322 +0000 UTC m=+0.124375730 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:34:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3064877731' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:07 compute-1 ceph-mon[80926]: from='client.45917 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-1 ceph-mon[80926]: from='client.46774 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:07 compute-1 ceph-mon[80926]: pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:07 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3064877731' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:07 compute-1 nova_compute[230518]: 2025-10-02 13:34:07.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:08.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:08 compute-1 ceph-mon[80926]: from='client.46780 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/630458637' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:08.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:08 compute-1 nova_compute[230518]: 2025-10-02 13:34:08.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:09 compute-1 ceph-mon[80926]: pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:10.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:10 compute-1 ovs-vsctl[321660]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:34:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:10.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:11 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:34:11 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:34:11 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:34:11 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:34:11 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:11 compute-1 lvm[321974]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:34:11 compute-1 lvm[321974]: VG ceph_vg0 finished
Oct 02 13:34:11 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:34:11 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-1 kernel: block loop3: the capability attribute has been deprecated.
Oct 02 13:34:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:12 compute-1 ceph-mon[80926]: pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2145955627' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:12.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-1 nova_compute[230518]: 2025-10-02 13:34:12.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:34:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/535649708' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:34:12 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:34:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1976143621' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.37002 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.37014 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.45932 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2219750363' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.45944 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/535649708' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2149323355' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.37038 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1976143621' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/999954966' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 13:34:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/42129484' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 nova_compute[230518]: 2025-10-02 13:34:13.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: ops {prefix=ops} (starting...)
Oct 02 13:34:13 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:34:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2805152507' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:34:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2149402501' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2900548446' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3986665879' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.45971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.46798 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/42129484' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.37059 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/25412291' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/491187411' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2805152507' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.46813 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2149402501' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2033142661' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1715324174' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2900548446' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:34:14 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:34:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:14 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: status {prefix=status} (starting...)
Oct 02 13:34:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1477980080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:34:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/731540033' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1353089157' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.37071 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.46004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1531075006' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2337664208' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.46019 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/140374564' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1477980080' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.46843 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/133173445' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1059518549' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/731540033' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1353089157' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/278414120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3004068556' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:34:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/966138210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2678827406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:34:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/888571476' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:16.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:34:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3713185548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.37110 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.46864 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/290070505' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/966138210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4036943251' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2678827406' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.46882 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2629866350' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.46058 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4183441616' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/888571476' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3869755304' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2540074245' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/213820232' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1298687505' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:34:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1460685059' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:34:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3363334927' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3713185548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.37146 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1298687505' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/977154715' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3242153421' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.37158 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.46106 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3477531783' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1460685059' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2460489288' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.37164 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/810063346' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3363334927' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2199224408' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:34:17 compute-1 nova_compute[230518]: 2025-10-02 13:34:17.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:34:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1208016185' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:13.841850+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ead1000/0x0/0x1bfc00000, data 0x51ebd58/0x53c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440983552 unmapped: 48168960 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:14.842018+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443801600 unmapped: 45350912 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943f784c00 session 0x5594377af4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594351d4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:15.842151+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441540608 unmapped: 47611904 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e52f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:16.842330+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19eb4e000/0x0/0x1bfc00000, data 0x5b0ed58/0x5ce8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:17.842443+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4738523 data_alloc: 234881024 data_used: 24154112
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:18.842630+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441556992 unmapped: 47595520 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:19.842774+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:20.842927+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f7f8000/0x0/0x1bfc00000, data 0x4e6cd58/0x5046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:21.843055+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440639488 unmapped: 48513024 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f7f8000/0x0/0x1bfc00000, data 0x4e6cd58/0x5046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.279331207s of 11.395094872s, submitted: 171
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559436e3a800 session 0x559435f43e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559436c4d2c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:22.843165+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440647680 unmapped: 48504832 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735927 data_alloc: 234881024 data_used: 24154112
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:23.843350+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440647680 unmapped: 48504832 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e57860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:24.843513+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x1a0900000/0x0/0x1bfc00000, data 0x3ae5ce6/0x3cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:25.843644+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:26.843795+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430768128 unmapped: 58384384 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:27.843907+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4504800 data_alloc: 234881024 data_used: 15077376
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:28.844023+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:29.844224+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430776320 unmapped: 58376192 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x1a0900000/0x0/0x1bfc00000, data 0x3ae5ce6/0x3cbd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559436f7fe00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594370f4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943f784c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943f784c00 session 0x559437cb0000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:30.844361+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430784512 unmapped: 58368000 heap: 489152512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x5594351d5860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559437e56f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559437c42960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594344f9c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594344f9c00 session 0x559437d09680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x5594352865a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:31.844480+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x559437bcb860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:32.844685+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f682000/0x0/0x1bfc00000, data 0x4fe2d58/0x51bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4726513 data_alloc: 251658240 data_used: 31571968
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:33.845778+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.135056496s of 11.742373466s, submitted: 75
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x559434c65e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:34.845988+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 442736640 unmapped: 54394880 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae63800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:35.846920+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae63800 session 0x5594370f4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x559434792c00 session 0x559437e57860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x5594350c8800 session 0x559435f43e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943725c400 session 0x559437e52f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 ms_handle_reset con 0x55943ae62c00 session 0x5594351d4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19f640000/0x0/0x1bfc00000, data 0x5024d58/0x51fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:36.847695+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:37.847887+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441778176 unmapped: 55353344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796273 data_alloc: 251658240 data_used: 31580160
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:38.848683+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:39.848933+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:40.849548+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:41.849828+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3ac00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:42.850488+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441786368 unmapped: 55345152 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820961 data_alloc: 251658240 data_used: 35008512
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 heartbeat osd_stat(store_statfs(0x19ecec000/0x0/0x1bfc00000, data 0x5978d58/0x5b52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:43.850688+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441737216 unmapped: 55394304 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.272820473s of 10.002835274s, submitted: 38
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559436e3ac00 session 0x559436ddef00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:44.851021+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:45.851351+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559434792c00 session 0x559436fa3c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a0084000/0x0/0x1bfc00000, data 0x45dfa05/0x47ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:46.851880+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:47.852124+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x5594350c8800 session 0x559437c57860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432644096 unmapped: 64487424 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4676383 data_alloc: 234881024 data_used: 22024192
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:48.852270+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x55943725c400 session 0x559437c56780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x55943ae62c00 session 0x559436f7f4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437502400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559437502400 session 0x559437cb03c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x559434792c00 session 0x559437cb01e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 ms_handle_reset con 0x5594350c8800 session 0x559434ce1e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432799744 unmapped: 64331776 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:49.852489+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943725c400 session 0x559435f60780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432807936 unmapped: 64323584 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x559435153a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:50.852795+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:51.852908+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:52.853070+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 432824320 unmapped: 64307200 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4701279 data_alloc: 234881024 data_used: 22065152
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:53.853185+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435830784 unmapped: 61300736 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:54.853477+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:55.853610+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:56.853785+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:57.853989+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4798679 data_alloc: 251658240 data_used: 34697216
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:58.854117+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:05:59.854432+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.026805878s of 16.320375443s, submitted: 68
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3dc00 session 0x559435f643c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:00.854661+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff28000/0x0/0x1bfc00000, data 0x4738577/0x4916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435d8d0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436ddf0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943c02b400 session 0x559436eaaf00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:01.854858+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:02.855052+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7ef00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4797472 data_alloc: 251658240 data_used: 34697216
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:03.855224+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 435863552 unmapped: 61267968 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:04.856105+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:05.856261+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:06.856388+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:07.856526+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4807712 data_alloc: 251658240 data_used: 36016128
Oct 02 13:34:17 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:08.856657+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:09.856773+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:10.856962+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196416855s of 10.219329834s, submitted: 6
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:11.857141+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:12.857342+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4808004 data_alloc: 251658240 data_used: 36020224
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19ff27000/0x0/0x1bfc00000, data 0x473859a/0x4917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:13.857480+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:14.857689+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436224000 unmapped: 60907520 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:15.857827+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fc43000/0x0/0x1bfc00000, data 0x4a1c59a/0x4bfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:16.857972+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:17.858097+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fb71000/0x0/0x1bfc00000, data 0x4aee59a/0x4ccd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835894 data_alloc: 251658240 data_used: 36028416
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:18.858235+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437182464 unmapped: 59949056 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:19.858383+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438566912 unmapped: 58564608 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:20.858500+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559436c4a5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843790054s of 10.033929825s, submitted: 64
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559436dde5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438616064 unmapped: 58515456 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:21.858641+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438657024 unmapped: 58474496 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:22.858798+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f8f0000/0x0/0x1bfc00000, data 0x4d6f59a/0x4f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439803904 unmapped: 57327616 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4866724 data_alloc: 251658240 data_used: 37351424
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:23.858944+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:24.859129+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:25.859380+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:26.859604+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:27.859807+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439861248 unmapped: 57270272 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f8c5000/0x0/0x1bfc00000, data 0x4d9a59a/0x4f79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4866856 data_alloc: 251658240 data_used: 37351424
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:28.860021+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441147392 unmapped: 55984128 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:29.860175+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441540608 unmapped: 55590912 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:30.860431+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.108606339s of 10.016971588s, submitted: 41
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:31.860620+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:32.860777+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4890798 data_alloc: 251658240 data_used: 37740544
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f60b000/0x0/0x1bfc00000, data 0x504c59a/0x522b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:33.860925+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559436f6b0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f6af00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441548800 unmapped: 55582720 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:34.861128+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438550528 unmapped: 58580992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:35.861344+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436eaaf00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438591488 unmapped: 58540032 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x5594370f4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559434c65e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437bcb860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:36.861513+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594352865a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559437d09680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559434c62960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x559436ddfe00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439107584 unmapped: 58023936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7f2c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436c51e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:37.861637+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439107584 unmapped: 58023936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435f650e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4796781 data_alloc: 234881024 data_used: 27525120
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:38.861758+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437438400 session 0x559435ddcd20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f9ca000/0x0/0x1bfc00000, data 0x4c98557/0x4e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:39.861979+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436eaa000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:40.862117+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:41.862320+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.470400810s of 10.955029488s, submitted: 96
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x559434c6f680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb5800 session 0x559436c50f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440107008 unmapped: 57024512 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f9ca000/0x0/0x1bfc00000, data 0x4c98557/0x4e74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:42.862414+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4542801 data_alloc: 234881024 data_used: 13787136
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:43.862638+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559436dde780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:44.862800+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 71843840 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:45.862937+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425320448 unmapped: 71811072 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:46.863074+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0eed000/0x0/0x1bfc00000, data 0x37774e5/0x3951000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [0,0,1,0,2])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425385984 unmapped: 71745536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:47.863193+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559434ce0780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 71532544 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540722 data_alloc: 234881024 data_used: 13676544
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:48.863340+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ec9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b3bf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:49.863501+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:50.863656+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 71516160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:51.863800+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425369600 unmapped: 71761920 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:52.864271+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4602746 data_alloc: 234881024 data_used: 22228992
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:53.864425+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.654204369s of 11.906198502s, submitted: 296
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:54.864666+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:55.864883+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:56.865036+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:57.865221+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4603274 data_alloc: 234881024 data_used: 22228992
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:58.865396+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:06:59.865587+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 71884800 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0ab9000/0x0/0x1bfc00000, data 0x379b4e5/0x3975000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:00.865730+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 71868416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:01.865887+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 71835648 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:02.866020+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 428965888 unmapped: 68165632 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4656348 data_alloc: 234881024 data_used: 23576576
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:03.866157+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.189975739s of 10.397528648s, submitted: 60
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:04.866352+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:05.866501+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:06.866676+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:07.866834+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429072384 unmapped: 68059136 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4663476 data_alloc: 234881024 data_used: 23478272
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:08.867043+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a040d000/0x0/0x1bfc00000, data 0x3e3f4e5/0x4019000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429006848 unmapped: 68124672 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:09.867212+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429146112 unmapped: 67985408 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:10.867398+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437c43e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594377af0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 66936832 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:11.867604+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435f661e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:12.867734+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474744 data_alloc: 234881024 data_used: 13676544
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:13.867914+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:14.868140+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:15.868274+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 70377472 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:16.868431+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426762240 unmapped: 70369280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:17.868622+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c63e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943725c400 session 0x559436f6a960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426762240 unmapped: 70369280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4474744 data_alloc: 234881024 data_used: 13676544
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:18.868745+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1349000/0x0/0x1bfc00000, data 0x2f0b4e5/0x30e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.693103790s of 14.924519539s, submitted: 82
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:19.868889+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559434c632c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:20.869363+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:21.869524+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:22.869675+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426778624 unmapped: 70352896 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4398502 data_alloc: 234881024 data_used: 11030528
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:23.869784+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:24.869979+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:25.870127+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:26.870357+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:27.870545+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x5594351d50e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426786816 unmapped: 70344704 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:28.870856+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4414822 data_alloc: 234881024 data_used: 16404480
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559437c57a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:29.871070+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1b06000/0x0/0x1bfc00000, data 0x274e4c2/0x2927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:30.871246+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.645080566s of 11.849708557s, submitted: 22
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435de03c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae62c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943ae62c00 session 0x5594377afa40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:31.871335+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:32.871478+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:33.871597+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4421903 data_alloc: 234881024 data_used: 16404480
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 426418176 unmapped: 70713344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437104f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:34.871745+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x5594351d4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x5594351d4f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb4000 session 0x5594370f52c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943beb5800 session 0x559435ddd680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427032576 unmapped: 70098944 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:35.871855+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a1044000/0x0/0x1bfc00000, data 0x32114c2/0x33ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x5594370f4b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429268992 unmapped: 67862528 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437c43e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:36.872043+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427106304 unmapped: 70025216 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a074c000/0x0/0x1bfc00000, data 0x3b094c2/0x3ce2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:37.872158+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427106304 unmapped: 70025216 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:38.872353+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4572484 data_alloc: 234881024 data_used: 16404480
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:39.872510+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:40.872695+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427114496 unmapped: 70017024 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:41.872827+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.230933189s of 10.956824303s, submitted: 150
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c6f680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 69844992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:42.872965+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 427286528 unmapped: 69844992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:43.873159+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579290 data_alloc: 234881024 data_used: 16379904
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:44.873384+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:45.873533+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:46.873798+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a0728000/0x0/0x1bfc00000, data 0x3b2d4c2/0x3d06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429473792 unmapped: 67657728 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:47.874071+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943bc35400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943bc35400 session 0x5594377af2c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 67649536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:48.874378+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4646650 data_alloc: 234881024 data_used: 25628672
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f7fe00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429481984 unmapped: 67649536 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:49.874614+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435de12c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 429490176 unmapped: 67641344 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559437d09c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:50.874880+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 431890432 unmapped: 65241088 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:51.875029+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436e3bc00 session 0x559437e563c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436bd3000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436bd3000 session 0x559437b0a1e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559437d081e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.774092674s of 10.054019928s, submitted: 30
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559434c62960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594350c8800 session 0x559434c754a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 431112192 unmapped: 66019328 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:52.875171+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a017e000/0x0/0x1bfc00000, data 0x40d54f5/0x42b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 434233344 unmapped: 62898176 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:53.875423+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4775360 data_alloc: 251658240 data_used: 34942976
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 434233344 unmapped: 62898176 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a017e000/0x0/0x1bfc00000, data 0x40d54f5/0x42b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:54.875624+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fa6f000/0x0/0x1bfc00000, data 0x47e44f5/0x49bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 436617216 unmapped: 60514304 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:55.875851+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19fa0d000/0x0/0x1bfc00000, data 0x48464f5/0x4a21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 59629568 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:56.876056+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437501952 unmapped: 59629568 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:57.876210+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437518336 unmapped: 59613184 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:58.876508+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4842696 data_alloc: 251658240 data_used: 35905536
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 437526528 unmapped: 59604992 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:07:59.876729+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594374ad800 session 0x559434ce0f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:00.877019+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f985000/0x0/0x1bfc00000, data 0x48ce4f5/0x4aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:01.877344+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d784400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943d784400 session 0x559434c8c780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 438575104 unmapped: 58556416 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:02.877499+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559434c8d4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.614707947s of 11.423568726s, submitted: 71
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440131584 unmapped: 56999936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:03.877632+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4888202 data_alloc: 251658240 data_used: 35979264
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440131584 unmapped: 56999936 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:04.877851+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 440139776 unmapped: 56991744 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:05.878068+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437e56b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439468032 unmapped: 57663488 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:06.878347+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19f49f000/0x0/0x1bfc00000, data 0x4db3505/0x4f8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436f95400 session 0x559436c4dc20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437503400 session 0x559437d090e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb1800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436cb1800 session 0x559437e52d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439468032 unmapped: 57663488 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x559436f6ab40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:07.878571+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 439476224 unmapped: 57655296 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:08.878688+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559437bca780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436f95400 session 0x559437cb0f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4947161 data_alloc: 251658240 data_used: 37863424
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559437503400 session 0x559437e57860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bec00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19efa1000/0x0/0x1bfc00000, data 0x52b1505/0x548d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x5594358bec00 session 0x559436eab0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434792c00 session 0x5594351d4960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443023360 unmapped: 54108160 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:09.878905+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559436c3e000 session 0x559435ddcd20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443072512 unmapped: 54059008 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x55943c02b400 session 0x5594370ed0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:10.879053+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 heartbeat osd_stat(store_statfs(0x19efa1000/0x0/0x1bfc00000, data 0x52b1505/0x548d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443080704 unmapped: 54050816 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:11.879226+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 ms_handle_reset con 0x559434fee000 session 0x559435914780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443088896 unmapped: 54042624 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:12.879496+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559437503400 session 0x559437bcb4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559436f95400 session 0x559435e4f0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.119692802s of 10.014904976s, submitted: 81
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443146240 unmapped: 53985280 heap: 497131520 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434792c00 session 0x559435ddcd20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559436c3e000 session 0x559437e57860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:13.879627+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113152 data_alloc: 251658240 data_used: 44658688
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 heartbeat osd_stat(store_statfs(0x19ef98000/0x0/0x1bfc00000, data 0x52b915e/0x5496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,7])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x55943c02b400 session 0x559437b0a780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462585856 unmapped: 42950656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:14.879773+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441835520 unmapped: 63700992 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:15.879925+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559437503400 session 0x559437cb0f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441843712 unmapped: 63692800 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:16.880180+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 ms_handle_reset con 0x559434792c00 session 0x559437c56d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441851904 unmapped: 63684608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:17.880310+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 361 ms_handle_reset con 0x559434fee000 session 0x559435f61a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 441876480 unmapped: 63660032 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:18.880495+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184360 data_alloc: 268435456 data_used: 48173056
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 443228160 unmapped: 62308352 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:19.880702+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 361 heartbeat osd_stat(store_statfs(0x19d2c5000/0x0/0x1bfc00000, data 0x6f89e2e/0x7169000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 handle_osd_map epochs [362,362], i have 362, src has [1,362]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444628992 unmapped: 60907520 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:20.880886+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x55943c02b400 session 0x559434c754a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444628992 unmapped: 60907520 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:21.881048+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x5594450edc00 session 0x559434c6f680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 heartbeat osd_stat(store_statfs(0x19d296000/0x0/0x1bfc00000, data 0x6fb5b05/0x7197000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444653568 unmapped: 60882944 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:22.881414+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x559434792c00 session 0x5594370f4b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 ms_handle_reset con 0x559434fee000 session 0x5594351d4f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.504015923s of 10.231104851s, submitted: 146
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445382656 unmapped: 60153856 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:23.881702+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5277136 data_alloc: 268435456 data_used: 50122752
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445390848 unmapped: 60145664 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:24.881978+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 heartbeat osd_stat(store_statfs(0x19d0ed000/0x0/0x1bfc00000, data 0x7159b05/0x733b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445456384 unmapped: 60080128 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:25.882174+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445489152 unmapped: 60047360 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:26.882314+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x55943c02b400 session 0x559436c4a3c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559436ddf4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x559437bca1e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 445915136 unmapped: 59621376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d09e000/0x0/0x1bfc00000, data 0x71aa644/0x738d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:27.882462+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x5594351d4960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19dc27000/0x0/0x1bfc00000, data 0x5f2d644/0x6110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444833792 unmapped: 60702720 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x5594352865a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:28.882576+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559435e4fa40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x55943c02b400 session 0x559436f7e5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559434ff12c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559436ec70e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5114633 data_alloc: 251658240 data_used: 32268288
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559437503400 session 0x559434c8d0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 444841984 unmapped: 60694528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:29.882750+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d341000/0x0/0x1bfc00000, data 0x6813644/0x69f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d341000/0x0/0x1bfc00000, data 0x6813644/0x69f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,0,0,1,2])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446636032 unmapped: 58900480 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:30.882870+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446242816 unmapped: 59293696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:31.882991+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559435152f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 57958400 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:32.883150+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437b0ba40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559435a4f800 session 0x559437e532c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.331444263s of 10.065950394s, submitted: 150
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 447578112 unmapped: 57958400 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:33.883300+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5149772 data_alloc: 251658240 data_used: 32313344
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594350c8800 session 0x559437e561e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559436ec7e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559437bcaf00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446119936 unmapped: 59416576 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:34.883451+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19d4cb000/0x0/0x1bfc00000, data 0x6d79654/0x6f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,0,5])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446136320 unmapped: 59400192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:35.883552+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559435142000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559435f421e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 446144512 unmapped: 59392000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:36.883682+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559437b0b860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594450edc00 session 0x559435286000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437d092c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594350c8800 session 0x559435142960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559437d08b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x5594374ad800 session 0x559437d08000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434792c00 session 0x559436eaaf00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:37.883835+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:38.884004+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198404 data_alloc: 268435456 data_used: 58335232
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:39.884154+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 ms_handle_reset con 0x559434fee000 session 0x559437e534a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:40.884361+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461185024 unmapped: 44351488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 heartbeat osd_stat(store_statfs(0x19da80000/0x0/0x1bfc00000, data 0x67cb644/0x69ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:41.884485+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461201408 unmapped: 44335104 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:42.884669+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461201408 unmapped: 44335104 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:43.884806+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458932224 unmapped: 46604288 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5189902 data_alloc: 268435456 data_used: 58331136
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.957287312s of 10.495686531s, submitted: 80
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 363 handle_osd_map epochs [364,364], i have 364, src has [1,364]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:44.885011+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455180288 unmapped: 50356224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x55943c02b400 session 0x559435f42000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:45.885140+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455180288 unmapped: 50356224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f69800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x559436f69800 session 0x559436f6af00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 ms_handle_reset con 0x559434792c00 session 0x559437d09e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:46.888793+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 455196672 unmapped: 50339840 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 heartbeat osd_stat(store_statfs(0x19f05f000/0x0/0x1bfc00000, data 0x51eb28f/0x53ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:47.889339+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457179136 unmapped: 48357376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 heartbeat osd_stat(store_statfs(0x19ebca000/0x0/0x1bfc00000, data 0x568128f/0x5864000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:48.889622+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457400320 unmapped: 48136192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5057969 data_alloc: 251658240 data_used: 41594880
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:49.889782+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457523200 unmapped: 48013312 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:50.890353+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457539584 unmapped: 47996928 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:51.890585+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb5000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:52.890706+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559435d8c000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:53.890933+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457547776 unmapped: 47988736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5068663 data_alloc: 251658240 data_used: 41820160
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb5000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:54.891104+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457629696 unmapped: 47906816 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.571898460s of 11.282312393s, submitted: 111
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559437b0b860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:55.891232+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457703424 unmapped: 47833088 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:56.891368+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457728000 unmapped: 47808512 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:57.891485+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb6000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ebb6000/0x0/0x1bfc00000, data 0x5693dce/0x5878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:58.891683+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5083571 data_alloc: 251658240 data_used: 43376640
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:08:59.891992+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:00.892362+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:01.892523+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457736192 unmapped: 47800320 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559436fa3c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eacc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437eacc00 session 0x559434c630e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559437c565a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559435143a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559437e53680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559435f65c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02a800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02a800 session 0x559436fa2d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559436f0e960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x5594351d4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:02.892752+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458571776 unmapped: 46964736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559435152d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943c02b400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943c02b400 session 0x559437104960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c9c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c9c00 session 0x5594351d4780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559437e57c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x559436f6ad20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:03.893054+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458645504 unmapped: 46891008 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559434c74960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594450edc00 session 0x559437d083c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5203783 data_alloc: 251658240 data_used: 44023808
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19de84000/0x0/0x1bfc00000, data 0x63c5dce/0x65aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [0,0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:04.893328+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x559436eab0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:05.893538+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:06.893744+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ad800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ad800 session 0x5594351530e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458104832 unmapped: 47431680 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e7ba000/0x0/0x1bfc00000, data 0x5a82dce/0x5c67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:07.893900+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 47415296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x559435ddc1e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.733769417s of 13.131544113s, submitted: 79
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:08.894147+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458121216 unmapped: 47415296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5074719 data_alloc: 234881024 data_used: 36212736
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434fee000 session 0x5594351434a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559437c423c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:09.894347+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594450edc00 session 0x559436c4d860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458481664 unmapped: 47054848 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:10.894506+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 458489856 unmapped: 47046656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:11.894809+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 457129984 unmapped: 48406528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79d000/0x0/0x1bfc00000, data 0x5aacdce/0x5c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:12.894993+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 460357632 unmapped: 45178880 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea1c00 session 0x559437d085a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437e56f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:13.895242+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137212 data_alloc: 251658240 data_used: 47284224
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:14.895441+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559435142b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:15.895641+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:16.895881+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461611008 unmapped: 43925504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436f94400 session 0x5594370f54a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bbc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:17.896048+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594372bbc00 session 0x559437cb03c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461635584 unmapped: 43900928 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79c000/0x0/0x1bfc00000, data 0x5aacdde/0x5c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x5594377ae3c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:18.896219+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 461783040 unmapped: 43753472 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e79c000/0x0/0x1bfc00000, data 0x5aacdde/0x5c92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437cb0000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5156263 data_alloc: 251658240 data_used: 47284224
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19e644000/0x0/0x1bfc00000, data 0x5c04dde/0x5dea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.920718193s of 11.131993294s, submitted: 45
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:19.896362+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462102528 unmapped: 43433984 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea1c00 session 0x559436f7f4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436f94400 session 0x5594377afc20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e9000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559434c8c780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e9000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436c4c780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559435d8d0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19dbd6000/0x0/0x1bfc00000, data 0x6671e07/0x6858000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1b7cf9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:20.896524+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 43425792 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:21.896695+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 462110720 unmapped: 43425792 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:22.896800+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469540864 unmapped: 35995648 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:23.897004+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470319104 unmapped: 35217408 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5365616 data_alloc: 251658240 data_used: 49094656
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437bcb860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:24.897155+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470827008 unmapped: 34709504 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bd3f000/0x0/0x1bfc00000, data 0x735fe40/0x7546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:25.897315+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 34586624 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:26.897432+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475594752 unmapped: 29941760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:27.897556+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478511104 unmapped: 27025408 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594385cd400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:28.897661+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435e4f0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 27648000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465331 data_alloc: 268435456 data_used: 59858944
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:29.897777+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477888512 unmapped: 27648000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bcfb000/0x0/0x1bfc00000, data 0x73ace40/0x7593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374ac000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.010444641s of 10.779306412s, submitted: 210
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374ac000 session 0x5594351d5860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:30.897903+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:31.898033+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:32.898197+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478715904 unmapped: 26820608 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19bcd6000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1c96f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:33.898363+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478756864 unmapped: 26779648 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #55. Immutable memtables: 11.
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5478811 data_alloc: 268435456 data_used: 61161472
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:34.898525+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480854016 unmapped: 24682496 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ab36000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:35.898666+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480886784 unmapped: 24649728 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:36.898985+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480894976 unmapped: 24641536 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19ab36000/0x0/0x1bfc00000, data 0x73d1e40/0x75b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:37.899111+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 20389888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:38.899221+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485269504 unmapped: 20267008 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5552239 data_alloc: 268435456 data_used: 62242816
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437e53c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559434c6f4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594385cd400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435153a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e9000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436c4a000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:39.899340+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594374acc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594374acc00 session 0x559434ff12c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486137856 unmapped: 19398656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436c5dc00 session 0x559437c42b40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559436ea0400 session 0x559437b0ba40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594385cd400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594385cd400 session 0x559435f654a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e9000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x55943b0e9000 session 0x559436ddf4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:40.899492+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 19316736 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.066761017s of 10.603158951s, submitted: 163
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:41.899621+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488226816 unmapped: 17309696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:42.899779+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488316928 unmapped: 17219584 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19985b000/0x0/0x1bfc00000, data 0x86aaeb2/0x8893000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:43.899908+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5652950 data_alloc: 268435456 data_used: 62857216
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:44.900059+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437438800 session 0x559437c56d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559434792c00 session 0x5594370f4780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x5594350c8800 session 0x559437e532c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:45.900218+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:46.900407+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:47.900593+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488325120 unmapped: 17211392 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19984e000/0x0/0x1bfc00000, data 0x86b7eb2/0x88a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:48.900776+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489111552 unmapped: 16424960 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 heartbeat osd_stat(store_statfs(0x19984e000/0x0/0x1bfc00000, data 0x86b7eb2/0x88a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5682278 data_alloc: 268435456 data_used: 67149824
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:49.900966+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492879872 unmapped: 12656640 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:50.901105+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492929024 unmapped: 12607488 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 ms_handle_reset con 0x559437438800 session 0x559435153860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594385cd400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.824018478s of 10.099150658s, submitted: 57
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 365 handle_osd_map epochs [366,366], i have 366, src has [1,366]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:51.901263+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x5594385cd400 session 0x559437b0b4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e9000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x55943b0e9000 session 0x559434c8c5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559434792c00 session 0x559437cb1680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x5594350c8800 session 0x559434c74d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:52.904158+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559436ea1400 session 0x559435f65c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559437438800 session 0x559434c6f680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x55943b0e8800 session 0x559434c8d4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:53.904314+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492986368 unmapped: 12550144 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5698584 data_alloc: 268435456 data_used: 68788224
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:54.904534+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 heartbeat osd_stat(store_statfs(0x199849000/0x0/0x1bfc00000, data 0x86bab0b/0x88a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494100480 unmapped: 11436032 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:55.904673+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:56.905009+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:57.905334+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494329856 unmapped: 11206656 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:58.905892+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494338048 unmapped: 11198464 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5710188 data_alloc: 268435456 data_used: 68947968
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 heartbeat osd_stat(store_statfs(0x199848000/0x0/0x1bfc00000, data 0x86bbb0b/0x88a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:09:59.906356+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494346240 unmapped: 11190272 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:00.906604+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495542272 unmapped: 9994240 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.477729797s of 10.070092201s, submitted: 19
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 ms_handle_reset con 0x559436ea1400 session 0x559434ff12c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:01.906831+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495665152 unmapped: 9871360 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:02.907000+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493993984 unmapped: 11542528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:03.907143+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559437438800 session 0x559436fa2960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x5594350c8800 session 0x5594377aed20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493789184 unmapped: 11747328 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5784008 data_alloc: 268435456 data_used: 69672960
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:04.907445+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f41000/0x0/0x1bfc00000, data 0x8fc17b8/0x91ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493862912 unmapped: 11673600 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:05.907592+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494133248 unmapped: 11403264 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f33000/0x0/0x1bfc00000, data 0x8fce7b8/0x91b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:06.907822+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:07.908009+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 heartbeat osd_stat(store_statfs(0x198f33000/0x0/0x1bfc00000, data 0x8fce7b8/0x91b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:08.908234+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494264320 unmapped: 11272192 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5801066 data_alloc: 268435456 data_used: 70463488
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:09.908439+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559436c5dc00 session 0x559437c43c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 ms_handle_reset con 0x559436ea0400 session 0x559437c572c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494272512 unmapped: 11264000 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:10.908569+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494092288 unmapped: 11444224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.887620449s of 10.231764793s, submitted: 150
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:11.908718+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494198784 unmapped: 11337728 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:12.909000+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494215168 unmapped: 11321344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x5594350c8800 session 0x559435f61a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:13.909126+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436ea0400 session 0x559436c4a5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 11173888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x199f62000/0x0/0x1bfc00000, data 0x7fa12a8/0x818c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625777 data_alloc: 268435456 data_used: 63156224
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:14.909343+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 11173888 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:15.909750+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494370816 unmapped: 11165696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x199f62000/0x0/0x1bfc00000, data 0x7fa12a8/0x818c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1db0f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:16.910056+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494370816 unmapped: 11165696 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x5594450edc00 session 0x559436f0ed20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559434fee000 session 0x559436ddf680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:17.910405+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483721216 unmapped: 21815296 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:18.910757+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559437438800 session 0x559436f7e5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5446649 data_alloc: 251658240 data_used: 53604352
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:19.910906+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:20.911134+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484777984 unmapped: 20758528 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.747402191s of 10.040046692s, submitted: 73
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:21.911348+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a904000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:22.911542+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:23.911677+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436ea1c00 session 0x5594370f45a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559436f94400 session 0x559435de05a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5450433 data_alloc: 251658240 data_used: 53600256
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:24.911824+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:25.911928+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:26.912127+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:27.912335+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:28.912481+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5460657 data_alloc: 268435456 data_used: 54726656
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:29.912626+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:30.912810+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:31.913004+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.029449463s of 11.190081596s, submitted: 33
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:32.913161+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 heartbeat osd_stat(store_statfs(0x19a902000/0x0/0x1bfc00000, data 0x71ef2a8/0x73da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,3])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484802560 unmapped: 20733952 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:33.913274+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [368,369], i have 369, src has [1,369]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594350c8800 session 0x559437cb1680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5465301 data_alloc: 268435456 data_used: 54730752
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:34.913567+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559436ea0400 session 0x559437cb0f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559434fee000 session 0x5594370f4d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594350c8800 session 0x559437e57c20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:35.913750+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fb000/0x0/0x1bfc00000, data 0x71f5f01/0x73e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:36.913953+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x559436ea0400 session 0x5594370f4f00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fb000/0x0/0x1bfc00000, data 0x71f5f01/0x73e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484810752 unmapped: 20725760 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:37.914264+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:38.914564+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5466581 data_alloc: 268435456 data_used: 54878208
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:39.914743+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:40.914933+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fa000/0x0/0x1bfc00000, data 0x71f6f01/0x73e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:41.915135+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:42.915384+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:43.915583+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 heartbeat osd_stat(store_statfs(0x19a8fa000/0x0/0x1bfc00000, data 0x71f6f01/0x73e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5466833 data_alloc: 268435456 data_used: 54878208
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:44.915777+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484818944 unmapped: 20717568 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:45.915921+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 ms_handle_reset con 0x5594450edc00 session 0x559434c63a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594385cd400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.163173676s of 13.797877312s, submitted: 11
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:46.916041+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:47.916202+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484827136 unmapped: 20709376 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 369 handle_osd_map epochs [370,370], i have 370, src has [1,370]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:48.916348+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594385cd400 session 0x559437c57680
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484876288 unmapped: 20660224 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5472195 data_alloc: 268435456 data_used: 55263232
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:49.916502+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:50.916787+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436c5dc00 session 0x559435f67a40
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:51.917022+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea1400 session 0x559435d8d0e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 heartbeat osd_stat(store_statfs(0x19a8f2000/0x0/0x1bfc00000, data 0x71fdbae/0x73eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:52.917159+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594350c8800 session 0x559436f6a1e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea0400 session 0x559437c57e00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450edc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594450edc00 session 0x559437e56d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x5594350c8800 session 0x559437b0b4a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484999168 unmapped: 20537344 heap: 505536512 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:53.917375+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 497999872 unmapped: 21774336 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5592888 data_alloc: 268435456 data_used: 55508992
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:54.917548+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436c5dc00 session 0x559437b0af00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea0400 session 0x5594351d5860
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 ms_handle_reset con 0x559436ea1400 session 0x559437d08d20
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea3c00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:55.917720+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.890111923s of 10.060988426s, submitted: 70
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559434fee000 session 0x559437b0b2c0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:56.918810+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559434c8c5a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559434fee000 session 0x5594352861e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:57.919039+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c9a000/0x0/0x1bfc00000, data 0x7e536da/0x8042000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:58.919241+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5570362 data_alloc: 268435456 data_used: 55422976
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:10:59.920056+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:00.920483+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559437e574a0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485048320 unmapped: 34725888 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:01.920617+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559435ddc780
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:02.920853+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:03.921240+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481050624 unmapped: 38723584 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:17 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5503962 data_alloc: 251658240 data_used: 52387840
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:04.921692+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481058816 unmapped: 38715392 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea0400 session 0x559437e561e0
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:05.921865+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481067008 unmapped: 38707200 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:06.922163+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:17 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.180594444s of 10.442781448s, submitted: 27
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:17 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 38699008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:07.922442+0000)
Oct 02 13:34:17 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:17 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559437c56d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 38699008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:08.922581+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559437d08780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 33546240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5569534 data_alloc: 268435456 data_used: 63524864
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:09.922749+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 33546240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:10.923020+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:11.923340+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559435e4fe00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ee000/0x0/0x1bfc00000, data 0x7a026ca/0x7bf0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:12.923530+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559436fa25a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:13.923763+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5572827 data_alloc: 268435456 data_used: 63512576
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:14.924039+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:15.924322+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x55944816f000 session 0x559434c75860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:16.924512+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x19a0ed000/0x0/0x1bfc00000, data 0x7a026da/0x7bf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.943758011s of 10.015493393s, submitted: 23
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 33505280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:17.924739+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 33439744 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:18.924997+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 31662080 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5597135 data_alloc: 268435456 data_used: 63549440
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:19.925183+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 31621120 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:20.925364+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199dd3000/0x0/0x1bfc00000, data 0x7d1d6ca/0x7f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488161280 unmapped: 31612928 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:21.925580+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x5594350c8800 session 0x559436c4d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199dd3000/0x0/0x1bfc00000, data 0x7d1d6ca/0x7f0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488161280 unmapped: 31612928 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:22.925721+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436c5dc00 session 0x559436eaaf00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559434c754a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea3c00 session 0x559434c8d0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488226816 unmapped: 31547392 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:23.925861+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488333312 unmapped: 31440896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:24.926056+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625910 data_alloc: 268435456 data_used: 64086016
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:25.926377+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:26.926541+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488423424 unmapped: 31350784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:27.926692+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:28.926813+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:29.927030+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5626230 data_alloc: 268435456 data_used: 64094208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:30.927175+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:31.927885+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:32.929155+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:33.929703+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:34.930133+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5626230 data_alloc: 268435456 data_used: 64094208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c92000/0x0/0x1bfc00000, data 0x7e5b6da/0x804a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488431616 unmapped: 31342592 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:35.930268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488439808 unmapped: 31334400 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943a64dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:36.930469+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.505456924s of 19.853292465s, submitted: 154
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x55943a64dc00 session 0x559435de1860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488448000 unmapped: 31326208 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:37.930652+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:38.930779+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:39.930921+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5625742 data_alloc: 268435456 data_used: 64143360
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x7e5b6fd/0x804b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:40.931087+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488579072 unmapped: 31195136 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:41.931244+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 ms_handle_reset con 0x559436ea1400 session 0x559436ddf4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 heartbeat osd_stat(store_statfs(0x199c93000/0x0/0x1bfc00000, data 0x7e5b6fd/0x804b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 371 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 372 handle_osd_map epochs [372,372], i have 372, src has [1,372]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488595456 unmapped: 31178752 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:42.931354+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 372 ms_handle_reset con 0x559436ea2400 session 0x559436f7fc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435152960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x55943cdd3c00 session 0x559436c4c5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea3c00 session 0x559435d8c000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435915c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea1400 session 0x559436f6b0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:43.931933+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:44.932468+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 heartbeat osd_stat(store_statfs(0x199918000/0x0/0x1bfc00000, data 0x81d1fcd/0x83c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5669033 data_alloc: 268435456 data_used: 64167936
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559437e57a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436c5dc00 session 0x559435142780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:45.932707+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488873984 unmapped: 30900224 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:46.933037+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.431745529s of 10.299204826s, submitted: 54
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea2400 session 0x5594370ecb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488882176 unmapped: 30892032 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:47.933322+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559435143c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488882176 unmapped: 30892032 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:48.933470+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559435975000 session 0x559435ddc1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 heartbeat osd_stat(store_statfs(0x199919000/0x0/0x1bfc00000, data 0x81d1faa/0x83c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488890368 unmapped: 30883840 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436c5dc00 session 0x559437c42960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:49.933625+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5666929 data_alloc: 268435456 data_used: 64163840
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea2400 session 0x559434ce1860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x559436ea1400 session 0x559436eab2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489193472 unmapped: 30580736 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:50.933826+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489209856 unmapped: 30564352 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 ms_handle_reset con 0x5594350c8800 session 0x559437b0a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:51.934152+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 374 ms_handle_reset con 0x559436ea2400 session 0x5594370f4b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 374 ms_handle_reset con 0x55943cdd3c00 session 0x5594359145a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489398272 unmapped: 30375936 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:52.934352+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594384a4c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489414656 unmapped: 30359552 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 374 heartbeat osd_stat(store_statfs(0x199a25000/0x0/0x1bfc00000, data 0x80c5c47/0x82b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 374 handle_osd_map epochs [375,375], i have 375, src has [1,375]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:53.934475+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 ms_handle_reset con 0x5594384a4c00 session 0x559437bca780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489447424 unmapped: 30326784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 heartbeat osd_stat(store_statfs(0x199a21000/0x0/0x1bfc00000, data 0x80c7910/0x82bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:54.934652+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5663257 data_alloc: 268435456 data_used: 64131072
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489447424 unmapped: 30326784 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:55.934807+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489472000 unmapped: 30302208 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:56.935025+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.482002735s of 10.422252655s, submitted: 203
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489512960 unmapped: 30261248 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:57.935244+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435133800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 489635840 unmapped: 30138368 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:58.935358+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 heartbeat osd_stat(store_statfs(0x199a22000/0x0/0x1bfc00000, data 0x80c88f2/0x82bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,7,4])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490102784 unmapped: 29671424 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:11:59.935501+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5682669 data_alloc: 268435456 data_used: 66539520
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490110976 unmapped: 29663232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:00.935676+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 29597696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 ms_handle_reset con 0x559435133800 session 0x559437b0a1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:01.935801+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 375 handle_osd_map epochs [376,376], i have 376, src has [1,376]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1e000/0x0/0x1bfc00000, data 0x80ca479/0x82bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,2,0,0,0,0,1,3])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490225664 unmapped: 29548544 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:02.935897+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490242048 unmapped: 29532160 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:03.935998+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490242048 unmapped: 29532160 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:04.936171+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5688781 data_alloc: 268435456 data_used: 66609152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437bcaf00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x559436ec70e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594384a4c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594384a4c00 session 0x559437cb0000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd3c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943cdd3c00 session 0x559434c754a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490250240 unmapped: 29523968 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:05.936351+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490258432 unmapped: 29515776 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:06.936521+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1d000/0x0/0x1bfc00000, data 0x80ca489/0x82c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,3,0,0,2])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490299392 unmapped: 29474816 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:07.936708+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.000000000s of 10.280552864s, submitted: 113
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490323968 unmapped: 29450240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:08.936845+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c5c800 session 0x559435f65860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490323968 unmapped: 29450240 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:09.937025+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5691198 data_alloc: 268435456 data_used: 66609152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199a1e000/0x0/0x1bfc00000, data 0x80ca489/0x82c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490332160 unmapped: 29442048 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:10.937194+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 496320512 unmapped: 23453696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:11.937374+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea1c00 session 0x5594370f4000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x5594370ede00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 496320512 unmapped: 23453696 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:12.937478+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 500097024 unmapped: 19677184 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:13.937603+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559435143c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x559437c565a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594384a4c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594384a4c00 session 0x559436fa3c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19893f000/0x0/0x1bfc00000, data 0x91a9489/0x939f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 495886336 unmapped: 23887872 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:14.937778+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5828710 data_alloc: 268435456 data_used: 66772992
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559434c62f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea1c00 session 0x559436f7f680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:15.938358+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea2400 session 0x5594351d50e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x559437d08960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:16.938548+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559435ddc780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437b0bc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199681000/0x0/0x1bfc00000, data 0x7e7f427/0x8074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491405312 unmapped: 28368896 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:17.938746+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.534853995s of 10.019935608s, submitted: 142
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594358bf400 session 0x559436f7fe00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491339776 unmapped: 28434432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:18.938942+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491339776 unmapped: 28434432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:19.939094+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5603559 data_alloc: 251658240 data_used: 54120448
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491208704 unmapped: 28565504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:20.939227+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943b0e8800 session 0x559437bcbe00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559436f6b4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491208704 unmapped: 28565504 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:21.939359+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:22.939510+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x199c5f000/0x0/0x1bfc00000, data 0x722c44a/0x7422000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,0,5])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:23.939651+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435975000 session 0x559436c4d860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c5dc00 session 0x559436f7f2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:24.939830+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5505739 data_alloc: 268435456 data_used: 55472128
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491151360 unmapped: 28622848 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:25.939952+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19a931000/0x0/0x1bfc00000, data 0x71b744a/0x73ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,1,1,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f94400 session 0x559434c752c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:26.940099+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:27.940250+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.935825348s of 10.140642166s, submitted: 62
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:28.940437+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:29.940609+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19b7b4000/0x0/0x1bfc00000, data 0x633444a/0x652a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5375410 data_alloc: 251658240 data_used: 52854784
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491159552 unmapped: 28614656 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:30.940784+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435975000 session 0x559434ff0b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559435ddc000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492339200 unmapped: 27435008 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:31.940934+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492363776 unmapped: 27410432 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:32.941075+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494755840 unmapped: 25018368 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:33.941173+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19b042000/0x0/0x1bfc00000, data 0x6a9e44a/0x6c94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,1,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:34.941322+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493895680 unmapped: 25878528 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5436136 data_alloc: 251658240 data_used: 53837824
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437cb05a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:35.941449+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493961216 unmapped: 25812992 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559435a4f800 session 0x559435de10e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559437503400 session 0x559437bcab40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436ea0400 session 0x559436ddf0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x559437d09e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350c8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:36.941713+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493158400 unmapped: 26615808 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:37.941899+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493182976 unmapped: 26591232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.901371241s of 10.082528114s, submitted: 220
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:38.942056+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493182976 unmapped: 26591232 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:39.942199+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478044160 unmapped: 41730048 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058678 data_alloc: 234881024 data_used: 32534528
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19cd0c000/0x0/0x1bfc00000, data 0x4ddd43a/0x4fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:40.942359+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x5594350c8800 session 0x559437d083c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559434c8c1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:41.942547+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:42.942759+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:43.942955+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:44.943325+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5058129 data_alloc: 234881024 data_used: 32534528
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19cd0d000/0x0/0x1bfc00000, data 0x4ddd42a/0x4fd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:45.943717+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478052352 unmapped: 41721856 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c3e000 session 0x559435f67c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:46.943908+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478060544 unmapped: 41713664 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436f95400 session 0x559437e56b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:47.944051+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478068736 unmapped: 41705472 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.358567715s of 10.010686874s, submitted: 74
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:48.944171+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:49.944376+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x5594351d50e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4976417 data_alloc: 234881024 data_used: 31948800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e42a/0x4572000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:50.944560+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:51.944775+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475004928 unmapped: 44769280 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e407/0x4571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:52.944977+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:53.945143+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:54.945349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4976417 data_alloc: 234881024 data_used: 31948800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:55.945524+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x55943beb5800 session 0x559435f43e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436e3bc00 session 0x559437cb1e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:56.945694+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475013120 unmapped: 44761088 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434792c00 session 0x559436dde5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19d76c000/0x0/0x1bfc00000, data 0x437e407/0x4571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:57.945863+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.540034294s of 10.271340370s, submitted: 46
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:58.946051+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:12:59.946248+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475029504 unmapped: 44744704 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4975096 data_alloc: 234881024 data_used: 31956992
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:00.946501+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 52682752 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19e34d000/0x0/0x1bfc00000, data 0x37a03d4/0x3991000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559436c3e000 session 0x559434ce0f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:01.946671+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 52658176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 ms_handle_reset con 0x559434fee000 session 0x559434c621e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 heartbeat osd_stat(store_statfs(0x19e350000/0x0/0x1bfc00000, data 0x37985d4/0x398a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:02.946854+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 52658176 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 377 ms_handle_reset con 0x559436f95400 session 0x559437c57e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:03.947055+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:04.947408+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4827321 data_alloc: 234881024 data_used: 22282240
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:05.947574+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 377 ms_handle_reset con 0x559434792c00 session 0x559437b0a5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 377 heartbeat osd_stat(store_statfs(0x19e34f000/0x0/0x1bfc00000, data 0x379a28f/0x398e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:06.947736+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467132416 unmapped: 52641792 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 377 handle_osd_map epochs [378,378], i have 378, src has [1,378]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559434fee000 session 0x559436f0e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:07.947937+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559436e3bc00 session 0x559435d8d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x559436c3e000 session 0x559434c65e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467083264 unmapped: 52690944 heap: 519774208 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.728199959s of 10.013930321s, submitted: 174
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 ms_handle_reset con 0x55943beb5800 session 0x559435143c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:08.948120+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476651520 unmapped: 51101696 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 heartbeat osd_stat(store_statfs(0x19d379000/0x0/0x1bfc00000, data 0x476df5a/0x4965000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 378 handle_osd_map epochs [379,379], i have 379, src has [1,379]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 379 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:09.948349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476651520 unmapped: 51101696 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5007198 data_alloc: 234881024 data_used: 30478336
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:10.948540+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559434c62960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d375000/0x0/0x1bfc00000, data 0x476fc07/0x4968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:11.948781+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559434c63c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559435f643c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559437d090e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:12.948971+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:13.949163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469049344 unmapped: 58703872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559434c654a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559437d085a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559435ddc780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:14.949406+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4990616 data_alloc: 234881024 data_used: 30486528
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:15.949609+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:16.949781+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:17.949978+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559436ddfc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559435de10e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559436c4d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559435915860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.562785149s of 10.035156250s, submitted: 37
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559435f67e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559436ec72c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436e3bc00 session 0x559436c510e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434792c00 session 0x559435e4f0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559434fee000 session 0x559435e4e3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:18.950157+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:19.950348+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469057536 unmapped: 58695680 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559435a4f800 session 0x559437b0ad20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992074 data_alloc: 234881024 data_used: 30486528
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 ms_handle_reset con 0x559436c3e000 session 0x559435e4fa40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 heartbeat osd_stat(store_statfs(0x19d372000/0x0/0x1bfc00000, data 0x47718de/0x496c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:20.950511+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470441984 unmapped: 57311232 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e3bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436e3bc00 session 0x559437e56f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:21.950670+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470441984 unmapped: 57311232 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559437bcbe00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559437cb05a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:22.950859+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559437b0a1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470466560 unmapped: 57286656 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559436c4d860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:23.951059+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 57139200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19d349000/0x0/0x1bfc00000, data 0x479743d/0x4995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435975000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435975000 session 0x559435ddd4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:24.951420+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470614016 unmapped: 57139200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4998146 data_alloc: 234881024 data_used: 30617600
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559437c42780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:25.974944+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559437b0a960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470032384 unmapped: 57720832 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559437b0a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559437e57680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436f94400 session 0x559434c62000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559436f7eb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:26.975124+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481607680 unmapped: 46145536 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559435a4f800 session 0x559436c4a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559435915e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c3e000 session 0x559436ec72c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559435915860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19cb98000/0x0/0x1bfc00000, data 0x4f47466/0x5146000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434792c00 session 0x559436c4d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:27.975391+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:28.975680+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 heartbeat osd_stat(store_statfs(0x19c18f000/0x0/0x1bfc00000, data 0x595049f/0x5b4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:29.975887+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435a4f800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472104960 unmapped: 55648256 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5203289 data_alloc: 251658240 data_used: 39280640
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 ms_handle_reset con 0x559436c5dc00 session 0x559437d085a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:30.976019+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.986348152s of 12.396906853s, submitted: 71
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x5594358bf400 session 0x559437d090e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472113152 unmapped: 55640064 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943b0e8800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x55943b0e8800 session 0x559435f643c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:31.976714+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 382 ms_handle_reset con 0x559434792c00 session 0x559434c63c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 382 handle_osd_map epochs [383,383], i have 383, src has [1,383]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472113152 unmapped: 55640064 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 383 ms_handle_reset con 0x559436c3e000 session 0x559436fa30e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:32.976914+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 383 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472121344 unmapped: 55631872 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x559436c5dc00 session 0x559437e56780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:33.977061+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470114304 unmapped: 57638912 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x55943beb5400 session 0x559437e574a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 384 ms_handle_reset con 0x559435a4f800 session 0x559435ddc780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:34.977253+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470245376 unmapped: 57507840 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 heartbeat osd_stat(store_statfs(0x19bc70000/0x0/0x1bfc00000, data 0x5e68a8f/0x606d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5335349 data_alloc: 251658240 data_used: 49635328
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:35.977471+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470245376 unmapped: 57507840 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559434792c00 session 0x559436f0e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:36.977632+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470269952 unmapped: 57483264 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:37.977756+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559436c5dc00 session 0x559437bca5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 57212928 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 ms_handle_reset con 0x559442cf2400 session 0x559436f0e780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 heartbeat osd_stat(store_statfs(0x19b10d000/0x0/0x1bfc00000, data 0x69c9776/0x6bd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:38.977939+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x55943beb5400 session 0x559437bca3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x5594372bb000 session 0x559434ce1860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471826432 unmapped: 55926784 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 386 ms_handle_reset con 0x559436c3e000 session 0x559437c57e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:39.978248+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 386 heartbeat osd_stat(store_statfs(0x19b108000/0x0/0x1bfc00000, data 0x69cb44d/0x6bd5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471826432 unmapped: 55926784 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5456103 data_alloc: 251658240 data_used: 52891648
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:40.978433+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471834624 unmapped: 55918592 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:41.978572+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471834624 unmapped: 55918592 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:42.978722+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471851008 unmapped: 55902208 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:43.978890+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19b104000/0x0/0x1bfc00000, data 0x69ccf8c/0x6bd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.126157761s of 13.135083199s, submitted: 81
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471048192 unmapped: 56705024 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:44.979105+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475226112 unmapped: 52527104 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5496307 data_alloc: 251658240 data_used: 52883456
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:45.979429+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475226112 unmapped: 52527104 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:46.979558+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:47.979697+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19ab9c000/0x0/0x1bfc00000, data 0x6f28f8c/0x7134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:48.979862+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476626944 unmapped: 51126272 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:49.980019+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476233728 unmapped: 51519488 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5504855 data_alloc: 251658240 data_used: 54112256
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:50.980240+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477290496 unmapped: 50462720 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19ab9b000/0x0/0x1bfc00000, data 0x6f35f8c/0x7141000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559437bcab40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:51.980463+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477298688 unmapped: 50454528 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:52.980640+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559434792c00 session 0x559434c75680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:53.980773+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:54.980918+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.762639046s of 11.009344101s, submitted: 103
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 480206848 unmapped: 47546368 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538075 data_alloc: 251658240 data_used: 54456320
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:55.981033+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477921280 unmapped: 49831936 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:56.981164+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 49823744 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:57.981305+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477929472 unmapped: 49823744 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559434c8d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb5400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x55943beb5400 session 0x559434c630e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:58.981471+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477937664 unmapped: 49815552 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:13:59.981683+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477945856 unmapped: 49807360 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5533515 data_alloc: 268435456 data_used: 55549952
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:00.981859+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481075200 unmapped: 46678016 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:01.982142+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 46055424 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:02.982362+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481697792 unmapped: 46055424 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:03.982541+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 46039040 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:04.982795+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481714176 unmapped: 46039040 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5596683 data_alloc: 268435456 data_used: 61296640
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:05.982988+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.516175270s of 10.890303612s, submitted: 4
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481812480 unmapped: 45940736 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a965000/0x0/0x1bfc00000, data 0x716df8c/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1df1f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:06.983206+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f0e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:07.983368+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559437b0b680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:08.983637+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:09.983845+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559437bcad20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559437503400 session 0x559436ec6780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5609160 data_alloc: 268435456 data_used: 64102400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:10.984097+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482598912 unmapped: 45154304 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:11.984301+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x19a553000/0x0/0x1bfc00000, data 0x716dffe/0x737b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482607104 unmapped: 45146112 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:12.984523+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 491315200 unmapped: 36438016 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:13.984763+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494116864 unmapped: 33636352 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:14.985060+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x1999f8000/0x0/0x1bfc00000, data 0x825cffe/0x7ed6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,16])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493355008 unmapped: 34398208 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5730330 data_alloc: 268435456 data_used: 63557632
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:15.985394+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.637902498s of 10.000240326s, submitted: 133
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493117440 unmapped: 34635776 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:16.985563+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493117440 unmapped: 34635776 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:17.985798+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493142016 unmapped: 34611200 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:18.985993+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493158400 unmapped: 34594816 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559442cf2400 session 0x559434c754a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:19.986164+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493166592 unmapped: 34586624 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5744035 data_alloc: 268435456 data_used: 64389120
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:20.986344+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 heartbeat osd_stat(store_statfs(0x199a6b000/0x0/0x1bfc00000, data 0x82f6fee/0x7e61000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,2])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 493273088 unmapped: 34480128 heap: 527753216 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:21.986523+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f7f4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494362624 unmapped: 36544512 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559437c56f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x5594372bb000 session 0x559436f0f2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559437503400 session 0x559435152780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559442cf2400 session 0x5594351d5860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:22.986713+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436c5dc00 session 0x559436f0f860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494379008 unmapped: 36528128 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 ms_handle_reset con 0x559436ea0400 session 0x559436f7f2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:23.986857+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494387200 unmapped: 36519936 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:24.987007+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494395392 unmapped: 36511744 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559437503400 session 0x559437e532c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5793908 data_alloc: 268435456 data_used: 64753664
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:25.987194+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.007493496s of 10.075882912s, submitted: 128
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 heartbeat osd_stat(store_statfs(0x19952b000/0x0/0x1bfc00000, data 0x8835c9b/0x83a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494403584 unmapped: 36503552 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x5594372bb000 session 0x559437b0af00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436bd2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436bd2400 session 0x5594370f45a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:26.987430+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436c5dc00 session 0x559435e4fa40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494419968 unmapped: 36487168 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559434792c00 session 0x559435de12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x5594372bb000 session 0x559436c4d2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559436c3e000 session 0x559436c4dc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559437503400 session 0x559437cb0d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:27.987590+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 ms_handle_reset con 0x559434792c00 session 0x559437e52960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494419968 unmapped: 36487168 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:28.987761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 389 ms_handle_reset con 0x559436ea0400 session 0x559435f65680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494428160 unmapped: 36478976 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:29.987935+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494747648 unmapped: 36159488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5643707 data_alloc: 268435456 data_used: 58109952
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:30.988152+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494755840 unmapped: 36151296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 389 handle_osd_map epochs [390,390], i have 390, src has [1,390]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:31.988461+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 390 heartbeat osd_stat(store_statfs(0x19a4f6000/0x0/0x1bfc00000, data 0x786a3fb/0x73d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e32f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494764032 unmapped: 36143104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:32.988685+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 390 ms_handle_reset con 0x559436ea1c00 session 0x559437d08780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 390 ms_handle_reset con 0x559436ea2400 session 0x5594351523c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594372bb000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494772224 unmapped: 36134912 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:33.988889+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d785c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 494788608 unmapped: 36118528 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:34.989165+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492118016 unmapped: 38789120 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5433933 data_alloc: 251658240 data_used: 44068864
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:35.989338+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x55944816f000 session 0x559437c56d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x559436f94800 session 0x559437cb0d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.643511295s of 10.026472092s, submitted: 117
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 392 ms_handle_reset con 0x55943d785c00 session 0x559435914780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 492126208 unmapped: 38780928 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559436ea0400 session 0x559436f7f2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559434792c00 session 0x559436fa3c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x5594372bb000 session 0x559437105860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:36.989505+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 heartbeat osd_stat(store_statfs(0x19c876000/0x0/0x1bfc00000, data 0x631f810/0x6096000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:37.989654+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:38.989818+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:39.989977+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 heartbeat osd_stat(store_statfs(0x19e69c000/0x0/0x1bfc00000, data 0x3e3878b/0x4043000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487759872 unmapped: 43147264 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5067049 data_alloc: 234881024 data_used: 32141312
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:40.990105+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 394 ms_handle_reset con 0x559436ea0400 session 0x559434c8d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487694336 unmapped: 43212800 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:41.990231+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488497152 unmapped: 42409984 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:42.990386+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 394 heartbeat osd_stat(store_statfs(0x19e567000/0x0/0x1bfc00000, data 0x41950d6/0x439f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:43.990533+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:44.990700+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488505344 unmapped: 42401792 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097737 data_alloc: 234881024 data_used: 30162944
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:45.990825+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.323350906s of 10.089933395s, submitted: 174
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e569000/0x0/0x1bfc00000, data 0x4198c15/0x43a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:46.990994+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:47.991159+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487841792 unmapped: 43065344 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:48.991247+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559436ddeb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 43057152 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c5dc00 session 0x559435d8d0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:49.991318+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 43057152 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5097699 data_alloc: 234881024 data_used: 30175232
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:50.991471+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff6000/0x0/0x1bfc00000, data 0x370cc15/0x3918000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483573760 unmapped: 47333376 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:51.991656+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483581952 unmapped: 47325184 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff6000/0x0/0x1bfc00000, data 0x370cbb3/0x3917000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:52.991846+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437c430e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:53.991993+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:54.992133+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4982284 data_alloc: 234881024 data_used: 24694784
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:55.992369+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:56.992527+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff7000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c4ab40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:57.992634+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483590144 unmapped: 47316992 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.068715096s of 12.702533722s, submitted: 69
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:58.992773+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436c4a000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c65e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.992867+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4979220 data_alloc: 234881024 data_used: 24694784
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.992996+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.993121+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff8000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,8])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.993268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.993488+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434ff0d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.993663+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.993782+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.993896+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.994042+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.994190+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s
                                           Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s
                                           Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.994344+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets getting new tickets!
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.994594+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _finish_auth 0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.996224+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.994769+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.994947+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.995089+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.995228+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559437e561e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559436f7e3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559437bcb680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559436ec6d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.995354+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.497853279s of 17.155778885s, submitted: 27
Oct 02 13:34:18 compute-1 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x5594350d6c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:34:18 compute-1 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: get_auth_request con 0x559436c3e000 auth_method 0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470188032 unmapped: 60719104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.995548+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3d800 session 0x559437c432c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 55402496 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943a64d800 session 0x559436dde960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450ed400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559437cb0780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.995684+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c50f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594370a1000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559436c51a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x5594352863c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c6e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.995807+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.995913+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858586 data_alloc: 218103808 data_used: 12795904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436ddfc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.996056+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea0400 session 0x559434c63a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.996189+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434c65680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470532096 unmapped: 64053248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.996360+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 64045056 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.996514+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.996665+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.996815+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.996966+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.997406+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.997514+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.997649+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.997790+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.997915+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.998104+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.998246+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.158052444s of 18.919492722s, submitted: 23
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.998420+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473210880 unmapped: 61374464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.998542+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955009 data_alloc: 234881024 data_used: 21942272
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473415680 unmapped: 61169664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.998736+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.998917+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e12d000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.999106+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.999323+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.999464+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959757 data_alloc: 234881024 data_used: 21839872
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.999643+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.999790+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437bcab40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.999925+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.000105+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.000346+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953597 data_alloc: 234881024 data_used: 21839872
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.000498+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.000629+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.000815+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559437c57e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437bcb860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.585352898s of 14.913706779s, submitted: 79
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.001037+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.001232+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559434c630e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.001381+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.001537+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.001700+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.001872+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.002022+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.002317+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.002491+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.002637+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.002751+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.002873+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.003086+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.003331+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.003490+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.003678+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.042637825s of 15.562180519s, submitted: 27
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468082688 unmapped: 66502656 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559435e4f0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.004001+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883222 data_alloc: 218103808 data_used: 12795904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437b0a960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351521e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467337216 unmapped: 67248128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.004178+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.004434+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.004611+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e333000/0x0/0x1bfc00000, data 0x3231ba3/0x343b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.004886+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.005031+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883106 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559435f61e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.005152+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467353600 unmapped: 67231744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.005355+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.005513+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.005721+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.005917+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.006444+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.006597+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.006818+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.006972+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.007136+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.007387+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.007518+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.007660+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.326089859s of 19.245832443s, submitted: 49
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.007825+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19dbdd000/0x0/0x1bfc00000, data 0x3980bc6/0x3b8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471351296 unmapped: 63234048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.008006+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5037977 data_alloc: 234881024 data_used: 24559616
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.008192+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.008470+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db4a000/0x0/0x1bfc00000, data 0x3a13bc6/0x3c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.008619+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.008767+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.008893+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039897 data_alloc: 234881024 data_used: 24702976
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.009044+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.009473+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.009729+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2e000/0x0/0x1bfc00000, data 0x3a35bc6/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d785c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x559437e56f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.009973+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.010135+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039685 data_alloc: 234881024 data_used: 24715264
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.010317+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.010443+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.010578+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435286d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.010716+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.604205132s of 15.311234474s, submitted: 113
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434ce12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437cb0d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.010845+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039749 data_alloc: 234881024 data_used: 24715264
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db28000/0x0/0x1bfc00000, data 0x3a3abd5/0x3c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.011107+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437e56b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.011230+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.011382+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.011547+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.011702+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809599 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd6000/0x0/0x1bfc00000, data 0x278db50/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.011967+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 67854336 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559437e565a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.012156+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67829760 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.012365+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d785c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x5594351423c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466812928 unmapped: 67772416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.012520+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.131870270s of 10.003231049s, submitted: 300
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466829312 unmapped: 67756032 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.014535+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809039 data_alloc: 218103808 data_used: 12795904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.016439+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.017456+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.018153+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.019652+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.020151+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4807462 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.021907+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.022258+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.022502+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.022710+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.209687233s of 10.373024940s, submitted: 38
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435f643c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.023122+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809244 data_alloc: 218103808 data_used: 12791808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd9000/0x0/0x1bfc00000, data 0x278db31/0x2995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.023415+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.023556+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x5594386f9400 session 0x559436ec6b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.023817+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x559436ea2400 session 0x559437e52f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 396 heartbeat osd_stat(store_statfs(0x19edd4000/0x0/0x1bfc00000, data 0x278f7ec/0x2999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434792c00 session 0x559437105e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.024256+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434fee000 session 0x559435152960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.024663+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816765 data_alloc: 218103808 data_used: 12808192
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.024982+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.025337+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.025509+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.025717+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.026044+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.111035347s of 10.823334694s, submitted: 35
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.026207+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.026492+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.026663+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.026869+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.027043+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.027269+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.027577+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.027820+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.028173+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.028351+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.028499+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.028713+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.028866+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.029116+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.029719+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.030231+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.030671+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.031026+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.684392929s of 18.692880630s, submitted: 12
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.031349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437b0ad20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.031590+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4824530 data_alloc: 218103808 data_used: 12816384
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.031801+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edcd000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.031991+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.032171+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.032349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.032526+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bcad20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435de12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4823650 data_alloc: 218103808 data_used: 12816384
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.032742+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.032901+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559437bca960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edce000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436fa2780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437d090e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559435d8d860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435142b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.033093+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.033201+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.033571+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559436c514a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851916 data_alloc: 218103808 data_used: 12816384
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.033836+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436c4d860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559435152d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.034027+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.884056091s of 13.213858604s, submitted: 20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bca780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 67518464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.034334+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 67510272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.034490+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.034620+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883747 data_alloc: 218103808 data_used: 16490496
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.034827+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.035003+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.035356+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.035586+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55943cdd2800 session 0x5594351423c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.035748+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885237 data_alloc: 218103808 data_used: 16494592
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.036012+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.036262+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991956711s of 10.035635948s, submitted: 8
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.036670+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.036969+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19ea20000/0x0/0x1bfc00000, data 0x2b3d85b/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559434fee000 session 0x559437c572c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.037161+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468312064 unmapped: 66273280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909236 data_alloc: 218103808 data_used: 16621568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.037385+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.037563+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.037699+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.037867+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.038044+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.038188+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.038340+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.038523+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.038761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.038949+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.039105+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.039352+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.039655+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.039829+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.039970+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.040121+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.320636749s of 19.659267426s, submitted: 22
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559436ea1c00 session 0x5594370ede00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.040357+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469295104 unmapped: 65290240 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x5594386f9400 session 0x559436c512c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d729000/0x0/0x1bfc00000, data 0x2c967f9/0x2ea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.040579+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.040793+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.040925+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4914910 data_alloc: 218103808 data_used: 16629760
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.041081+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.041211+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.041331+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436cb0c00 session 0x559436fa2000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559436c4a000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559434792c00 session 0x559434ce0780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.041487+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559434c63a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.041640+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917912 data_alloc: 218103808 data_used: 16633856
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.041876+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470343680 unmapped: 64241664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.042125+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.042353+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.042587+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.042727+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.043330+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.043543+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.044115+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.044302+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.044796+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.044938+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.045189+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.045460+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.045618+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.045898+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918844 data_alloc: 218103808 data_used: 16736256
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.046200+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.046457+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.046723+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.274785995s of 26.369352341s, submitted: 37
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.046903+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.047109+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920738 data_alloc: 218103808 data_used: 16736256
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.047321+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.047460+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.047806+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.048150+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.048449+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922338 data_alloc: 218103808 data_used: 17092608
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.048571+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.048881+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.049146+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.049481+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.049697+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922590 data_alloc: 218103808 data_used: 17092608
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.667269707s of 12.846582413s, submitted: 6
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.049840+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436cb0c00 session 0x559435142780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.050080+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x5594386f9400 session 0x559434c8c5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x559434c63c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.050226+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559434fee000 session 0x559435f65680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.050389+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x5594370ecf00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.050631+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843049 data_alloc: 218103808 data_used: 12832768
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.050795+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.051028+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.051191+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d814000/0x0/0x1bfc00000, data 0x2799fb2/0x29a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.051467+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.051629+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843097 data_alloc: 218103808 data_used: 12832768
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.051730+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.570983887s of 10.014533043s, submitted: 55
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 403 ms_handle_reset con 0x559434792c00 session 0x559436dde780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.051880+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.052026+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 403 heartbeat osd_stat(store_statfs(0x19d811000/0x0/0x1bfc00000, data 0x279bc5f/0x29ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 404 ms_handle_reset con 0x559436c5a800 session 0x559437e565a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.052230+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.052378+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.052621+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.052756+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.052904+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.053039+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.053181+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.053336+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559436c50f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c6e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.053490+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.053600+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.053748+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.053872+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4852403 data_alloc: 218103808 data_used: 12840960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.053998+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.054120+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559437b0b4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.054269+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.887578964s of 17.205017090s, submitted: 46
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb0c00 session 0x559435de1e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594351d4d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467877888 unmapped: 66707456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559437d08d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x5594377ae780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.054455+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.054591+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4888135 data_alloc: 218103808 data_used: 12840960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.054747+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x5594386f9400 session 0x559437d081e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.054896+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.055063+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.055218+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.055350+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.055464+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.055586+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.055734+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.796065331s of 10.041505814s, submitted: 15
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c8c5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.055880+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.056007+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.056260+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.056566+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.056736+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.056994+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.057186+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926114 data_alloc: 218103808 data_used: 17371136
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.057329+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472064000 unmapped: 62521344 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.057556+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19ceb5000/0x0/0x1bfc00000, data 0x30f6413/0x3309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.057726+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.967789173s of 10.081938744s, submitted: 32
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.057900+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468148224 unmapped: 66437120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.058052+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969600 data_alloc: 218103808 data_used: 17580032
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.058207+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.058373+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,5])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda1000/0x0/0x1bfc00000, data 0x3208413/0x341b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.058550+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.058842+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.059071+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.059192+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.059394+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.059558+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.059727+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.059871+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.060007+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.060210+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.060356+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.060534+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.060673+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.060801+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.758423805s of 18.601793289s, submitted: 23
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.060962+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.061108+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.061245+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.061371+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.061486+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.061635+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.061824+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.062013+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.062181+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.062346+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.062522+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.062645+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.873358727s of 11.896842003s, submitted: 17
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.062918+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.063240+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.063440+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975934 data_alloc: 218103808 data_used: 17391616
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d08d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559435ddc1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559434c6e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.063594+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437ead000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.063766+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.063928+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.064091+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.064246+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.064384+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.064526+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.064679+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.064856+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:22.065011+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:23.065163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:24.065348+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468254720 unmapped: 66330624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:25.065672+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.424508095s of 15.441400528s, submitted: 12
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:26.065820+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:27.066046+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:28.066206+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:29.066375+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:30.066513+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:31.066634+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:32.066781+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:33.066943+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:34.067103+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:35.067270+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:36.067426+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:37.067575+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:38.067711+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:39.067849+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:40.067986+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449235916s of 15.471648216s, submitted: 15
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb1c00 session 0x559437cb0d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:41.068151+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:42.068268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4982142 data_alloc: 218103808 data_used: 17915904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594371054a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:43.068484+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437ead000 session 0x559436c4ba40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437438000 session 0x559435ddd680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:44.068583+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:45.068816+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:46.068999+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:47.069147+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861028 data_alloc: 218103808 data_used: 12840960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80c000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:48.069308+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:49.069455+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:50.069575+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:51.069701+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 406 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:52.069852+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865202 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:53.069977+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:54.070107+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:55.070341+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:56.070448+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777113914s of 15.972728729s, submitted: 61
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:57.070591+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:58.070753+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:59.070896+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:00.071013+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:01.071161+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:02.071301+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:03.071413+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:04.071563+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:05.071714+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:06.071875+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:07.072008+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:08.072188+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:09.072363+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:10.072510+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:11.072675+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:12.072834+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:13.072967+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:14.073104+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:15.073335+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:16.073493+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:17.073647+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 65200128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:18.073796+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:19.073942+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:20.074108+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:21.074235+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:22.074343+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:23.074477+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:24.074650+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:25.074853+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:26.075037+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:27.075207+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:28.075368+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:29.075588+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:30.075835+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:31.076074+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:32.076333+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:33.076618+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 65167360 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:34.076844+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:35.077134+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:36.077388+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:37.077564+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:38.077841+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:39.078093+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:40.078374+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:41.078908+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.048465729s of 45.059295654s, submitted: 15
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x5594351d41e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436fa2960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:42.079115+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559435286000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559437c425a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437ead000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437ead000 session 0x559435153860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:43.079316+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:44.079503+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:45.079821+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1b000/0x0/0x1bfc00000, data 0x318dbff/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:46.080051+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:47.080252+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:48.080487+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437b0b860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436c4a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:49.080637+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:50.081309+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 72630272 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:51.081523+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:52.082344+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:53.082780+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:54.083621+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:55.083927+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:56.084211+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:57.084341+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:58.084668+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:59.084825+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:00.085268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.403457642s of 18.507741928s, submitted: 19
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:01.085640+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c465000/0x0/0x1bfc00000, data 0x3b3cc0f/0x3d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:02.086007+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:03.086236+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:04.086401+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:05.086613+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:06.086766+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:07.087045+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:08.087220+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:09.087447+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:10.087561+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:11.087715+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:12.087891+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5102963 data_alloc: 234881024 data_used: 24010752
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.284965515s of 12.586762428s, submitted: 112
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:13.089075+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:14.089968+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:15.090421+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x55943cdd2800 session 0x559435ddc780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437c42780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:16.090772+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559434c63a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:17.091406+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:18.091909+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:19.092175+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:20.092323+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:21.092796+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:22.093339+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:23.093800+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:24.094055+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:25.094362+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:26.094734+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:27.094885+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:28.095173+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:29.095436+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:30.095639+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:31.095854+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:32.096038+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:33.096199+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:34.096355+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:35.096590+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:36.096738+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:37.096876+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:38.097065+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:39.097239+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:40.097425+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:41.097558+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:42.097711+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:43.097879+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:44.098013+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:45.098187+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:46.098398+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:47.098599+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:48.098745+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:49.098992+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:50.099196+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:51.099398+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:52.099602+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:53.099761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:54.099916+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:55.100099+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:56.100321+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:57.100539+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:58.100701+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:59.100998+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:00.101215+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:01.101453+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:02.101642+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:03.101820+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:04.101987+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:05.102162+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:06.102300+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:07.102494+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:08.102686+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:09.102854+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:10.103000+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:11.103146+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:12.103364+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:13.103512+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:14.103675+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559434c65e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436ec6b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559435152780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.791675568s of 61.911861420s, submitted: 35
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:15.103899+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436f0f4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2c0f/0x29b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437e53c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437c42f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350d7000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594350d7000 session 0x559437c565a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559437b0bc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:16.104087+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:17.104240+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559437e52960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918731 data_alloc: 218103808 data_used: 12849152
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:18.104358+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559435915c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f5000/0x0/0x1bfc00000, data 0x2cb2c0f/0x2ec9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559436f6b680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358acc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:19.104514+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594358acc00 session 0x559437c56b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:20.104828+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:21.105086+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:22.105337+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:23.105478+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:24.105641+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:25.105827+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:26.105981+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:27.106132+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:28.106266+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:29.106439+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:30.106634+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:31.106851+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:32.106974+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.014217377s of 17.533912659s, submitted: 26
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 474546176 unmapped: 68435968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014409 data_alloc: 218103808 data_used: 18165760
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:33.107097+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:34.107340+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:35.107564+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:36.107772+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:37.107921+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7e2000/0x0/0x1bfc00000, data 0x37c4c32/0x39dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5048133 data_alloc: 218103808 data_used: 18374656
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:38.108120+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:39.108373+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:40.108559+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:41.108749+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7dc000/0x0/0x1bfc00000, data 0x37cac32/0x39e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:42.108911+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5047641 data_alloc: 218103808 data_used: 18374656
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:43.109114+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:44.109320+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559436ec72c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.816077232s of 12.077077866s, submitted: 68
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:45.109534+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436ea1c00 session 0x559435ddc000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 heartbeat osd_stat(store_statfs(0x19c7d8000/0x0/0x1bfc00000, data 0x37cec32/0x39e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 66519040 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:46.109955+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559434792800 session 0x559436ddf4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436e8c800 session 0x559436fa34a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf9800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476594176 unmapped: 66387968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:47.110111+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 61882368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5197727 data_alloc: 234881024 data_used: 29822976
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:48.110371+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559442cf9800 session 0x5594351d4f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487817216 unmapped: 61865984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:49.110598+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 409 ms_handle_reset con 0x559434792800 session 0x559434ce12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487825408 unmapped: 61857792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:50.110844+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:51.111028+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488038400 unmapped: 61644800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:52.111231+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214357 data_alloc: 234881024 data_used: 29831168
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:53.111353+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:54.111494+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:55.111692+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:56.111988+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516270638s of 12.276865005s, submitted: 37
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:57.112189+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436e8c800 session 0x559436c4d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437bcb4a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436ea1c00 session 0x559435f67c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594384a4000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x5594384a4000 session 0x559435e4e780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559434792800 session 0x559437c43a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:58.112386+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:59.112593+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481468416 unmapped: 68214784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:00.112832+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:01.113011+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba44000/0x0/0x1bfc00000, data 0x455bd4e/0x4779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:02.113191+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:03.113360+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:04.113491+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437c42b40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 68059136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:05.113606+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 68050944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:06.113706+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 67788800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:07.114061+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:08.114310+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:09.114510+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:10.114798+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:11.114968+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:12.115220+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:13.115467+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.335296631s of 16.486038208s, submitted: 17
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:14.115708+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:15.116113+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:16.116373+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:17.116501+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482648064 unmapped: 67035136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280461 data_alloc: 251658240 data_used: 38633472
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:18.116728+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:19.116863+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:20.117040+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:21.117193+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:22.117356+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5285421 data_alloc: 251658240 data_used: 39116800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:23.117559+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030493736s of 10.215833664s, submitted: 10
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:24.117756+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:25.117946+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:26.118100+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:27.118305+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5286573 data_alloc: 251658240 data_used: 39518208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:28.118541+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:29.118701+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:30.118842+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:31.119046+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:32.119224+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:33.119678+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:34.119901+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:35.120154+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:36.120362+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:37.120559+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:38.120915+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:39.121163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:40.121377+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:41.121519+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:42.121759+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436f95400 session 0x559436f0e960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:43.122102+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288733 data_alloc: 251658240 data_used: 39571456
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.819889069s of 19.845161438s, submitted: 9
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559437eac800 session 0x559437c561e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943776fc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb7000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943beb7000 session 0x559437c43e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943776fc00 session 0x559437bcaf00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559434792800 session 0x559434ce0960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487571456 unmapped: 62111744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:44.122268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 413 ms_handle_reset con 0x559435132800 session 0x559435de12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487579648 unmapped: 62103552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:45.122528+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559436f95400 session 0x559434c62000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:46.122860+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x5594377af680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x559435915e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559434792800 session 0x5594359145a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487227392 unmapped: 62455808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:47.123272+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 heartbeat osd_stat(store_statfs(0x199ba7000/0x0/0x1bfc00000, data 0x63f235e/0x6616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:48.123626+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538217 data_alloc: 251658240 data_used: 44425216
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:49.123850+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559435132800 session 0x5594351d4d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:50.124032+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436e8c800 session 0x559434c8dc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436ea1c00 session 0x559434ff1860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 415 heartbeat osd_stat(store_statfs(0x19b9cc000/0x0/0x1bfc00000, data 0x45ccfb5/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:51.124208+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:52.124357+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 416 ms_handle_reset con 0x559434792800 session 0x559436f6a1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:53.124541+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5321486 data_alloc: 251658240 data_used: 44027904
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:54.124849+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:55.125091+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:56.125372+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 416 heartbeat osd_stat(store_statfs(0x19b9ee000/0x0/0x1bfc00000, data 0x4564a8b/0x4787000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.964550972s of 13.325811386s, submitted: 218
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:57.125496+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487325696 unmapped: 62357504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:58.125689+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483909632 unmapped: 65773568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137584 data_alloc: 234881024 data_used: 29847552
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434792c00 session 0x5594351d4780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434fee000 session 0x559436c512c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:59.125848+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483917824 unmapped: 65765376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:00.126040+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 75980800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559436e8c800 session 0x559436f7f2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:01.126253+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:02.126576+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:03.126747+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4949106 data_alloc: 218103808 data_used: 12886016
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:04.126908+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:05.127073+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:06.127343+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.211503029s of 10.501904488s, submitted: 95
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:07.127546+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:08.127758+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:09.127999+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                           Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
                                           Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:10.128185+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:11.128352+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:12.128755+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:13.129017+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:14.129251+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:15.129621+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:16.129881+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:17.130142+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:18.130353+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:19.130565+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:20.130746+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:21.130912+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:22.131050+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:23.131250+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:24.131558+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:25.131781+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:26.131967+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:27.132823+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:28.133151+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:29.133661+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:30.134205+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:31.134972+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:32.135169+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:33.135893+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:34.136056+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:35.136638+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:36.137163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:37.137447+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:38.137871+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:39.138161+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:40.138311+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:41.138503+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:42.138764+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431304932s of 35.444664001s, submitted: 15
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:43.139028+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954123 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:44.139233+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436f7f860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:45.139481+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:46.139811+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:47.140107+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:48.140364+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:49.140654+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:50.140892+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:51.141146+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:52.141482+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:53.141637+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4d2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:54.141820+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:55.142006+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437bca3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.737661362s of 12.753558159s, submitted: 5
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x559437bcab40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:56.142250+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:57.142423+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:58.142602+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955630 data_alloc: 218103808 data_used: 12898304
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:59.143031+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:00.143866+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436c4a5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559436c51680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:01.144034+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:02.144766+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:03.145244+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954907 data_alloc: 218103808 data_used: 12898304
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:04.145865+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:05.146365+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.274845123s of 10.169509888s, submitted: 30
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:06.146597+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:07.147065+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559435d8d860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:08.147428+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954212 data_alloc: 218103808 data_used: 12894208
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:09.147583+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:10.148017+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437b0be00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x5594377afc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:11.148323+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:12.148482+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x5594352874a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:13.148791+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953898 data_alloc: 218103808 data_used: 16105472
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:14.148953+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:15.149121+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.661519051s of 10.000641823s, submitted: 19
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:16.149382+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:17.149587+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:18.149792+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4dc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032045 data_alloc: 218103808 data_used: 16105472
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:19.150016+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19cde8000/0x0/0x1bfc00000, data 0x31b1d9f/0x33d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:20.150164+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:21.150294+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:22.150469+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:23.150630+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:24.150875+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:25.151133+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:26.151408+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:27.151653+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:28.151879+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.522792816s of 12.859356880s, submitted: 17
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559436f95400 session 0x559436dde1e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559437bcb860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:29.152091+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:30.152316+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:31.152465+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:32.152610+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559435ddcd20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:33.152717+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436f7f680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:34.152816+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559437b0a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cdc0000/0x0/0x1bfc00000, data 0x31d7a5a/0x33fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:35.152956+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:36.153221+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:37.153369+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:38.153532+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559436f6be00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436c510e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113591 data_alloc: 234881024 data_used: 24510464
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.465334892s of 10.475492477s, submitted: 2
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559435286d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:39.153621+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 72769536 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde4000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559435915860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943776fc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:40.153741+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x55943776fc00 session 0x559437c43860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:41.153939+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559437b0ad20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434792c00 session 0x559435152d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:42.154097+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:43.154226+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19cde1000/0x0/0x1bfc00000, data 0x31b56a5/0x33dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:44.154360+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434fee000 session 0x559434c8d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:45.154510+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:46.154640+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:47.154761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472285184 unmapped: 77398016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:48.154879+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.788832664s of 10.246772766s, submitted: 174
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19d7dc000/0x0/0x1bfc00000, data 0x27bb6a5/0x29e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:49.154989+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559437eac800 session 0x559437d08d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f68800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559436f68800 session 0x559437d09c20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:50.155118+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 72843264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:51.155239+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 72826880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x5594351d5860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:52.155355+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:53.155521+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4973367 data_alloc: 218103808 data_used: 16130048
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:54.155710+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:55.155892+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:56.156042+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:57.156196+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:58.156440+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 72835072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972486 data_alloc: 218103808 data_used: 16134144
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594351d4f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:59.156647+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671635628s of 10.225932121s, submitted: 147
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478871552 unmapped: 70811648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd20d/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:00.156790+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436ec6d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477257728 unmapped: 72425472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559437eac800 session 0x559436ddeb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:01.156948+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:02.157064+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:03.157219+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:04.157412+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:05.157574+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:06.157694+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:07.157875+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:08.158055+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:09.158188+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:10.158481+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:11.158671+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:12.158793+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:13.158969+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:14.159146+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:15.159308+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:16.159495+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:17.159802+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559435132000 session 0x559436c4c780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x559437d08960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:18.159956+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:19.160134+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594370f4f00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.234014511s of 20.326173782s, submitted: 33
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436f7ed20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:20.160258+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:21.160434+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:22.160592+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:23.160791+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:24.160976+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:25.161206+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:26.161389+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:27.161529+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:28.161741+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:29.161905+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:30.162038+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:31.162200+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:32.162342+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.673128128s of 12.695754051s, submitted: 6
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483557376 unmapped: 66125824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19cd68000/0x0/0x1bfc00000, data 0x2e1d246/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:33.162495+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482705408 unmapped: 66977792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5106862 data_alloc: 234881024 data_used: 19513344
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:34.162621+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482721792 unmapped: 66961408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:35.162786+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c637000/0x0/0x1bfc00000, data 0x3548246/0x3771000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:18.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:36.162963+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:37.163196+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:38.163406+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123676 data_alloc: 234881024 data_used: 20525056
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:39.163588+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:40.163737+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:41.163866+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:42.164016+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:43.164143+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.902449608s of 11.393979073s, submitted: 107
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 64929792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5121647 data_alloc: 234881024 data_used: 20537344
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:44.164319+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437b0a3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:45.164471+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61e000/0x0/0x1bfc00000, data 0x3565274/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:46.164636+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:47.164761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:48.164920+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:49.165080+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:50.165263+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:51.165472+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:52.165648+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:53.165781+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:54.165968+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943a64d400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034794807s of 11.101709366s, submitted: 10
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x55943a64d400 session 0x559435e4f0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437bcb860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:55.166182+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:56.166327+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:57.166456+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:58.166628+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5126189 data_alloc: 234881024 data_used: 20541440
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:59.166763+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559434ff12c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:00.166945+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:01.167130+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x5594352874a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x5594377afc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:02.167362+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:03.167548+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134778 data_alloc: 234881024 data_used: 20692992
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:04.167756+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:05.167975+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:06.168140+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.602036476s of 11.628366470s, submitted: 7
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:07.168292+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:08.168476+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157810 data_alloc: 234881024 data_used: 20692992
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:09.168607+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:10.168758+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:11.168928+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:12.169105+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ec000/0x0/0x1bfc00000, data 0x389c070/0x37c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:13.169345+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157842 data_alloc: 234881024 data_used: 20692992
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:14.169504+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484892672 unmapped: 64790528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:15.169701+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484253696 unmapped: 65429504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:16.169825+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484261888 unmapped: 65421312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:17.169979+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c4ea000/0x0/0x1bfc00000, data 0x399e070/0x38c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:18.170163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5190426 data_alloc: 234881024 data_used: 23044096
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:19.170318+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:20.170454+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:21.170671+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.007882118s of 15.059731483s, submitted: 11
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:22.170816+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:23.174339+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219236 data_alloc: 234881024 data_used: 23040000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:24.174676+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:25.174840+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:26.175009+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:27.175143+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:28.175374+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213860 data_alloc: 234881024 data_used: 23044096
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:29.175579+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:30.175747+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:31.175963+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:32.176142+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.056324005s of 11.241897583s, submitted: 37
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:33.176353+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c457000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:34.176537+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:35.176702+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:36.176861+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:37.176997+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:38.177175+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:39.177340+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559436c51680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559437cb1a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:40.177448+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:41.177593+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x559437d081e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:42.177768+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:43.177928+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c486000/0x0/0x1bfc00000, data 0x3a3003d/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5204715 data_alloc: 234881024 data_used: 22904832
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:44.178080+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:45.178246+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437cb1e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.740118027s of 13.175251007s, submitted: 17
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:46.178348+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437c57a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:47.178546+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:48.178750+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184485 data_alloc: 234881024 data_used: 22790144
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:49.178923+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559437d09860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:50.179094+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:51.179349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:52.179543+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:53.179707+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559437eac800 session 0x559435152d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559436c5bc00 session 0x559436c51e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172513 data_alloc: 234881024 data_used: 22794240
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:54.179931+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c616000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:55.180149+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:56.180350+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.032597542s of 10.602803230s, submitted: 41
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559434792c00 session 0x559435d8d680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:57.180564+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:58.180751+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5175215 data_alloc: 234881024 data_used: 22802432
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:59.180888+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:00.181025+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19c613000/0x0/0x1bfc00000, data 0x356a829/0x379a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:01.181238+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:02.181381+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:03.181533+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006019 data_alloc: 218103808 data_used: 16154624
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:04.181642+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d012000/0x0/0x1bfc00000, data 0x27c2829/0x29f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:05.181766+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 425 ms_handle_reset con 0x559434fee000 session 0x5594351430e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:06.181903+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:07.182066+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c27c7/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051541328s of 11.549398422s, submitted: 39
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:08.182238+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:09.182378+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5004722 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:10.182549+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:11.182734+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c4456/0x29f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 67584000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:12.182919+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:13.183104+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 ms_handle_reset con 0x559434792800 session 0x559436eab2c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:14.183347+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5007079 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:15.183517+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:16.183665+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:17.183909+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:18.184031+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.208436966s of 10.811527252s, submitted: 28
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:19.184194+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:20.184343+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:21.184503+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:22.184672+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:23.184879+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:24.185051+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559435d8d0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559435153860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:25.185371+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:26.185564+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559436c4b0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:27.185821+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:28.186063+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:29.186361+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400998116s of 10.534256935s, submitted: 15
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594358ad400 session 0x559436fa30e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:30.186565+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,1,0,1,4])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435de10e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:31.186704+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1af90/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559436fa30e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:32.186864+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:33.187045+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:34.187388+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:35.187660+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:36.187838+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:37.187970+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:38.188084+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:39.188268+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:40.188455+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:41.188648+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:42.188891+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:43.189047+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:44.189240+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:45.189551+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485810176 unmapped: 63873024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:46.189767+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:47.189984+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:48.190162+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:49.190364+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:50.190579+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.333019257s of 21.654922485s, submitted: 47
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:51.190788+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:52.190977+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559437d09860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8d000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:53.191147+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:54.191349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5058092 data_alloc: 218103808 data_used: 16162816
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:55.191544+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:56.191771+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:57.192013+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:58.192190+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:59.192374+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5095692 data_alloc: 234881024 data_used: 21016576
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:00.192605+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:01.192824+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:02.192990+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:03.193237+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:04.193434+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096012 data_alloc: 234881024 data_used: 21024768
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:05.193677+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.898363113s of 14.440773964s, submitted: 5
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:06.193940+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487833600 unmapped: 61849600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:07.194157+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19cb00000/0x0/0x1bfc00000, data 0x307dfec/0x32ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:08.194305+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:09.194474+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140300 data_alloc: 234881024 data_used: 21843968
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:10.194600+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:11.194914+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:12.195083+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:13.195254+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:14.195417+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5138620 data_alloc: 234881024 data_used: 21848064
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:15.195567+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559437d081e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436e8d000 session 0x559436f6b680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.954394341s of 10.175483704s, submitted: 76
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435f67e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:16.195728+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:17.195940+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5dc00 session 0x559435d8c3c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594450ed400 session 0x559437d08000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559436fa3860
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350d7400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:18.196337+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:19.196993+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:20.197171+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:21.197353+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:22.197563+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:23.197776+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:24.198004+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:25.198254+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:26.198478+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:27.198659+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:28.198821+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:29.198992+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:30.199211+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:31.199356+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.853529930s of 15.902283669s, submitted: 13
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559435286d20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:32.199530+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:33.199682+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:34.199848+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:35.200151+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:36.200360+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:37.200524+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:38.200925+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:39.201182+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:40.201421+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:41.201634+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:42.201798+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:43.202112+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.303579330s of 12.341936111s, submitted: 8
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:44.202382+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144999 data_alloc: 234881024 data_used: 22102016
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:45.202616+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:46.202850+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:47.203179+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:48.203453+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:49.203755+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:50.204065+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:51.204350+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:52.204655+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:53.204826+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:54.204973+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:55.205194+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:56.205351+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:57.205534+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae61000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465478897s of 13.490488052s, submitted: 6
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c8d0e0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:58.205825+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488128512 unmapped: 61554688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:59.205996+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177867 data_alloc: 234881024 data_used: 22937600
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:00.206349+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x5594377aed20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943725c800 session 0x559435d8c780
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:01.206506+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:02.206774+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:03.206942+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:04.207067+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179373 data_alloc: 234881024 data_used: 22937600
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:05.207228+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:06.207425+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:07.207630+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:08.207863+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 61530112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.730504036s of 11.801014900s, submitted: 24
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:09.208016+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559436f7fc20
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488456192 unmapped: 61227008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183366 data_alloc: 234881024 data_used: 22941696
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:10.208163+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 61218816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:11.208389+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:12.208600+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:13.208769+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:14.208910+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:15.209062+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:16.209219+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:17.209554+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:18.209761+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:19.209896+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:20.210018+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:21.210151+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.934249878s of 12.953603745s, submitted: 6
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:22.210320+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:23.210447+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:24.210635+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225783 data_alloc: 234881024 data_used: 24535040
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:25.210830+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:26.211032+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:27.211253+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:28.211532+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:29.211746+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:30.211904+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:31.212078+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:32.212191+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:33.212321+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:34.212482+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:35.212667+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:36.212853+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:37.213119+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.297815323s of 15.375535965s, submitted: 9
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:38.213346+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:39.213536+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:40.213733+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:41.213939+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:42.214094+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:43.214272+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:44.214502+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:45.214689+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:46.214901+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:47.215099+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:48.215310+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559437c56960
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.852976799s of 10.868772507s, submitted: 5
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x559436f7eb40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae61000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c6f680
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:49.215464+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220223 data_alloc: 234881024 data_used: 24690688
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:50.215679+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c653000/0x0/0x1bfc00000, data 0x36ecc22/0x375a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:51.215818+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:52.215996+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437503400 session 0x559437b0a5a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559437cb1e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:53.216256+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559435f61e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:54.216576+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188901 data_alloc: 234881024 data_used: 24559616
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559437438c00 session 0x559437c565a0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:55.216741+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3d000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:56.216938+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:57.217127+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:58.217357+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918631554s of 10.093006134s, submitted: 66
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559436c3e400 session 0x559435f61a40
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437105e00
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:59.217537+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5182857 data_alloc: 234881024 data_used: 24567808
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3e000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:00.217713+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437c56000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:01.217910+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:02.218088+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:03.218250+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:04.218372+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19ca3a000/0x0/0x1bfc00000, data 0x314040e/0x3373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5186216 data_alloc: 234881024 data_used: 24576000
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792800 session 0x559437e532c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e400
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:05.218502+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c3e400 session 0x559437c432c0
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:06.218630+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:07.218779+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:08.218963+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:09.219142+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:10.219337+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:11.219483+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:12.219667+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:13.219828+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:14.219998+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:15.220196+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:16.220374+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:17.220537+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:18.220677+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:19.220816+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:20.220990+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:21.221179+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:22.221367+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:23.221543+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:24.221706+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:25.221950+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:26.222125+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:27.222261+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:28.222499+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:29.222664+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:30.222860+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:31.223036+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:32.223188+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:33.223356+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:34.223485+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:35.223642+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:36.223812+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:37.223954+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:38.224139+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:39.224387+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:40.224598+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:41.224763+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:42.224928+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:43.225109+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:44.225265+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:45.225519+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:46.225676+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:47.225846+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:48.226029+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:49.226206+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:50.226416+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:51.226589+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:52.226867+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:53.227086+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:54.227423+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:55.227848+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:56.228091+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:57.228334+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:58.228526+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:59.228762+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:00.228913+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:01.229113+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:02.229437+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:03.229644+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:04.229831+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:05.230030+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:06.230172+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:07.230371+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:08.230510+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:09.230663+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:10.230794+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:11.230953+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:12.231127+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:13.231377+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:14.231566+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:15.231762+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:16.231983+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:17.232192+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:18.232390+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:19.232535+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:20.232739+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:21.232868+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:22.233020+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:23.233239+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:24.233435+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:25.233656+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:26.233856+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:27.234033+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:28.234187+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:29.234343+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:30.234536+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:31.234680+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:32.234826+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:33.234954+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:34.235188+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:35.235580+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:36.235775+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:37.235934+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:38.236073+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:39.236258+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:40.236404+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:41.236555+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:42.236698+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:43.236852+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:44.236974+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:34:18 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:34:18 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:45.237172+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:46.237351+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485990400 unmapped: 63692800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:34:18 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:47.237471+0000)
Oct 02 13:34:18 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:34:18 compute-1 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:34:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:34:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/874236552' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.46933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.46115 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.37185 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.46130 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3429833707' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/252430662' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1208016185' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/113161508' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.46148 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.37194 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2595115548' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/944900759' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/874236552' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:18 compute-1 nova_compute[230518]: 2025-10-02 13:34:18.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:34:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1184456124' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:18 compute-1 podman[323114]: 2025-10-02 13:34:18.826080847 +0000 UTC m=+0.071417649 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 02 13:34:18 compute-1 podman[323112]: 2025-10-02 13:34:18.864033287 +0000 UTC m=+0.108818372 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 02 13:34:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:34:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/584366054' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:34:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2205226372' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.46978 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.46160 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.37206 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2588293621' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.46993 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/704768393' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1184456124' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.46178 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.37224 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.47005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1278800428' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/262734' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/584366054' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/585650995' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:34:19 compute-1 crontab[323302]: (root) LIST (root)
Oct 02 13:34:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:20.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:34:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3914800062' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.46193 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.37248 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.47017 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2205226372' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.46205 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.37260 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.47029 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.46220 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/847947838' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2860676982' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3914800062' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2831220437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/977002523' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:34:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/388357450' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1289503258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579968663' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1520222773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2065690386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.47041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.37284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.47053 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.46241 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/388357450' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3903563452' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1738238686' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.47068 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/807069971' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1289503258' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2579968663' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3841789548' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1520222773' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2065690386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1792309887' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:34:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1117553503' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 13:34:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3583909278' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2658690698' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:34:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2701917429' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-1 nova_compute[230518]: 2025-10-02 13:34:22.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.47080 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1669040533' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3031844630' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3386889593' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1792309887' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1117553503' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2374216749' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1434328923' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1179389264' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3583909278' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2658690698' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2591116761' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/148650609' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/209078430' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2701917429' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:34:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2972436643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:34:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2796163003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:23 compute-1 systemd[1]: Starting Hostname Service...
Oct 02 13:34:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3813509163' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 systemd[1]: Started Hostname Service.
Oct 02 13:34:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:34:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3027924521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:34:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3331320437' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 nova_compute[230518]: 2025-10-02 13:34:23.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.47101 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/564963111' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2972436643' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/856630200' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2796163003' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4127737275' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/990446423' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3813509163' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2317346084' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2128338992' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3027924521' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3780658384' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/252421204' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3331320437' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1810802137' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:34:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2676800153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.46361 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.37386 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2877372303' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3619739455' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.46367 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3258619475' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/11611765' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3605085164' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2676800153' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:34:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:34:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1049783261' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:34:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1875449801' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.46373 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.37398 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.47167 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.46382 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.37434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.46403 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1683149492' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.37446 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1263180927' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1049783261' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2473643529' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3341206882' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1875449801' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4145739425' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.994 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:34:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3910303546' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:34:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068626528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:34:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:34:26 compute-1 sudo[324310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:26 compute-1 sudo[324310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-1 sudo[324310]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-1 sudo[324339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:26 compute-1 sudo[324339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-1 sudo[324339]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.46424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.37452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.47206 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.46445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.47212 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.37464 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.47218 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.46457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3910303546' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/285745496' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1068626528' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2044801920' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:26 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:26 compute-1 sudo[324385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:26 compute-1 sudo[324385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:26 compute-1 sudo[324385]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:26 compute-1 sudo[324414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:34:26 compute-1 sudo[324414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:27 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:34:27 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4044084775' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-1 podman[324562]: 2025-10-02 13:34:27.447069222 +0000 UTC m=+0.056198162 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 02 13:34:27 compute-1 podman[324562]: 2025-10-02 13:34:27.55063554 +0000 UTC m=+0.159764460 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct 02 13:34:27 compute-1 nova_compute[230518]: 2025-10-02 13:34:27.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.47233 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.37479 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.46466 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.47245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.37491 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1392752194' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.47263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4044084775' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2709166255' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/681771291' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:27 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2576054269' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:27 compute-1 sudo[324414]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-1 sudo[324712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:28 compute-1 sudo[324712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-1 sudo[324712]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-1 sudo[324737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:34:28 compute-1 sudo[324737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-1 sudo[324737]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:28.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:28 compute-1 sudo[324768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:28 compute-1 sudo[324768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-1 sudo[324768]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-1 sudo[324794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:34:28 compute-1 sudo[324794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:28 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:34:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1130845526' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:28 compute-1 sudo[324794]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:28 compute-1 nova_compute[230518]: 2025-10-02 13:34:28.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:28 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:34:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3337079194' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.47290 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.46517 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.37533 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.47302 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1619898129' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2584179053' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1544442175' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1130845526' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1875532775' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:34:29 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3337079194' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:34:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241500397' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Oct 02 13:34:29 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2024816665' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.484867) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069484898, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 853, "num_deletes": 251, "total_data_size": 1252296, "memory_usage": 1268768, "flush_reason": "Manual Compaction"}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069490115, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 825939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88953, "largest_seqno": 89801, "table_properties": {"data_size": 821354, "index_size": 1980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13647, "raw_average_key_size": 22, "raw_value_size": 811086, "raw_average_value_size": 1320, "num_data_blocks": 84, "num_entries": 614, "num_filter_entries": 614, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412039, "oldest_key_time": 1759412039, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 5276 microseconds, and 2327 cpu microseconds.
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490145) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 825939 bytes OK
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.490161) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491146) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491159) EVENT_LOG_v1 {"time_micros": 1759412069491155, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1247214, prev total WAL file size 1247214, number of live WAL files 2.
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(806KB)], [183(11MB)]
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069491877, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13275762, "oldest_snapshot_seqno": -1}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 10981 keys, 11369014 bytes, temperature: kUnknown
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069567797, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 11369014, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11301792, "index_size": 38693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 290853, "raw_average_key_size": 26, "raw_value_size": 11113650, "raw_average_value_size": 1012, "num_data_blocks": 1453, "num_entries": 10981, "num_filter_entries": 10981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412069, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.568194) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11369014 bytes
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.569582) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.4 rd, 149.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(29.8) write-amplify(13.8) OK, records in: 11497, records dropped: 516 output_compression: NoCompression
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.569601) EVENT_LOG_v1 {"time_micros": 1759412069569592, "job": 118, "event": "compaction_finished", "compaction_time_micros": 76127, "compaction_time_cpu_micros": 33125, "output_level": 6, "num_output_files": 1, "total_output_size": 11369014, "num_input_records": 11497, "num_output_records": 10981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069570249, "job": 118, "event": "table_file_deletion", "file_number": 185}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412069573055, "job": 118, "event": "table_file_deletion", "file_number": 183}
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.491677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:34:29.573235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:34:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.47317 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2241500397' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1577056816' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2024816665' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/791472702' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4115939123' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:34:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:34:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2142265629' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:34:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4641624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:34:30 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3245435999' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.47359 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.46589 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.37590 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2142265629' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/79628961' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3804244376' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4641624' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2364329330' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:34:31 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3245435999' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:34:32 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2703532931' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:32.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:32 compute-1 ceph-mon[80926]: pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.46610 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1326738878' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.37605 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3819595688' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.47389 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2703532931' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.37620 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:32 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1448114110' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:34:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:32 compute-1 nova_compute[230518]: 2025-10-02 13:34:32.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Oct 02 13:34:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4169906255' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.46625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.37626 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3814933560' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.46634 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3175671781' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.47410 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4169906255' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/428714164' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:33 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Oct 02 13:34:33 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2495208596' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:33 compute-1 nova_compute[230518]: 2025-10-02 13:34:33.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3043116930' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:34:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2495208596' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:34 compute-1 ceph-mon[80926]: from='client.37647 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-1 ceph-mon[80926]: from='client.37653 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:34 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2519601793' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 13:34:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/159196570' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:35 compute-1 ovs-appctl[326500]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-1 ovs-appctl[326507]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-1 ovs-appctl[326519]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.47425 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.46658 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3012681020' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.46664 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.47431 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/159196570' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3885141461' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 02 13:34:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Oct 02 13:34:35 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2244708378' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:36.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:34:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3348321435' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:36 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Oct 02 13:34:36 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2985896459' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.37674 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.37683 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2244708378' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2454552929' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.46694 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3058153738' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.47452 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3818688740' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Oct 02 13:34:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1199591392' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:37 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:37 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2982575882' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:37 compute-1 nova_compute[230518]: 2025-10-02 13:34:37.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:37 compute-1 podman[327236]: 2025-10-02 13:34:37.858381014 +0000 UTC m=+0.103217667 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:34:37 compute-1 podman[327233]: 2025-10-02 13:34:37.886253318 +0000 UTC m=+0.130904365 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:34:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3211568006' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:38.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:38 compute-1 nova_compute[230518]: 2025-10-02 13:34:38.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:38 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Oct 02 13:34:38 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1232340834' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:39 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855449270' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.46700 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.47461 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3348321435' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2985896459' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3457941525' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1199591392' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2982575882' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/534626633' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4261411917' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:40.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:41 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:41 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3643781993' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.37710 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.46742 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.47473 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3211568006' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.47479 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2166609430' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1232340834' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1102273128' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/662320132' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1855449270' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2047886250' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/462444664' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1969284233' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:41 compute-1 sudo[327518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:34:41 compute-1 sudo[327518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:41 compute-1 sudo[327518]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:41 compute-1 sudo[327545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:34:41 compute-1 sudo[327545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:34:41 compute-1 sudo[327545]: pam_unix(sudo:session): session closed for user root
Oct 02 13:34:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/753413889' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:42.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:42 compute-1 nova_compute[230518]: 2025-10-02 13:34:42.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:42 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:42 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3820141928' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.37740 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.47503 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/369431865' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/710434068' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3643781993' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4127268521' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/285501038' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.46778 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/594778116' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/753413889' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:42 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1645206274' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Oct 02 13:34:43 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/957710283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:43 compute-1 nova_compute[230518]: 2025-10-02 13:34:43.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.37764 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3055223004' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3820141928' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.47533 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.37776 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2475723777' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/957710283' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3726113376' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:44.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Oct 02 13:34:44 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2314818960' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:45 compute-1 nova_compute[230518]: 2025-10-02 13:34:45.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.46796 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.37782 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4235838842' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.46811 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2695014501' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2540540668' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2314818960' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:45 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1588464893' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:46.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:34:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/10710595' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.47554 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.46817 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.37803 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.37809 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.47566 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1588464893' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3215334327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1426669717' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1819816859' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2736487162' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4129652381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2347324164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:46.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:46 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Oct 02 13:34:46 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3352078029' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:46 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.47581 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.46847 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.46856 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.37833 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/10710595' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.37839 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.47614 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3352078029' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/392394444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.47620 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.46877 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/217139599' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 ceph-mon[80926]: pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1946319729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:34:47 compute-1 systemd[1]: Starting Time & Date Service...
Oct 02 13:34:47 compute-1 systemd[1]: Started Time & Date Service.
Oct 02 13:34:47 compute-1 nova_compute[230518]: 2025-10-02 13:34:47.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:47 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:47 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2260468465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:48.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:48 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:48 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1666422989' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.46886 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/682664001' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2707107688' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2260468465' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.47638 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1666422989' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3625609811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:48 compute-1 nova_compute[230518]: 2025-10-02 13:34:48.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Oct 02 13:34:49 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/241776807' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-1 ceph-mon[80926]: from='client.47647 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/229797035' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-1 ceph-mon[80926]: pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/241776807' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 02 13:34:49 compute-1 podman[328432]: 2025-10-02 13:34:49.832961634 +0000 UTC m=+0.072340459 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:34:49 compute-1 podman[328433]: 2025-10-02 13:34:49.851390742 +0000 UTC m=+0.094375510 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 02 13:34:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:50 compute-1 nova_compute[230518]: 2025-10-02 13:34:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:50 compute-1 nova_compute[230518]: 2025-10-02 13:34:50.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:34:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:50.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:50.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.146 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.147 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.147 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.148 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:51 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:51 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1433318012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.565 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.726 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=3981MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:34:51 compute-1 nova_compute[230518]: 2025-10-02 13:34:51.728 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:34:52 compute-1 ceph-mon[80926]: pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1433318012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.165 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.166 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.187 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:34:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:52.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:34:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3555859192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.636 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.641 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.673 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.674 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.674 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:34:52 compute-1 nova_compute[230518]: 2025-10-02 13:34:52.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3555859192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:34:53 compute-1 nova_compute[230518]: 2025-10-02 13:34:53.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:34:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:54.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:34:54 compute-1 ceph-mon[80926]: pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:34:55 compute-1 ceph-mon[80926]: pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:57 compute-1 nova_compute[230518]: 2025-10-02 13:34:57.675 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:57 compute-1 nova_compute[230518]: 2025-10-02 13:34:57.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:58 compute-1 ceph-mon[80926]: pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:34:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:34:58.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:34:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:34:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:34:58.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:34:58 compute-1 nova_compute[230518]: 2025-10-02 13:34:58.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:34:59 compute-1 nova_compute[230518]: 2025-10-02 13:34:59.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:34:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:00 compute-1 nova_compute[230518]: 2025-10-02 13:35:00.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:00.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:00 compute-1 ceph-mon[80926]: pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:00.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:01 compute-1 nova_compute[230518]: 2025-10-02 13:35:01.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:01 compute-1 ceph-mon[80926]: pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:02 compute-1 nova_compute[230518]: 2025-10-02 13:35:02.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:02.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:02.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:02 compute-1 nova_compute[230518]: 2025-10-02 13:35:02.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:03 compute-1 nova_compute[230518]: 2025-10-02 13:35:03.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:04 compute-1 ceph-mon[80926]: pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:04.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:06 compute-1 nova_compute[230518]: 2025-10-02 13:35:06.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:06 compute-1 nova_compute[230518]: 2025-10-02 13:35:06.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:06 compute-1 nova_compute[230518]: 2025-10-02 13:35:06.071 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:35:06 compute-1 nova_compute[230518]: 2025-10-02 13:35:06.072 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:35:06 compute-1 nova_compute[230518]: 2025-10-02 13:35:06.089 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:35:06 compute-1 ceph-mon[80926]: pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:35:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4065200134' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:35:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:35:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:35:07 compute-1 nova_compute[230518]: 2025-10-02 13:35:07.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:08 compute-1 ceph-mon[80926]: pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:08 compute-1 podman[328517]: 2025-10-02 13:35:08.605165817 +0000 UTC m=+0.062709147 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:35:08 compute-1 podman[328516]: 2025-10-02 13:35:08.630203992 +0000 UTC m=+0.097134326 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 02 13:35:08 compute-1 nova_compute[230518]: 2025-10-02 13:35:08.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:10 compute-1 ceph-mon[80926]: pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:10.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:11 compute-1 ceph-mon[80926]: pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:12 compute-1 nova_compute[230518]: 2025-10-02 13:35:12.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:13 compute-1 nova_compute[230518]: 2025-10-02 13:35:13.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:14 compute-1 ceph-mon[80926]: pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:14.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:15 compute-1 ceph-mon[80926]: pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:17 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 02 13:35:17 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 02 13:35:17 compute-1 nova_compute[230518]: 2025-10-02 13:35:17.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:18 compute-1 ceph-mon[80926]: pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:18.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:18 compute-1 nova_compute[230518]: 2025-10-02 13:35:18.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:20.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:20 compute-1 ceph-mon[80926]: pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:20 compute-1 podman[328565]: 2025-10-02 13:35:20.37093841 +0000 UTC m=+0.103261948 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 02 13:35:20 compute-1 podman[328566]: 2025-10-02 13:35:20.387803579 +0000 UTC m=+0.120356895 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:35:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:21 compute-1 ceph-mon[80926]: pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:22.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:22.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:22 compute-1 nova_compute[230518]: 2025-10-02 13:35:22.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:23 compute-1 nova_compute[230518]: 2025-10-02 13:35:23.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:24 compute-1 ceph-mon[80926]: pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:24.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:24.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:35:25.995 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:26 compute-1 ceph-mon[80926]: pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:26.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:27 compute-1 nova_compute[230518]: 2025-10-02 13:35:27.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:28.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:28 compute-1 ceph-mon[80926]: pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:28 compute-1 nova_compute[230518]: 2025-10-02 13:35:28.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:29 compute-1 sudo[321330]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:29 compute-1 sshd-session[321329]: Received disconnect from 192.168.122.10 port 56036:11: disconnected by user
Oct 02 13:35:29 compute-1 sshd-session[321329]: Disconnected from user zuul 192.168.122.10 port 56036
Oct 02 13:35:29 compute-1 sshd-session[321326]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:29 compute-1 systemd-logind[795]: Session 62 logged out. Waiting for processes to exit.
Oct 02 13:35:29 compute-1 systemd[1]: session-62.scope: Deactivated successfully.
Oct 02 13:35:29 compute-1 systemd[1]: session-62.scope: Consumed 2min 39.552s CPU time, 1.0G memory peak, read 451.8M from disk, written 296.8M to disk.
Oct 02 13:35:29 compute-1 systemd-logind[795]: Removed session 62.
Oct 02 13:35:29 compute-1 ceph-mon[80926]: pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:29 compute-1 sshd-session[328605]: Accepted publickey for zuul from 192.168.122.10 port 49876 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:35:29 compute-1 systemd-logind[795]: New session 63 of user zuul.
Oct 02 13:35:29 compute-1 systemd[1]: Started Session 63 of User zuul.
Oct 02 13:35:29 compute-1 sshd-session[328605]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:35:29 compute-1 sudo[328609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-10-02-xqkwcke.tar.xz
Oct 02 13:35:29 compute-1 sudo[328609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:35:29 compute-1 sudo[328609]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:29 compute-1 sshd-session[328608]: Received disconnect from 192.168.122.10 port 49876:11: disconnected by user
Oct 02 13:35:29 compute-1 sshd-session[328608]: Disconnected from user zuul 192.168.122.10 port 49876
Oct 02 13:35:29 compute-1 sshd-session[328605]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:29 compute-1 systemd[1]: session-63.scope: Deactivated successfully.
Oct 02 13:35:29 compute-1 systemd-logind[795]: Session 63 logged out. Waiting for processes to exit.
Oct 02 13:35:29 compute-1 systemd-logind[795]: Removed session 63.
Oct 02 13:35:29 compute-1 sshd-session[328634]: Accepted publickey for zuul from 192.168.122.10 port 49884 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:35:29 compute-1 systemd-logind[795]: New session 64 of user zuul.
Oct 02 13:35:29 compute-1 systemd[1]: Started Session 64 of User zuul.
Oct 02 13:35:29 compute-1 sshd-session[328634]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:35:29 compute-1 sudo[328639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 02 13:35:29 compute-1 sudo[328639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:35:29 compute-1 sudo[328639]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:29 compute-1 sshd-session[328638]: Received disconnect from 192.168.122.10 port 49884:11: disconnected by user
Oct 02 13:35:29 compute-1 sshd-session[328638]: Disconnected from user zuul 192.168.122.10 port 49884
Oct 02 13:35:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:29 compute-1 sshd-session[328634]: pam_unix(sshd:session): session closed for user zuul
Oct 02 13:35:29 compute-1 systemd[1]: session-64.scope: Deactivated successfully.
Oct 02 13:35:29 compute-1 systemd-logind[795]: Session 64 logged out. Waiting for processes to exit.
Oct 02 13:35:29 compute-1 systemd-logind[795]: Removed session 64.
Oct 02 13:35:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:30.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:30.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:32 compute-1 ceph-mon[80926]: pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:32.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:32 compute-1 nova_compute[230518]: 2025-10-02 13:35:32.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:33 compute-1 nova_compute[230518]: 2025-10-02 13:35:33.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:34.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:34 compute-1 ceph-mon[80926]: pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:35 compute-1 ceph-mon[80926]: pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:37 compute-1 nova_compute[230518]: 2025-10-02 13:35:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:38 compute-1 ceph-mon[80926]: pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:38.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:38 compute-1 nova_compute[230518]: 2025-10-02 13:35:38.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:38 compute-1 podman[328665]: 2025-10-02 13:35:38.805078334 +0000 UTC m=+0.056923286 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:35:38 compute-1 podman[328664]: 2025-10-02 13:35:38.833069701 +0000 UTC m=+0.088407033 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:35:39 compute-1 ceph-mon[80926]: pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:35:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 71K writes, 275K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s
                                           Cumulative WAL: 71K writes, 26K syncs, 2.69 writes per sync, written: 0.27 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1937 writes, 5766 keys, 1937 commit groups, 1.0 writes per commit group, ingest: 5.33 MB, 0.01 MB/s
                                           Interval WAL: 1937 writes, 854 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:35:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:40.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:40.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:42 compute-1 ceph-mon[80926]: pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:42 compute-1 sudo[328710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:42 compute-1 sudo[328710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-1 sudo[328710]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-1 sudo[328735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:35:42 compute-1 sudo[328735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-1 sudo[328735]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:42 compute-1 sudo[328760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:42 compute-1 sudo[328760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-1 sudo[328760]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:42 compute-1 sudo[328785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:35:42 compute-1 sudo[328785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:42 compute-1 nova_compute[230518]: 2025-10-02 13:35:42.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:42 compute-1 sudo[328785]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:35:43 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:35:43 compute-1 nova_compute[230518]: 2025-10-02 13:35:43.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:44 compute-1 ceph-mon[80926]: pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:44.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:46 compute-1 nova_compute[230518]: 2025-10-02 13:35:46.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:46 compute-1 ceph-mon[80926]: pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:46 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2360798426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:46.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:46.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:47 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2077650695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:47 compute-1 nova_compute[230518]: 2025-10-02 13:35:47.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:48 compute-1 ceph-mon[80926]: pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/970104711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2108203446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:48.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:48.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:48 compute-1 nova_compute[230518]: 2025-10-02 13:35:48.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:49 compute-1 sudo[328841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:35:49 compute-1 sudo[328841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:49 compute-1 sudo[328841]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:49 compute-1 sudo[328866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:35:49 compute-1 sudo[328866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:35:49 compute-1 sudo[328866]: pam_unix(sudo:session): session closed for user root
Oct 02 13:35:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:50 compute-1 ceph-mon[80926]: pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:50 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:35:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:50.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:50.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:50 compute-1 podman[328891]: 2025-10-02 13:35:50.80768189 +0000 UTC m=+0.058436782 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:35:50 compute-1 podman[328892]: 2025-10-02 13:35:50.81086885 +0000 UTC m=+0.061629502 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:35:51 compute-1 nova_compute[230518]: 2025-10-02 13:35:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:51 compute-1 nova_compute[230518]: 2025-10-02 13:35:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:35:51 compute-1 ceph-mon[80926]: pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.087 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.088 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.089 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:35:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:52.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:52 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:35:52 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1326945611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.525 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:35:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1326945611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:52.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.677 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.678 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.679 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.679 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.980 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:35:52 compute-1 nova_compute[230518]: 2025-10-02 13:35:52.981 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.018 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:35:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:35:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2167374719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.456 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.462 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.496 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.498 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.498 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:35:53 compute-1 ceph-mon[80926]: pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2167374719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:35:53 compute-1 nova_compute[230518]: 2025-10-02 13:35:53.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:54.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:54.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:35:56 compute-1 ceph-mon[80926]: pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:56.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:57 compute-1 nova_compute[230518]: 2025-10-02 13:35:57.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:58 compute-1 ceph-mon[80926]: pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:35:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:35:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:35:58.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:35:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:35:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:35:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:35:58.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:35:58 compute-1 nova_compute[230518]: 2025-10-02 13:35:58.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:35:59 compute-1 nova_compute[230518]: 2025-10-02 13:35:59.499 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:35:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:00 compute-1 ceph-mon[80926]: pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:00.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:01 compute-1 nova_compute[230518]: 2025-10-02 13:36:01.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:01 compute-1 nova_compute[230518]: 2025-10-02 13:36:01.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:02 compute-1 nova_compute[230518]: 2025-10-02 13:36:02.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:02 compute-1 ceph-mon[80926]: pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:02.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:02.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:02 compute-1 nova_compute[230518]: 2025-10-02 13:36:02.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:03 compute-1 nova_compute[230518]: 2025-10-02 13:36:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:03 compute-1 nova_compute[230518]: 2025-10-02 13:36:03.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:04 compute-1 ceph-mon[80926]: pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:04.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:04.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:05 compute-1 ceph-mon[80926]: pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:06 compute-1 nova_compute[230518]: 2025-10-02 13:36:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:06 compute-1 nova_compute[230518]: 2025-10-02 13:36:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:36:06 compute-1 nova_compute[230518]: 2025-10-02 13:36:06.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:36:06 compute-1 nova_compute[230518]: 2025-10-02 13:36:06.074 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:36:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:36:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/452282918' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:36:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:06.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:07 compute-1 ceph-mon[80926]: pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:07 compute-1 nova_compute[230518]: 2025-10-02 13:36:07.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:08.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:08.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:08 compute-1 nova_compute[230518]: 2025-10-02 13:36:08.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:09 compute-1 podman[328975]: 2025-10-02 13:36:09.825871137 +0000 UTC m=+0.075608111 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:36:09 compute-1 podman[328974]: 2025-10-02 13:36:09.833491035 +0000 UTC m=+0.084853401 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 02 13:36:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:10 compute-1 ceph-mon[80926]: pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:10.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:36:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 90K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1516 writes, 7694 keys, 1516 commit groups, 1.0 writes per commit group, ingest: 16.13 MB, 0.03 MB/s
                                           Interval WAL: 1516 writes, 1516 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     58.6      1.91              0.33        59    0.032       0      0       0.0       0.0
                                             L6      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4    116.4     99.9      6.10              1.85        58    0.105    457K    31K       0.0       0.0
                                            Sum      1/0   10.84 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     88.7     90.1      8.01              2.18       117    0.068    457K    31K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4     81.4     81.3      0.84              0.17        10    0.084     55K   2527       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    116.4     99.9      6.10              1.85        58    0.105    457K    31K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     58.7      1.91              0.33        58    0.033       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.109, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.70 GB write, 0.11 MB/s write, 0.69 GB read, 0.11 MB/s read, 8.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 78.41 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000555 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4885,75.15 MB,24.7193%) FilterBlock(117,1.23 MB,0.404895%) IndexBlock(117,2.03 MB,0.66694%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:36:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:10.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:12 compute-1 ceph-mon[80926]: pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:12.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:12.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:12 compute-1 nova_compute[230518]: 2025-10-02 13:36:12.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:13 compute-1 ceph-mon[80926]: pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:13 compute-1 nova_compute[230518]: 2025-10-02 13:36:13.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:14.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:14.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:16 compute-1 ceph-mon[80926]: pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:16.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:36:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:36:17 compute-1 nova_compute[230518]: 2025-10-02 13:36:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:18 compute-1 ceph-mon[80926]: pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:18.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:18 compute-1 nova_compute[230518]: 2025-10-02 13:36:18.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:20 compute-1 ceph-mon[80926]: pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:21 compute-1 ceph-mon[80926]: pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:21 compute-1 podman[329020]: 2025-10-02 13:36:21.822303425 +0000 UTC m=+0.064894566 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:36:21 compute-1 podman[329019]: 2025-10-02 13:36:21.841989142 +0000 UTC m=+0.097631892 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:36:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:36:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:36:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:22.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:22 compute-1 nova_compute[230518]: 2025-10-02 13:36:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:23 compute-1 ceph-mon[80926]: pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:23 compute-1 nova_compute[230518]: 2025-10-02 13:36:23.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:24.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:24.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.996 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:36:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:26 compute-1 ceph-mon[80926]: pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:26.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:26.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:27 compute-1 nova_compute[230518]: 2025-10-02 13:36:27.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:28 compute-1 ceph-mon[80926]: pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:28.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:28.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:28 compute-1 nova_compute[230518]: 2025-10-02 13:36:28.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:30 compute-1 ceph-mon[80926]: pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:30.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:30.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:32 compute-1 ceph-mon[80926]: pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:32.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:32 compute-1 nova_compute[230518]: 2025-10-02 13:36:32.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:32.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:33 compute-1 nova_compute[230518]: 2025-10-02 13:36:33.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:34 compute-1 ceph-mon[80926]: pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:34.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:35 compute-1 ceph-mon[80926]: pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:36.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:36.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:37 compute-1 ceph-mon[80926]: pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:37 compute-1 nova_compute[230518]: 2025-10-02 13:36:37.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:38.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:38 compute-1 nova_compute[230518]: 2025-10-02 13:36:38.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:40 compute-1 ceph-mon[80926]: pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:40.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:40.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:40 compute-1 podman[329061]: 2025-10-02 13:36:40.80004156 +0000 UTC m=+0.047673946 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:36:40 compute-1 podman[329060]: 2025-10-02 13:36:40.822335059 +0000 UTC m=+0.076566521 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:36:42 compute-1 ceph-mon[80926]: pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:42.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:42 compute-1 nova_compute[230518]: 2025-10-02 13:36:42.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:42.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:43 compute-1 nova_compute[230518]: 2025-10-02 13:36:43.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:44 compute-1 ceph-mon[80926]: pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:44.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:44.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:45 compute-1 ceph-mon[80926]: pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:46.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:47 compute-1 nova_compute[230518]: 2025-10-02 13:36:47.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:48 compute-1 nova_compute[230518]: 2025-10-02 13:36:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:48 compute-1 ceph-mon[80926]: pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2708338914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1800985970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:48.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:48 compute-1 nova_compute[230518]: 2025-10-02 13:36:48.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/80985682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:49 compute-1 sudo[329101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:49 compute-1 sudo[329101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:49 compute-1 sudo[329101]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:49 compute-1 sudo[329126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:36:49 compute-1 sudo[329126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:49 compute-1 sudo[329126]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:49 compute-1 sudo[329151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:49 compute-1 sudo[329151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:49 compute-1 sudo[329151]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:50 compute-1 sudo[329176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:36:50 compute-1 sudo[329176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:50 compute-1 ceph-mon[80926]: pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3282365555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:50 compute-1 sudo[329176]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:50.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:51 compute-1 nova_compute[230518]: 2025-10-02 13:36:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:51 compute-1 nova_compute[230518]: 2025-10-02 13:36:51.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:36:51 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:36:52 compute-1 ceph-mon[80926]: pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:52.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:52 compute-1 nova_compute[230518]: 2025-10-02 13:36:52.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:52 compute-1 podman[329232]: 2025-10-02 13:36:52.797257529 +0000 UTC m=+0.050278167 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:36:52 compute-1 podman[329233]: 2025-10-02 13:36:52.81321324 +0000 UTC m=+0.059151356 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.085 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.086 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.086 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:36:53 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:36:53 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2432501427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.490 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.628 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4151MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.629 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.924 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:36:53 compute-1 nova_compute[230518]: 2025-10-02 13:36:53.925 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.154 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.178 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.178 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:36:54 compute-1 ceph-mon[80926]: pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2432501427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.213 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.248 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.274 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:36:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:36:54 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1900774806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.687 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.692 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.747 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.748 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:36:54 compute-1 nova_compute[230518]: 2025-10-02 13:36:54.748 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:36:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:54.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:36:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1900774806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:36:56 compute-1 ceph-mon[80926]: pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:56 compute-1 sudo[329314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:36:56 compute-1 sudo[329314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:56 compute-1 sudo[329314]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:56 compute-1 sudo[329339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:36:56 compute-1 sudo[329339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:36:56 compute-1 sudo[329339]: pam_unix(sudo:session): session closed for user root
Oct 02 13:36:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:56.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:36:57 compute-1 ceph-mon[80926]: pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:36:57 compute-1 nova_compute[230518]: 2025-10-02 13:36:57.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:36:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:36:58.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:36:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:36:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:36:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:36:58.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:36:58 compute-1 nova_compute[230518]: 2025-10-02 13:36:58.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:36:59 compute-1 nova_compute[230518]: 2025-10-02 13:36:59.748 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:36:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:00 compute-1 ceph-mon[80926]: pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:02 compute-1 nova_compute[230518]: 2025-10-02 13:37:02.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:02 compute-1 ceph-mon[80926]: pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:02 compute-1 nova_compute[230518]: 2025-10-02 13:37:02.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:03 compute-1 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:03 compute-1 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:03 compute-1 nova_compute[230518]: 2025-10-02 13:37:03.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:03 compute-1 ceph-mon[80926]: pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:03 compute-1 nova_compute[230518]: 2025-10-02 13:37:03.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:04 compute-1 nova_compute[230518]: 2025-10-02 13:37:04.721 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:05 compute-1 ceph-mon[80926]: pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:37:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:37:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:37:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:37:06 compute-1 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:06 compute-1 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:37:06 compute-1 nova_compute[230518]: 2025-10-02 13:37:06.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:37:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:37:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1174177996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:37:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:06.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:06 compute-1 nova_compute[230518]: 2025-10-02 13:37:06.909 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:37:07 compute-1 ceph-mon[80926]: pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:07 compute-1 nova_compute[230518]: 2025-10-02 13:37:07.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:08.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:08.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:08 compute-1 nova_compute[230518]: 2025-10-02 13:37:08.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.949831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228949861, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1902, "num_deletes": 256, "total_data_size": 4369363, "memory_usage": 4436848, "flush_reason": "Manual Compaction"}
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228966600, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 2863530, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89806, "largest_seqno": 91703, "table_properties": {"data_size": 2855298, "index_size": 4917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18571, "raw_average_key_size": 20, "raw_value_size": 2838286, "raw_average_value_size": 3174, "num_data_blocks": 216, "num_entries": 894, "num_filter_entries": 894, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412069, "oldest_key_time": 1759412069, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 16815 microseconds, and 7726 cpu microseconds.
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966642) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 2863530 bytes OK
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.966662) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968641) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968658) EVENT_LOG_v1 {"time_micros": 1759412228968652, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.968676) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 4360377, prev total WAL file size 4360377, number of live WAL files 2.
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.969944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353230' seq:72057594037927935, type:22 .. '6C6F676D0033373733' seq:0, type:0; will stop at (end)
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(2796KB)], [186(10MB)]
Oct 02 13:37:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412228969986, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 14232544, "oldest_snapshot_seqno": -1}
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 11350 keys, 14109957 bytes, temperature: kUnknown
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229054325, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 14109957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14037321, "index_size": 43147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 300255, "raw_average_key_size": 26, "raw_value_size": 13839797, "raw_average_value_size": 1219, "num_data_blocks": 1641, "num_entries": 11350, "num_filter_entries": 11350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412228, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.054752) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 14109957 bytes
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.057735) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 167.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(9.9) write-amplify(4.9) OK, records in: 11875, records dropped: 525 output_compression: NoCompression
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.057765) EVENT_LOG_v1 {"time_micros": 1759412229057751, "job": 120, "event": "compaction_finished", "compaction_time_micros": 84512, "compaction_time_cpu_micros": 44404, "output_level": 6, "num_output_files": 1, "total_output_size": 14109957, "num_input_records": 11875, "num_output_records": 11350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229059163, "job": 120, "event": "table_file_deletion", "file_number": 188}
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412229063335, "job": 120, "event": "table_file_deletion", "file_number": 186}
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:08.969799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:37:09.063542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:37:09 compute-1 nova_compute[230518]: 2025-10-02 13:37:09.903 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:09 compute-1 ceph-mon[80926]: pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:10.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:10.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:11 compute-1 podman[329365]: 2025-10-02 13:37:11.800592577 +0000 UTC m=+0.051402783 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 02 13:37:11 compute-1 podman[329364]: 2025-10-02 13:37:11.814533663 +0000 UTC m=+0.070842662 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:37:12 compute-1 ceph-mon[80926]: pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:12 compute-1 nova_compute[230518]: 2025-10-02 13:37:12.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:12.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:13 compute-1 nova_compute[230518]: 2025-10-02 13:37:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:13 compute-1 nova_compute[230518]: 2025-10-02 13:37:13.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:37:13 compute-1 ceph-mon[80926]: pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:13 compute-1 nova_compute[230518]: 2025-10-02 13:37:13.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:14 compute-1 nova_compute[230518]: 2025-10-02 13:37:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:37:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:37:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:16 compute-1 ceph-mon[80926]: pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:17 compute-1 radosgw[82922]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 02 13:37:17 compute-1 nova_compute[230518]: 2025-10-02 13:37:17.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:18 compute-1 ceph-mon[80926]: pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:18.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:18 compute-1 nova_compute[230518]: 2025-10-02 13:37:18.783 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:18 compute-1 nova_compute[230518]: 2025-10-02 13:37:18.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:20 compute-1 ceph-mon[80926]: pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:20.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:20.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:21 compute-1 ceph-mon[80926]: pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 59 op/s
Oct 02 13:37:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:22 compute-1 nova_compute[230518]: 2025-10-02 13:37:22.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:23 compute-1 podman[329409]: 2025-10-02 13:37:23.801070106 +0000 UTC m=+0.055663517 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:37:23 compute-1 nova_compute[230518]: 2025-10-02 13:37:23.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:23 compute-1 podman[329408]: 2025-10-02 13:37:23.852061704 +0000 UTC m=+0.101246375 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, container_name=iscsid)
Oct 02 13:37:24 compute-1 ceph-mon[80926]: pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 89 op/s
Oct 02 13:37:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:25 compute-1 ceph-mon[80926]: pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Oct 02 13:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:37:25.997 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:26.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:26.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:27 compute-1 nova_compute[230518]: 2025-10-02 13:37:27.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:28 compute-1 ceph-mon[80926]: pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:28 compute-1 nova_compute[230518]: 2025-10-02 13:37:28.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:28.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:29 compute-1 ceph-mon[80926]: pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 0 B/s wr, 165 op/s
Oct 02 13:37:29 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:30.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:32 compute-1 ceph-mon[80926]: pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Oct 02 13:37:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:32 compute-1 nova_compute[230518]: 2025-10-02 13:37:32.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:32.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:33 compute-1 nova_compute[230518]: 2025-10-02 13:37:33.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:34 compute-1 ceph-mon[80926]: pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 71 KiB/s rd, 0 B/s wr, 118 op/s
Oct 02 13:37:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:34.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:34 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:36 compute-1 ceph-mon[80926]: pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 88 op/s
Oct 02 13:37:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:36.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:37 compute-1 ceph-mon[80926]: pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Oct 02 13:37:37 compute-1 nova_compute[230518]: 2025-10-02 13:37:37.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:38 compute-1 nova_compute[230518]: 2025-10-02 13:37:38.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:39 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:40 compute-1 ceph-mon[80926]: pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:42 compute-1 ceph-mon[80926]: pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 12 op/s
Oct 02 13:37:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:42.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:42 compute-1 nova_compute[230518]: 2025-10-02 13:37:42.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:42 compute-1 podman[329448]: 2025-10-02 13:37:42.807269232 +0000 UTC m=+0.054902513 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:37:42 compute-1 podman[329447]: 2025-10-02 13:37:42.839510412 +0000 UTC m=+0.085685987 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:37:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:42.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:43 compute-1 nova_compute[230518]: 2025-10-02 13:37:43.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:44 compute-1 ceph-mon[80926]: pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Oct 02 13:37:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:44.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:44.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:44 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:45 compute-1 ceph-mon[80926]: pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:46.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:47 compute-1 nova_compute[230518]: 2025-10-02 13:37:47.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:48 compute-1 nova_compute[230518]: 2025-10-02 13:37:48.090 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:48 compute-1 ceph-mon[80926]: pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Oct 02 13:37:48 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3208392920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:48 compute-1 nova_compute[230518]: 2025-10-02 13:37:48.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:48.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/385079418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:49 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3332069202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:49 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:50 compute-1 ceph-mon[80926]: pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1459108784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:50.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:50.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:51 compute-1 nova_compute[230518]: 2025-10-02 13:37:51.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:51 compute-1 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:37:51 compute-1 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:51 compute-1 nova_compute[230518]: 2025-10-02 13:37:51.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:37:51 compute-1 nova_compute[230518]: 2025-10-02 13:37:51.088 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:37:51 compute-1 ceph-mon[80926]: pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:52.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:52 compute-1 nova_compute[230518]: 2025-10-02 13:37:52.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:52.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:53 compute-1 nova_compute[230518]: 2025-10-02 13:37:53.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:54 compute-1 ceph-mon[80926]: pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:37:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:54.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:37:54 compute-1 podman[329490]: 2025-10-02 13:37:54.801341073 +0000 UTC m=+0.050892797 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:37:54 compute-1 podman[329491]: 2025-10-02 13:37:54.85614436 +0000 UTC m=+0.089647991 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:37:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:54 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.087 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.123 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:37:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:37:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/439213855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.590 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.813 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.816 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4131MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.816 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.817 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.906 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.907 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:37:55 compute-1 nova_compute[230518]: 2025-10-02 13:37:55.941 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:37:56 compute-1 ceph-mon[80926]: pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/439213855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:37:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3149066955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:56 compute-1 nova_compute[230518]: 2025-10-02 13:37:56.416 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:37:56 compute-1 nova_compute[230518]: 2025-10-02 13:37:56.421 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:37:56 compute-1 nova_compute[230518]: 2025-10-02 13:37:56.447 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:37:56 compute-1 nova_compute[230518]: 2025-10-02 13:37:56.449 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:37:56 compute-1 nova_compute[230518]: 2025-10-02 13:37:56.449 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:37:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:56.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:56 compute-1 sudo[329572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:56 compute-1 sudo[329572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:56 compute-1 sudo[329572]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-1 sudo[329597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:37:57 compute-1 sudo[329597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-1 sudo[329597]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-1 sudo[329622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:37:57 compute-1 sudo[329622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-1 sudo[329622]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-1 sudo[329647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:37:57 compute-1 sudo[329647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:37:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3149066955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:37:57 compute-1 ceph-mon[80926]: pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:37:57 compute-1 sudo[329647]: pam_unix(sudo:session): session closed for user root
Oct 02 13:37:57 compute-1 nova_compute[230518]: 2025-10-02 13:37:57.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:37:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:58 compute-1 nova_compute[230518]: 2025-10-02 13:37:58.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:37:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:37:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:37:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:37:58.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:37:59 compute-1 nova_compute[230518]: 2025-10-02 13:37:59.414 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:37:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-1 ceph-mon[80926]: pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:00 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:00.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:00.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:38:01 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:38:02 compute-1 nova_compute[230518]: 2025-10-02 13:38:02.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:02 compute-1 ceph-mon[80926]: pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:02 compute-1 nova_compute[230518]: 2025-10-02 13:38:02.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:03 compute-1 nova_compute[230518]: 2025-10-02 13:38:03.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:04 compute-1 nova_compute[230518]: 2025-10-02 13:38:04.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:04 compute-1 ceph-mon[80926]: pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:04.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:04 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:05 compute-1 nova_compute[230518]: 2025-10-02 13:38:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:05 compute-1 nova_compute[230518]: 2025-10-02 13:38:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:05 compute-1 ceph-mon[80926]: pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:38:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/535136869' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:38:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:07 compute-1 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:07 compute-1 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:38:07 compute-1 nova_compute[230518]: 2025-10-02 13:38:07.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:38:07 compute-1 nova_compute[230518]: 2025-10-02 13:38:07.123 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:38:07 compute-1 ceph-mon[80926]: pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:07 compute-1 nova_compute[230518]: 2025-10-02 13:38:07.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:08.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.582010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288582095, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 251, "total_data_size": 1571607, "memory_usage": 1591816, "flush_reason": "Manual Compaction"}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288590344, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 1037761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91708, "largest_seqno": 92517, "table_properties": {"data_size": 1033885, "index_size": 1655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8768, "raw_average_key_size": 19, "raw_value_size": 1026112, "raw_average_value_size": 2295, "num_data_blocks": 73, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412229, "oldest_key_time": 1759412229, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 8350 microseconds, and 3897 cpu microseconds.
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590381) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 1037761 bytes OK
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.590402) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591540) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591553) EVENT_LOG_v1 {"time_micros": 1759412288591549, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.591572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1567362, prev total WAL file size 1567362, number of live WAL files 2.
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592237) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(1013KB)], [189(13MB)]
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288592374, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 15147718, "oldest_snapshot_seqno": -1}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 11281 keys, 13208412 bytes, temperature: kUnknown
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288700824, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 13208412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13137075, "index_size": 42040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299502, "raw_average_key_size": 26, "raw_value_size": 12941483, "raw_average_value_size": 1147, "num_data_blocks": 1589, "num_entries": 11281, "num_filter_entries": 11281, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412288, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.701107) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13208412 bytes
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.702955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.6 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.5 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(27.3) write-amplify(12.7) OK, records in: 11797, records dropped: 516 output_compression: NoCompression
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.702971) EVENT_LOG_v1 {"time_micros": 1759412288702963, "job": 122, "event": "compaction_finished", "compaction_time_micros": 108522, "compaction_time_cpu_micros": 61472, "output_level": 6, "num_output_files": 1, "total_output_size": 13208412, "num_input_records": 11797, "num_output_records": 11281, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288703294, "job": 122, "event": "table_file_deletion", "file_number": 191}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412288705797, "job": 122, "event": "table_file_deletion", "file_number": 189}
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.592089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:38:08.705952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:38:08 compute-1 nova_compute[230518]: 2025-10-02 13:38:08.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:08 compute-1 sudo[329703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:38:08 compute-1 sudo[329703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:08 compute-1 sudo[329703]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:08 compute-1 sudo[329728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:38:08 compute-1 sudo[329728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:38:08 compute-1 sudo[329728]: pam_unix(sudo:session): session closed for user root
Oct 02 13:38:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:09 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:38:09 compute-1 ceph-mon[80926]: pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:10.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:12 compute-1 ceph-mon[80926]: pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:12.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:12 compute-1 nova_compute[230518]: 2025-10-02 13:38:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:13 compute-1 ceph-mon[80926]: pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:13 compute-1 podman[329754]: 2025-10-02 13:38:13.796582408 +0000 UTC m=+0.050017810 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 02 13:38:13 compute-1 nova_compute[230518]: 2025-10-02 13:38:13.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:13 compute-1 podman[329753]: 2025-10-02 13:38:13.833692711 +0000 UTC m=+0.089728885 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:38:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:38:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:38:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:16 compute-1 ceph-mon[80926]: pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:16.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:17 compute-1 nova_compute[230518]: 2025-10-02 13:38:17.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:18 compute-1 ceph-mon[80926]: pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:18 compute-1 nova_compute[230518]: 2025-10-02 13:38:18.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:18.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:19 compute-1 ceph-mon[80926]: pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:20.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:21 compute-1 ceph-mon[80926]: pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:22.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:22 compute-1 nova_compute[230518]: 2025-10-02 13:38:22.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:23 compute-1 nova_compute[230518]: 2025-10-02 13:38:23.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:24 compute-1 ceph-mon[80926]: pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:24.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:25 compute-1 ceph-mon[80926]: pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:25 compute-1 podman[329798]: 2025-10-02 13:38:25.829093123 +0000 UTC m=+0.078467901 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:38:25 compute-1 podman[329799]: 2025-10-02 13:38:25.847070606 +0000 UTC m=+0.075798367 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd)
Oct 02 13:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:25 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:25.998 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:38:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:26.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:38:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:26.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:27 compute-1 nova_compute[230518]: 2025-10-02 13:38:27.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:28 compute-1 ceph-mon[80926]: pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:28.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:28 compute-1 nova_compute[230518]: 2025-10-02 13:38:28.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:38:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:38:29 compute-1 ceph-mon[80926]: pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:31 compute-1 ceph-mon[80926]: pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:32.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:32 compute-1 nova_compute[230518]: 2025-10-02 13:38:32.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:33 compute-1 nova_compute[230518]: 2025-10-02 13:38:33.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:34 compute-1 ceph-mon[80926]: pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:34.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:35 compute-1 ceph-mon[80926]: pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:36.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:37 compute-1 nova_compute[230518]: 2025-10-02 13:38:37.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:38 compute-1 ceph-mon[80926]: pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:38 compute-1 nova_compute[230518]: 2025-10-02 13:38:38.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:38.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:40.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:40 compute-1 ceph-mon[80926]: pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:41 compute-1 ceph-mon[80926]: pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:42.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:42 compute-1 nova_compute[230518]: 2025-10-02 13:38:42.725 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:42 compute-1 nova_compute[230518]: 2025-10-02 13:38:42.769 2 DEBUG oslo_concurrency.processutils [None req-f37077f4-0dab-40b0-9f70-91a74b690f75 c004f5628e4845ada3addf46ef5dfd33 c3a6b94d2b4945a487dafe07f533efd6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:42 compute-1 nova_compute[230518]: 2025-10-02 13:38:42.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:42.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:43 compute-1 nova_compute[230518]: 2025-10-02 13:38:43.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:44 compute-1 ceph-mon[80926]: pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:44.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:44 compute-1 podman[329842]: 2025-10-02 13:38:44.797333111 +0000 UTC m=+0.044957060 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 02 13:38:44 compute-1 podman[329841]: 2025-10-02 13:38:44.826045961 +0000 UTC m=+0.075908341 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible)
Oct 02 13:38:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:44.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:45 compute-1 ceph-mon[80926]: pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:46.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:47 compute-1 ceph-mon[80926]: pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:47 compute-1 nova_compute[230518]: 2025-10-02 13:38:47.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:48 compute-1 nova_compute[230518]: 2025-10-02 13:38:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:48 compute-1 nova_compute[230518]: 2025-10-02 13:38:48.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:48.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:50 compute-1 ceph-mon[80926]: pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4120944310' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:50 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4086869222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:38:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:38:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2682809650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2539287375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.510 138374 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '72:78:d8', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '56:7f:83:dc:a2:3a'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 02 13:38:51 compute-1 nova_compute[230518]: 2025-10-02 13:38:51.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.511 138374 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 02 13:38:51 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:38:51.512 138374 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=db222192-8da1-4f7c-972d-dc680c3e6630, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 02 13:38:52 compute-1 nova_compute[230518]: 2025-10-02 13:38:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:52 compute-1 nova_compute[230518]: 2025-10-02 13:38:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:38:52 compute-1 ceph-mon[80926]: pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:52 compute-1 nova_compute[230518]: 2025-10-02 13:38:52.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:52.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:53 compute-1 ceph-mon[80926]: pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:53 compute-1 nova_compute[230518]: 2025-10-02 13:38:53.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:54.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:54.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.460 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.461 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:38:55 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2831501368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:55 compute-1 nova_compute[230518]: 2025-10-02 13:38:55.871 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.012 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.013 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.013 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.014 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.114 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.114 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.134 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:38:56 compute-1 ceph-mon[80926]: pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2831501368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:38:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3151770201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.555 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.560 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.586 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.588 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:38:56 compute-1 nova_compute[230518]: 2025-10-02 13:38:56.589 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:38:56 compute-1 podman[329933]: 2025-10-02 13:38:56.802982601 +0000 UTC m=+0.055337565 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 02 13:38:56 compute-1 podman[329932]: 2025-10-02 13:38:56.814909805 +0000 UTC m=+0.068403705 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:38:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:56.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3151770201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:38:57 compute-1 nova_compute[230518]: 2025-10-02 13:38:57.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:58 compute-1 ceph-mon[80926]: pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:38:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:38:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:58 compute-1 nova_compute[230518]: 2025-10-02 13:38:58.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:38:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:38:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:38:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:38:58.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:38:59 compute-1 ceph-mon[80926]: pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:00.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:01 compute-1 nova_compute[230518]: 2025-10-02 13:39:01.589 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:02 compute-1 ceph-mon[80926]: pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:02 compute-1 nova_compute[230518]: 2025-10-02 13:39:02.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:02.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:03 compute-1 nova_compute[230518]: 2025-10-02 13:39:03.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:03 compute-1 nova_compute[230518]: 2025-10-02 13:39:03.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:04 compute-1 ceph-mon[80926]: pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:04.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:05 compute-1 nova_compute[230518]: 2025-10-02 13:39:05.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:05 compute-1 nova_compute[230518]: 2025-10-02 13:39:05.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:39:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:39:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:39:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:39:06 compute-1 nova_compute[230518]: 2025-10-02 13:39:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:06 compute-1 ceph-mon[80926]: pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:39:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1700533051' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:39:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:07 compute-1 ceph-mon[80926]: pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:07 compute-1 nova_compute[230518]: 2025-10-02 13:39:07.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:08 compute-1 nova_compute[230518]: 2025-10-02 13:39:08.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:08.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:09 compute-1 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:09 compute-1 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:39:09 compute-1 nova_compute[230518]: 2025-10-02 13:39:09.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:39:09 compute-1 nova_compute[230518]: 2025-10-02 13:39:09.077 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:39:09 compute-1 sudo[329974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:09 compute-1 sudo[329974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-1 sudo[329974]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-1 sudo[329999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:39:09 compute-1 sudo[329999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-1 sudo[329999]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-1 sudo[330024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:09 compute-1 sudo[330024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-1 sudo[330024]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:09 compute-1 sudo[330049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:39:09 compute-1 sudo[330049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:09 compute-1 sudo[330049]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:10 compute-1 nova_compute[230518]: 2025-10-02 13:39:10.071 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:10 compute-1 ceph-mon[80926]: pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:39:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:39:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:10.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:10.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:12 compute-1 ceph-mon[80926]: pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:12.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:12 compute-1 nova_compute[230518]: 2025-10-02 13:39:12.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:13 compute-1 ceph-mon[80926]: pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:13 compute-1 nova_compute[230518]: 2025-10-02 13:39:13.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:14.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:15 compute-1 podman[330106]: 2025-10-02 13:39:15.790091032 +0000 UTC m=+0.042759632 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:39:15 compute-1 podman[330105]: 2025-10-02 13:39:15.822223829 +0000 UTC m=+0.077866723 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 02 13:39:16 compute-1 ceph-mon[80926]: pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:16.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:17 compute-1 sudo[330148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:39:17 compute-1 sudo[330148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:17 compute-1 sudo[330148]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:17 compute-1 sudo[330173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:39:17 compute-1 sudo[330173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:39:17 compute-1 sudo[330173]: pam_unix(sudo:session): session closed for user root
Oct 02 13:39:17 compute-1 nova_compute[230518]: 2025-10-02 13:39:17.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:18 compute-1 ceph-mon[80926]: pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:39:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:18 compute-1 nova_compute[230518]: 2025-10-02 13:39:18.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:20 compute-1 ceph-mon[80926]: pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:21 compute-1 ceph-mon[80926]: pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:22 compute-1 nova_compute[230518]: 2025-10-02 13:39:22.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:23 compute-1 nova_compute[230518]: 2025-10-02 13:39:23.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:24 compute-1 ceph-mon[80926]: pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:24.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:25 compute-1 ceph-mon[80926]: pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.000 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.001 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:39:26.001 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:27.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:27 compute-1 podman[330199]: 2025-10-02 13:39:27.792181224 +0000 UTC m=+0.045309822 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 02 13:39:27 compute-1 nova_compute[230518]: 2025-10-02 13:39:27.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:27 compute-1 podman[330198]: 2025-10-02 13:39:27.820078519 +0000 UTC m=+0.076301944 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct 02 13:39:28 compute-1 ceph-mon[80926]: pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:28 compute-1 nova_compute[230518]: 2025-10-02 13:39:28.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:29.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:30 compute-1 ceph-mon[80926]: pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:31.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:31 compute-1 ceph-mon[80926]: pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:32.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:32 compute-1 nova_compute[230518]: 2025-10-02 13:39:32.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:33.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:33 compute-1 nova_compute[230518]: 2025-10-02 13:39:33.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:34 compute-1 ceph-mon[80926]: pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:34.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:35.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:36 compute-1 ceph-mon[80926]: pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:36.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:37 compute-1 ceph-mon[80926]: pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:37 compute-1 nova_compute[230518]: 2025-10-02 13:39:37.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:38.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:38 compute-1 nova_compute[230518]: 2025-10-02 13:39:38.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:39.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:40 compute-1 ceph-mon[80926]: pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:40.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:41 compute-1 ceph-mon[80926]: pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:42.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:42 compute-1 nova_compute[230518]: 2025-10-02 13:39:42.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:43.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:43 compute-1 ceph-mon[80926]: pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:43 compute-1 nova_compute[230518]: 2025-10-02 13:39:43.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:44.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:45.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:46 compute-1 ceph-mon[80926]: pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:46.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:46 compute-1 podman[330240]: 2025-10-02 13:39:46.793076095 +0000 UTC m=+0.046286562 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:39:46 compute-1 podman[330239]: 2025-10-02 13:39:46.813729463 +0000 UTC m=+0.070639366 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 02 13:39:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:47.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:47 compute-1 nova_compute[230518]: 2025-10-02 13:39:47.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:48 compute-1 nova_compute[230518]: 2025-10-02 13:39:48.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:48 compute-1 ceph-mon[80926]: pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:48.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:48 compute-1 nova_compute[230518]: 2025-10-02 13:39:48.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:49.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:49 compute-1 ceph-mon[80926]: pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:50.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:51.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:51 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3052700554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:52 compute-1 nova_compute[230518]: 2025-10-02 13:39:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:52 compute-1 nova_compute[230518]: 2025-10-02 13:39:52.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:39:52 compute-1 ceph-mon[80926]: pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3126593498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/557114497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:52.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:52 compute-1 nova_compute[230518]: 2025-10-02 13:39:52.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:53.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:53 compute-1 ceph-mon[80926]: pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/482641165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:53 compute-1 nova_compute[230518]: 2025-10-02 13:39:53.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:39:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:55.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:39:55 compute-1 nova_compute[230518]: 2025-10-02 13:39:55.622 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:39:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:39:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2950983091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.056 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.210 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.211 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4130MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.212 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.212 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:39:56 compute-1 ceph-mon[80926]: pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2950983091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.368 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.369 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.391 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:39:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:39:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3887751941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.811 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.816 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.838 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.839 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:39:56 compute-1 nova_compute[230518]: 2025-10-02 13:39:56.840 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:39:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:39:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:39:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3887751941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:39:57 compute-1 nova_compute[230518]: 2025-10-02 13:39:57.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:39:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:58 compute-1 podman[330330]: 2025-10-02 13:39:58.79906943 +0000 UTC m=+0.053427836 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:39:58 compute-1 podman[330329]: 2025-10-02 13:39:58.823043501 +0000 UTC m=+0.080703620 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 02 13:39:58 compute-1 ceph-mon[80926]: pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:39:58 compute-1 nova_compute[230518]: 2025-10-02 13:39:58.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:39:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:39:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:39:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:39:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:39:59 compute-1 ceph-mon[80926]: pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:00 compute-1 ceph-mon[80926]: overall HEALTH_OK
Oct 02 13:40:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:01.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:02 compute-1 ceph-mon[80926]: pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:02.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:02 compute-1 nova_compute[230518]: 2025-10-02 13:40:02.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:02 compute-1 nova_compute[230518]: 2025-10-02 13:40:02.842 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:03.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:03 compute-1 nova_compute[230518]: 2025-10-02 13:40:03.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:04 compute-1 ceph-mon[80926]: pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:05 compute-1 nova_compute[230518]: 2025-10-02 13:40:05.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:40:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:05.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:40:05 compute-1 ceph-mon[80926]: pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:40:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/302418049' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:40:06 compute-1 nova_compute[230518]: 2025-10-02 13:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:06 compute-1 nova_compute[230518]: 2025-10-02 13:40:06.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:07 compute-1 nova_compute[230518]: 2025-10-02 13:40:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:07.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:07 compute-1 nova_compute[230518]: 2025-10-02 13:40:07.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:08 compute-1 ceph-mon[80926]: pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:08 compute-1 nova_compute[230518]: 2025-10-02 13:40:08.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:09 compute-1 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:09 compute-1 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:40:09 compute-1 nova_compute[230518]: 2025-10-02 13:40:09.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:40:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:09.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:09 compute-1 nova_compute[230518]: 2025-10-02 13:40:09.075 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:40:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:10 compute-1 ceph-mon[80926]: pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:10.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:12 compute-1 ceph-mon[80926]: pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:12.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:12 compute-1 nova_compute[230518]: 2025-10-02 13:40:12.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:13.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:13 compute-1 nova_compute[230518]: 2025-10-02 13:40:13.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:14 compute-1 ceph-mon[80926]: pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:14.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:15.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:15 compute-1 ceph-mon[80926]: pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:17.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:17 compute-1 sudo[330367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:17 compute-1 sudo[330367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-1 sudo[330367]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-1 sudo[330404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:40:17 compute-1 sudo[330404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-1 podman[330392]: 2025-10-02 13:40:17.713209682 +0000 UTC m=+0.048465080 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:40:17 compute-1 sudo[330404]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-1 podman[330391]: 2025-10-02 13:40:17.736953136 +0000 UTC m=+0.075627941 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 02 13:40:17 compute-1 sudo[330462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:17 compute-1 sudo[330462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:17 compute-1 sudo[330462]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:17 compute-1 nova_compute[230518]: 2025-10-02 13:40:17.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:17 compute-1 sudo[330489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:40:17 compute-1 sudo[330489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:18 compute-1 sudo[330489]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:18 compute-1 ceph-mon[80926]: pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:18 compute-1 nova_compute[230518]: 2025-10-02 13:40:18.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:19.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:40:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:40:19 compute-1 sshd-session[330545]: Connection closed by 196.251.118.184 port 51256
Oct 02 13:40:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:20 compute-1 ceph-mon[80926]: pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:20.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:21.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:21 compute-1 ceph-mon[80926]: pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:22 compute-1 nova_compute[230518]: 2025-10-02 13:40:22.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:23.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:23 compute-1 ceph-mon[80926]: pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:23 compute-1 nova_compute[230518]: 2025-10-02 13:40:23.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:25.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.002 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.003 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:40:26.003 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:26 compute-1 ceph-mon[80926]: pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:26.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:27 compute-1 ceph-mon[80926]: pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:27 compute-1 nova_compute[230518]: 2025-10-02 13:40:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:28 compute-1 nova_compute[230518]: 2025-10-02 13:40:28.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:29 compute-1 sudo[330546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:40:29 compute-1 sudo[330546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:29 compute-1 sudo[330546]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:29.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:29 compute-1 podman[330571]: 2025-10-02 13:40:29.168481371 +0000 UTC m=+0.081475645 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 02 13:40:29 compute-1 podman[330570]: 2025-10-02 13:40:29.173388554 +0000 UTC m=+0.082317961 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:40:29 compute-1 sudo[330587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:40:29 compute-1 sudo[330587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:40:29 compute-1 sudo[330587]: pam_unix(sudo:session): session closed for user root
Oct 02 13:40:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:29 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:40:29 compute-1 ceph-mon[80926]: pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:30.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:32 compute-1 ceph-mon[80926]: pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:32.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:32 compute-1 nova_compute[230518]: 2025-10-02 13:40:32.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:33.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:33 compute-1 nova_compute[230518]: 2025-10-02 13:40:33.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:34 compute-1 ceph-mon[80926]: pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:34.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:35.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:36 compute-1 ceph-mon[80926]: pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:36.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:37.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:37 compute-1 ceph-mon[80926]: pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:37 compute-1 nova_compute[230518]: 2025-10-02 13:40:37.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:38.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:38 compute-1 nova_compute[230518]: 2025-10-02 13:40:38.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:39.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:40 compute-1 ceph-mon[80926]: pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:41.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:41 compute-1 ceph-mon[80926]: pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:42 compute-1 nova_compute[230518]: 2025-10-02 13:40:42.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:43.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:43 compute-1 ceph-mon[80926]: pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:43 compute-1 nova_compute[230518]: 2025-10-02 13:40:43.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:45.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:45 compute-1 ceph-mon[80926]: pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:47.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:47 compute-1 podman[330634]: 2025-10-02 13:40:47.790138933 +0000 UTC m=+0.045355243 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:40:47 compute-1 nova_compute[230518]: 2025-10-02 13:40:47.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:47 compute-1 podman[330653]: 2025-10-02 13:40:47.890942483 +0000 UTC m=+0.074670132 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:40:48 compute-1 ceph-mon[80926]: pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:48 compute-1 nova_compute[230518]: 2025-10-02 13:40:48.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:49.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:49 compute-1 ceph-mon[80926]: pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:50 compute-1 nova_compute[230518]: 2025-10-02 13:40:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:50.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:51.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:52 compute-1 nova_compute[230518]: 2025-10-02 13:40:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:52 compute-1 nova_compute[230518]: 2025-10-02 13:40:52.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:40:52 compute-1 ceph-mon[80926]: pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:52 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4087168575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:52 compute-1 nova_compute[230518]: 2025-10-02 13:40:52.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1254812133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:53 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/814707070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:53 compute-1 ceph-mon[80926]: pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:53 compute-1 nova_compute[230518]: 2025-10-02 13:40:53.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:54 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/730228373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:40:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:55 compute-1 ceph-mon[80926]: pgmap v3915: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.082 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:40:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:40:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/630243333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.514 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.656 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4148MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.657 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:40:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.817 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.817 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:40:56 compute-1 nova_compute[230518]: 2025-10-02 13:40:56.880 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:40:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:57.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:40:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3725250433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/630243333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.358 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.364 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.384 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.385 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.386 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:40:57 compute-1 nova_compute[230518]: 2025-10-02 13:40:57.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:58 compute-1 ceph-mon[80926]: pgmap v3916: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3725250433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:40:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:40:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:40:58.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:40:58 compute-1 nova_compute[230518]: 2025-10-02 13:40:58.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:40:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:40:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:40:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:40:59.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:40:59 compute-1 ceph-mon[80926]: pgmap v3917: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:40:59 compute-1 podman[330723]: 2025-10-02 13:40:59.800065329 +0000 UTC m=+0.053373944 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 02 13:40:59 compute-1 podman[330724]: 2025-10-02 13:40:59.806895594 +0000 UTC m=+0.052677693 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 02 13:41:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:00.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:01.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:02 compute-1 ceph-mon[80926]: pgmap v3918: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:02 compute-1 nova_compute[230518]: 2025-10-02 13:41:02.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:03.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:03 compute-1 ceph-mon[80926]: pgmap v3919: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:03 compute-1 nova_compute[230518]: 2025-10-02 13:41:03.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:04 compute-1 nova_compute[230518]: 2025-10-02 13:41:04.386 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:04.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:05.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:41:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:41:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:41:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:41:06 compute-1 nova_compute[230518]: 2025-10-02 13:41:06.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:06 compute-1 ceph-mon[80926]: pgmap v3920: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:41:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2544127254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:41:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:07 compute-1 nova_compute[230518]: 2025-10-02 13:41:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:07 compute-1 nova_compute[230518]: 2025-10-02 13:41:07.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:07.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:07 compute-1 nova_compute[230518]: 2025-10-02 13:41:07.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:08 compute-1 nova_compute[230518]: 2025-10-02 13:41:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:08 compute-1 ceph-mon[80926]: pgmap v3921: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:08.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:08 compute-1 nova_compute[230518]: 2025-10-02 13:41:08.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:09 compute-1 ceph-mon[80926]: pgmap v3922: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:10 compute-1 nova_compute[230518]: 2025-10-02 13:41:10.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:10 compute-1 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:10 compute-1 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:41:10 compute-1 nova_compute[230518]: 2025-10-02 13:41:10.062 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:41:10 compute-1 nova_compute[230518]: 2025-10-02 13:41:10.078 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:41:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:10.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:12 compute-1 ceph-mon[80926]: pgmap v3923: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:12 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:41:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:12.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:12 compute-1 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 02 13:41:12 compute-1 nova_compute[230518]: 2025-10-02 13:41:12.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:13.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:13 compute-1 sshd-session[330763]: error: kex_exchange_identification: read: Connection reset by peer
Oct 02 13:41:13 compute-1 sshd-session[330763]: Connection reset by 196.251.118.184 port 51026
Oct 02 13:41:13 compute-1 nova_compute[230518]: 2025-10-02 13:41:13.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:14 compute-1 ceph-mon[80926]: pgmap v3924: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:14.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:15.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:15 compute-1 ceph-mon[80926]: pgmap v3925: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:16.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:17 compute-1 nova_compute[230518]: 2025-10-02 13:41:17.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:18 compute-1 ceph-mon[80926]: pgmap v3926: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:18.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:18 compute-1 podman[330768]: 2025-10-02 13:41:18.831967406 +0000 UTC m=+0.089634972 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:41:18 compute-1 podman[330769]: 2025-10-02 13:41:18.833688589 +0000 UTC m=+0.074192037 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 02 13:41:18 compute-1 nova_compute[230518]: 2025-10-02 13:41:18.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:19.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:19 compute-1 ceph-mon[80926]: pgmap v3927: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:20.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:21.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:22 compute-1 ceph-mon[80926]: pgmap v3928: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:22.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:22 compute-1 nova_compute[230518]: 2025-10-02 13:41:22.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:23.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:23 compute-1 nova_compute[230518]: 2025-10-02 13:41:23.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:24 compute-1 ceph-mon[80926]: pgmap v3929: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:25.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:41:26.004 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:26 compute-1 ceph-mon[80926]: pgmap v3930: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:27.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:27 compute-1 ceph-mon[80926]: pgmap v3931: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:27 compute-1 nova_compute[230518]: 2025-10-02 13:41:27.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:28 compute-1 nova_compute[230518]: 2025-10-02 13:41:28.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:29 compute-1 sudo[330813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:29 compute-1 sudo[330813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-1 sudo[330813]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-1 sudo[330838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:29 compute-1 sudo[330838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-1 sudo[330838]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-1 sudo[330863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:29 compute-1 sudo[330863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-1 sudo[330863]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:29 compute-1 sudo[330888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Oct 02 13:41:29 compute-1 sudo[330888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:29 compute-1 sudo[330888]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:30 compute-1 sudo[330933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:30 compute-1 sudo[330933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-1 sudo[330933]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-1 ceph-mon[80926]: pgmap v3932: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:30 compute-1 sudo[330970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:41:30 compute-1 sudo[330970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-1 podman[330958]: 2025-10-02 13:41:30.283552539 +0000 UTC m=+0.053662214 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:41:30 compute-1 sudo[330970]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-1 podman[330957]: 2025-10-02 13:41:30.309239565 +0000 UTC m=+0.081356092 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:41:30 compute-1 sudo[331021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:30 compute-1 sudo[331021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-1 sudo[331021]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:30 compute-1 sudo[331046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:41:30 compute-1 sudo[331046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:30 compute-1 sudo[331046]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:31.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:41:31 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:41:32 compute-1 ceph-mon[80926]: pgmap v3933: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:32 compute-1 nova_compute[230518]: 2025-10-02 13:41:32.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:33 compute-1 unix_chkpwd[331102]: password check failed for user (root)
Oct 02 13:41:33 compute-1 sshd-session[330766]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184  user=root
Oct 02 13:41:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:33.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:33 compute-1 nova_compute[230518]: 2025-10-02 13:41:33.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:34 compute-1 ceph-mon[80926]: pgmap v3934: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:34 compute-1 sshd-session[330766]: Failed password for root from 196.251.118.184 port 50124 ssh2
Oct 02 13:41:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:36 compute-1 ceph-mon[80926]: pgmap v3935: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:37 compute-1 sudo[331103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:41:37 compute-1 sudo[331103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:37 compute-1 sudo[331103]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:37 compute-1 sudo[331128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:41:37 compute-1 sudo[331128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:41:37 compute-1 sudo[331128]: pam_unix(sudo:session): session closed for user root
Oct 02 13:41:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:37.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:37 compute-1 sshd-session[330766]: Connection closed by authenticating user root 196.251.118.184 port 50124 [preauth]
Oct 02 13:41:37 compute-1 nova_compute[230518]: 2025-10-02 13:41:37.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:37 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:41:37 compute-1 ceph-mon[80926]: pgmap v3936: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:38 compute-1 nova_compute[230518]: 2025-10-02 13:41:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:39.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:40 compute-1 ceph-mon[80926]: pgmap v3937: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:41.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:42 compute-1 ceph-mon[80926]: pgmap v3938: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:42 compute-1 nova_compute[230518]: 2025-10-02 13:41:42.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:41:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:43.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:41:43 compute-1 nova_compute[230518]: 2025-10-02 13:41:43.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:44 compute-1 ceph-mon[80926]: pgmap v3939: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:41:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:45.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:41:45 compute-1 ceph-mon[80926]: pgmap v3940: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:47.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:47 compute-1 nova_compute[230518]: 2025-10-02 13:41:47.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:48 compute-1 ceph-mon[80926]: pgmap v3941: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:48.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:48 compute-1 nova_compute[230518]: 2025-10-02 13:41:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:49.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:49 compute-1 ceph-mon[80926]: pgmap v3942: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:49 compute-1 podman[331154]: 2025-10-02 13:41:49.818162086 +0000 UTC m=+0.061136818 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:41:49 compute-1 podman[331153]: 2025-10-02 13:41:49.86808442 +0000 UTC m=+0.119454626 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 02 13:41:50 compute-1 nova_compute[230518]: 2025-10-02 13:41:50.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:50.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:51.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:52 compute-1 ceph-mon[80926]: pgmap v3943: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:52.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:52 compute-1 nova_compute[230518]: 2025-10-02 13:41:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:53.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:53 compute-1 ceph-mon[80926]: pgmap v3944: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:53 compute-1 nova_compute[230518]: 2025-10-02 13:41:53.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:54 compute-1 nova_compute[230518]: 2025-10-02 13:41:54.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:54 compute-1 nova_compute[230518]: 2025-10-02 13:41:54.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:41:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:54.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2271838928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:55 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3216893789' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.102495) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515102573, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2331, "num_deletes": 251, "total_data_size": 5938842, "memory_usage": 6022112, "flush_reason": "Manual Compaction"}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Oct 02 13:41:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515158923, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 3880001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92522, "largest_seqno": 94848, "table_properties": {"data_size": 3870469, "index_size": 6089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18979, "raw_average_key_size": 20, "raw_value_size": 3851624, "raw_average_value_size": 4101, "num_data_blocks": 267, "num_entries": 939, "num_filter_entries": 939, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412288, "oldest_key_time": 1759412288, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 56477 microseconds, and 8051 cpu microseconds.
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:41:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.158979) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 3880001 bytes OK
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.159001) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229436) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229482) EVENT_LOG_v1 {"time_micros": 1759412515229472, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.229505) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 5928574, prev total WAL file size 5928574, number of live WAL files 2.
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.231414) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(3789KB)], [192(12MB)]
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515231451, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 17088413, "oldest_snapshot_seqno": -1}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 11703 keys, 15047767 bytes, temperature: kUnknown
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515370391, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 15047767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14972204, "index_size": 45178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 308911, "raw_average_key_size": 26, "raw_value_size": 14767755, "raw_average_value_size": 1261, "num_data_blocks": 1719, "num_entries": 11703, "num_filter_entries": 11703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.370821) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 15047767 bytes
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.415167) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.9 rd, 108.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.3) write-amplify(3.9) OK, records in: 12220, records dropped: 517 output_compression: NoCompression
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.415230) EVENT_LOG_v1 {"time_micros": 1759412515415207, "job": 124, "event": "compaction_finished", "compaction_time_micros": 139094, "compaction_time_cpu_micros": 33699, "output_level": 6, "num_output_files": 1, "total_output_size": 15047767, "num_input_records": 12220, "num_output_records": 11703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515416991, "job": 124, "event": "table_file_deletion", "file_number": 194}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412515419509, "job": 124, "event": "table_file_deletion", "file_number": 192}
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.231330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:55 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:41:55.419639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.086 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.087 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.087 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:41:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/123264677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:56 compute-1 ceph-mon[80926]: pgmap v3945: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2404797557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:56 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:41:56 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/96427201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.557 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.693 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.694 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4153MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.695 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.773 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.774 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:41:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:41:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:56.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.846 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.865 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.866 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.882 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.923 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:41:56 compute-1 nova_compute[230518]: 2025-10-02 13:41:56.941 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:41:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:57.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:41:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2919220754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.356 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.361 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.379 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.381 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.381 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:41:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/96427201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:57 compute-1 ceph-mon[80926]: pgmap v3946: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:41:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2919220754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:41:57 compute-1 nova_compute[230518]: 2025-10-02 13:41:57.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:41:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:58 compute-1 nova_compute[230518]: 2025-10-02 13:41:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:41:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:41:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:41:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:41:59.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:41:59 compute-1 ceph-mon[80926]: pgmap v3947: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:00 compute-1 podman[331242]: 2025-10-02 13:42:00.802755839 +0000 UTC m=+0.055047887 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 02 13:42:00 compute-1 podman[331243]: 2025-10-02 13:42:00.803608126 +0000 UTC m=+0.053931432 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 02 13:42:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:02 compute-1 ceph-mon[80926]: pgmap v3948: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:02 compute-1 nova_compute[230518]: 2025-10-02 13:42:02.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:03 compute-1 ceph-mon[80926]: pgmap v3949: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:03 compute-1 nova_compute[230518]: 2025-10-02 13:42:03.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:04 compute-1 nova_compute[230518]: 2025-10-02 13:42:04.383 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:04.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:42:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:42:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:42:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:42:06 compute-1 ceph-mon[80926]: pgmap v3950: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:42:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/4074320952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:42:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:07 compute-1 ceph-mon[80926]: pgmap v3951: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:07 compute-1 nova_compute[230518]: 2025-10-02 13:42:07.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:08 compute-1 nova_compute[230518]: 2025-10-02 13:42:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:08 compute-1 nova_compute[230518]: 2025-10-02 13:42:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:08 compute-1 nova_compute[230518]: 2025-10-02 13:42:08.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:08.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:08 compute-1 nova_compute[230518]: 2025-10-02 13:42:08.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:09 compute-1 nova_compute[230518]: 2025-10-02 13:42:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:10 compute-1 ceph-mon[80926]: pgmap v3952: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:10.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:11 compute-1 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:11 compute-1 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:42:11 compute-1 nova_compute[230518]: 2025-10-02 13:42:11.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:42:11 compute-1 nova_compute[230518]: 2025-10-02 13:42:11.065 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:42:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:12 compute-1 ceph-mon[80926]: pgmap v3953: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:12.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:12 compute-1 nova_compute[230518]: 2025-10-02 13:42:12.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:13 compute-1 nova_compute[230518]: 2025-10-02 13:42:13.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:14 compute-1 nova_compute[230518]: 2025-10-02 13:42:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:14 compute-1 nova_compute[230518]: 2025-10-02 13:42:14.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:42:14 compute-1 ceph-mon[80926]: pgmap v3954: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:14.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:15.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:16 compute-1 ceph-mon[80926]: pgmap v3955: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:16.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:17 compute-1 nova_compute[230518]: 2025-10-02 13:42:17.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:17.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:17 compute-1 ceph-mon[80926]: pgmap v3956: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:17 compute-1 nova_compute[230518]: 2025-10-02 13:42:17.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:18.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:18 compute-1 nova_compute[230518]: 2025-10-02 13:42:18.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.110661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539110728, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 458, "num_deletes": 250, "total_data_size": 613358, "memory_usage": 622592, "flush_reason": "Manual Compaction"}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539187014, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 312887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94853, "largest_seqno": 95306, "table_properties": {"data_size": 310473, "index_size": 513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6550, "raw_average_key_size": 20, "raw_value_size": 305594, "raw_average_value_size": 952, "num_data_blocks": 23, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412516, "oldest_key_time": 1759412516, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 76388 microseconds, and 1801 cpu microseconds.
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.187059) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 312887 bytes OK
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.187076) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200628) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200676) EVENT_LOG_v1 {"time_micros": 1759412539200667, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.200698) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 610526, prev total WAL file size 610526, number of live WAL files 2.
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.201340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323539' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(305KB)], [195(14MB)]
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539201468, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 15360654, "oldest_snapshot_seqno": -1}
Oct 02 13:42:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 11521 keys, 11628843 bytes, temperature: kUnknown
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539310782, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 11628843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11559027, "index_size": 39879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 305337, "raw_average_key_size": 26, "raw_value_size": 11362244, "raw_average_value_size": 986, "num_data_blocks": 1497, "num_entries": 11521, "num_filter_entries": 11521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412539, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.311052) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11628843 bytes
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.354106) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.4 rd, 106.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 14.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(86.3) write-amplify(37.2) OK, records in: 12024, records dropped: 503 output_compression: NoCompression
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.354140) EVENT_LOG_v1 {"time_micros": 1759412539354128, "job": 126, "event": "compaction_finished", "compaction_time_micros": 109372, "compaction_time_cpu_micros": 30154, "output_level": 6, "num_output_files": 1, "total_output_size": 11628843, "num_input_records": 12024, "num_output_records": 11521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539354380, "job": 126, "event": "table_file_deletion", "file_number": 197}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412539357171, "job": 126, "event": "table_file_deletion", "file_number": 195}
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.201166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:19 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:42:19.357205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:42:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:20 compute-1 ceph-mon[80926]: pgmap v3957: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:20 compute-1 podman[331281]: 2025-10-02 13:42:20.823257905 +0000 UTC m=+0.081404913 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:42:20 compute-1 podman[331282]: 2025-10-02 13:42:20.823528673 +0000 UTC m=+0.078083659 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 02 13:42:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:20.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:21.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:22 compute-1 ceph-mon[80926]: pgmap v3958: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:22 compute-1 nova_compute[230518]: 2025-10-02 13:42:22.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:23.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:23 compute-1 nova_compute[230518]: 2025-10-02 13:42:23.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:24 compute-1 ceph-mon[80926]: pgmap v3959: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:24.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:25.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:25 compute-1 ceph-mon[80926]: pgmap v3960: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.005 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.006 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:42:26.006 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:27.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:27 compute-1 nova_compute[230518]: 2025-10-02 13:42:27.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:28 compute-1 ceph-mon[80926]: pgmap v3961: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:28 compute-1 nova_compute[230518]: 2025-10-02 13:42:28.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:29.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:29 compute-1 ceph-mon[80926]: pgmap v3962: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:31.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:31 compute-1 ceph-mon[80926]: pgmap v3963: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:31 compute-1 podman[331327]: 2025-10-02 13:42:31.798094223 +0000 UTC m=+0.048055547 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:42:31 compute-1 podman[331326]: 2025-10-02 13:42:31.798580828 +0000 UTC m=+0.053760436 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:42:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:32 compute-1 nova_compute[230518]: 2025-10-02 13:42:32.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:33.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:33 compute-1 ceph-mon[80926]: pgmap v3964: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:33 compute-1 nova_compute[230518]: 2025-10-02 13:42:33.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:34.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:35 compute-1 ceph-mon[80926]: pgmap v3965: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:37 compute-1 sudo[331364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:37 compute-1 sudo[331364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-1 sudo[331364]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-1 sudo[331389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:42:37 compute-1 sudo[331389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-1 sudo[331389]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-1 sudo[331414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:37 compute-1 sudo[331414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-1 sudo[331414]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-1 sudo[331439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:42:37 compute-1 sudo[331439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:37 compute-1 sudo[331439]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:37 compute-1 nova_compute[230518]: 2025-10-02 13:42:37.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:38 compute-1 ceph-mon[80926]: pgmap v3966: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:38 compute-1 nova_compute[230518]: 2025-10-02 13:42:38.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:39 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:40.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:41 compute-1 ceph-mon[80926]: pgmap v3967: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 02 13:42:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:42:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:41 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:42:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:41.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:42:42 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:42:42 compute-1 ceph-mon[80926]: pgmap v3968: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:42 compute-1 nova_compute[230518]: 2025-10-02 13:42:42.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:42:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:42:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:43.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:43 compute-1 ceph-mon[80926]: pgmap v3969: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:43 compute-1 nova_compute[230518]: 2025-10-02 13:42:43.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:45.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:45 compute-1 ceph-mon[80926]: pgmap v3970: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:46.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:47 compute-1 ceph-mon[80926]: pgmap v3971: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:47 compute-1 nova_compute[230518]: 2025-10-02 13:42:47.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:48 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:48 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:48 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:48 compute-1 nova_compute[230518]: 2025-10-02 13:42:48.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:49.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:49 compute-1 ceph-mon[80926]: pgmap v3972: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:50 compute-1 nova_compute[230518]: 2025-10-02 13:42:50.072 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:50 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:50 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:50 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:50.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:51.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:51 compute-1 ceph-mon[80926]: pgmap v3973: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:51 compute-1 podman[331498]: 2025-10-02 13:42:51.819302644 +0000 UTC m=+0.062985916 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct 02 13:42:51 compute-1 podman[331497]: 2025-10-02 13:42:51.891974672 +0000 UTC m=+0.137895584 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Oct 02 13:42:52 compute-1 nova_compute[230518]: 2025-10-02 13:42:52.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:52 compute-1 nova_compute[230518]: 2025-10-02 13:42:52.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 02 13:42:52 compute-1 nova_compute[230518]: 2025-10-02 13:42:52.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 02 13:42:52 compute-1 nova_compute[230518]: 2025-10-02 13:42:52.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:52 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:52 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:52 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:52.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:53.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:53 compute-1 ceph-mon[80926]: pgmap v3974: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:53 compute-1 nova_compute[230518]: 2025-10-02 13:42:53.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:54 compute-1 sudo[331541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:42:54 compute-1 sudo[331541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:54 compute-1 sudo[331541]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:54 compute-1 sudo[331566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:42:54 compute-1 sudo[331566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:42:54 compute-1 sudo[331566]: pam_unix(sudo:session): session closed for user root
Oct 02 13:42:54 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:54 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:54 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:42:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:42:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:56 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/281986870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:56 compute-1 ceph-mon[80926]: pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:56 compute-1 nova_compute[230518]: 2025-10-02 13:42:56.176 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:56 compute-1 nova_compute[230518]: 2025-10-02 13:42:56.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:42:56 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:56 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:56 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:56.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.091 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.092 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:42:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/805743753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1059668800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:42:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:42:57 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:42:57 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3426009518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.523 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.664 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4141MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.665 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.815 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.816 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.844 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:42:57 compute-1 nova_compute[230518]: 2025-10-02 13:42:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:58 compute-1 ceph-mon[80926]: pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:42:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3426009518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1568033478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:58 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:42:58 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2860197508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.328 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.334 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.433 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.434 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.435 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:42:58 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:58 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:58 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:42:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:58 compute-1 nova_compute[230518]: 2025-10-02 13:42:58.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:42:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:42:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:42:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:42:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:42:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2860197508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:00 compute-1 ceph-mon[80926]: pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:00 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:00 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:00 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:00.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:01 compute-1 ceph-mon[80926]: pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:02 compute-1 podman[331635]: 2025-10-02 13:43:02.7938444 +0000 UTC m=+0.046177589 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true)
Oct 02 13:43:02 compute-1 podman[331636]: 2025-10-02 13:43:02.798480635 +0000 UTC m=+0.047423708 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:43:02 compute-1 nova_compute[230518]: 2025-10-02 13:43:02.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:02 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:02 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:02 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:02.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:03 compute-1 nova_compute[230518]: 2025-10-02 13:43:03.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:04 compute-1 ceph-mon[80926]: pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:04 compute-1 nova_compute[230518]: 2025-10-02 13:43:04.435 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:04 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:04 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:04 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:04.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:05.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:43:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:43:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:43:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:43:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:43:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/1628467526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:43:06 compute-1 ceph-mon[80926]: pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:06 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:06 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:06 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:06.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:07 compute-1 ceph-mon[80926]: pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:07 compute-1 nova_compute[230518]: 2025-10-02 13:43:07.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:08 compute-1 nova_compute[230518]: 2025-10-02 13:43:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:08 compute-1 nova_compute[230518]: 2025-10-02 13:43:08.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:08 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:08 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:08 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:08.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:08 compute-1 nova_compute[230518]: 2025-10-02 13:43:08.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:09.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:10 compute-1 nova_compute[230518]: 2025-10-02 13:43:10.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:10 compute-1 ceph-mon[80926]: pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:10 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:10 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:10 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:11 compute-1 nova_compute[230518]: 2025-10-02 13:43:11.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:12 compute-1 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:12 compute-1 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:43:12 compute-1 nova_compute[230518]: 2025-10-02 13:43:12.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:43:12 compute-1 nova_compute[230518]: 2025-10-02 13:43:12.067 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:43:12 compute-1 ceph-mon[80926]: pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:12 compute-1 nova_compute[230518]: 2025-10-02 13:43:12.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:12 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:12 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:12 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:12.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:13 compute-1 nova_compute[230518]: 2025-10-02 13:43:13.062 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:13 compute-1 nova_compute[230518]: 2025-10-02 13:43:13.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:14 compute-1 ceph-mon[80926]: pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:14 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:14 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:14 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:15.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:16 compute-1 ceph-mon[80926]: pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:16 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:16 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:16 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:16.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:17 compute-1 ceph-mon[80926]: pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:17 compute-1 nova_compute[230518]: 2025-10-02 13:43:17.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:18 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:18 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:18 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:18.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:18 compute-1 nova_compute[230518]: 2025-10-02 13:43:18.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:20 compute-1 ceph-mon[80926]: pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:20 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:20 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:20 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:20.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:22 compute-1 ceph-mon[80926]: pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:22 compute-1 podman[331675]: 2025-10-02 13:43:22.798822866 +0000 UTC m=+0.049396119 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct 02 13:43:22 compute-1 podman[331674]: 2025-10-02 13:43:22.847971547 +0000 UTC m=+0.091770278 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 02 13:43:22 compute-1 nova_compute[230518]: 2025-10-02 13:43:22.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:22 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:22 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:22 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:22.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:23 compute-1 nova_compute[230518]: 2025-10-02 13:43:23.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:24 compute-1 ceph-mon[80926]: pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:24 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:24 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:24 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:24.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:43:26.007 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:43:26 compute-1 ceph-mon[80926]: pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:26 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:26 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:26 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:26.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:27 compute-1 nova_compute[230518]: 2025-10-02 13:43:27.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:28 compute-1 ceph-mon[80926]: pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:28 compute-1 nova_compute[230518]: 2025-10-02 13:43:28.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:28 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:28 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:28 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:28.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:30 compute-1 ceph-mon[80926]: pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:30 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:30 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:30 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:31.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:31 compute-1 ceph-mon[80926]: pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:32 compute-1 nova_compute[230518]: 2025-10-02 13:43:32.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:32 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:32 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:32 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:33 compute-1 podman[331723]: 2025-10-02 13:43:33.809609061 +0000 UTC m=+0.056181742 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:43:33 compute-1 podman[331722]: 2025-10-02 13:43:33.80896179 +0000 UTC m=+0.059133375 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:43:33 compute-1 nova_compute[230518]: 2025-10-02 13:43:33.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:34 compute-1 ceph-mon[80926]: pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:34 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:34 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:34 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:34.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:36 compute-1 ceph-mon[80926]: pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:36 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:36 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:36 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:37.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:37 compute-1 nova_compute[230518]: 2025-10-02 13:43:37.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:38 compute-1 ceph-mon[80926]: pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:38 compute-1 nova_compute[230518]: 2025-10-02 13:43:38.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:38 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:38 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:38 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:39.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:40 compute-1 ceph-mon[80926]: pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:40 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:40 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:40 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:41.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:41 compute-1 ceph-mon[80926]: pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:42 compute-1 nova_compute[230518]: 2025-10-02 13:43:42.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:42 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:42 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:42 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:42.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:43.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:43 compute-1 ceph-mon[80926]: pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:43 compute-1 nova_compute[230518]: 2025-10-02 13:43:43.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:44 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:44 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:44 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:44.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:45 compute-1 ceph-mon[80926]: pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:46 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:46 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:46 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:47 compute-1 ceph-mon[80926]: pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:47 compute-1 nova_compute[230518]: 2025-10-02 13:43:47.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:48 compute-1 nova_compute[230518]: 2025-10-02 13:43:48.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:50 compute-1 ceph-mon[80926]: pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:51 compute-1 nova_compute[230518]: 2025-10-02 13:43:51.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:51.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:51 compute-1 ceph-mon[80926]: pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:52 compute-1 nova_compute[230518]: 2025-10-02 13:43:52.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:53.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:53 compute-1 podman[331763]: 2025-10-02 13:43:53.793943085 +0000 UTC m=+0.043930068 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 13:43:53 compute-1 podman[331762]: 2025-10-02 13:43:53.834748355 +0000 UTC m=+0.087501924 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 02 13:43:53 compute-1 nova_compute[230518]: 2025-10-02 13:43:53.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:54 compute-1 ceph-mon[80926]: pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:54 compute-1 sudo[331803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:54 compute-1 sudo[331803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-1 sudo[331803]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-1 sudo[331828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:43:54 compute-1 sudo[331828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-1 sudo[331828]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-1 sudo[331853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:43:54 compute-1 sudo[331853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:54 compute-1 sudo[331853]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:54 compute-1 sudo[331878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:43:54 compute-1 sudo[331878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:43:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:55.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:43:55 compute-1 sudo[331878]: pam_unix(sudo:session): session closed for user root
Oct 02 13:43:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:55 compute-1 ceph-mon[80926]: pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:55 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:43:56 compute-1 nova_compute[230518]: 2025-10-02 13:43:56.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:56 compute-1 nova_compute[230518]: 2025-10-02 13:43:56.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:43:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:43:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:43:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:43:57 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:43:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:57 compute-1 nova_compute[230518]: 2025-10-02 13:43:57.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/86670586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:58 compute-1 ceph-mon[80926]: pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:43:58 compute-1 nova_compute[230518]: 2025-10-02 13:43:58.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:43:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:43:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:43:59.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.089 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.090 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:43:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:43:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:43:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:43:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:43:59 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:43:59 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/955581800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.594 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:43:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2452559841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3553964524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.768 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.770 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4126MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.770 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.771 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.932 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.932 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:43:59 compute-1 nova_compute[230518]: 2025-10-02 13:43:59.991 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:44:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:44:00 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1389360451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:00 compute-1 nova_compute[230518]: 2025-10-02 13:44:00.450 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:44:00 compute-1 nova_compute[230518]: 2025-10-02 13:44:00.455 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:44:00 compute-1 nova_compute[230518]: 2025-10-02 13:44:00.488 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:44:00 compute-1 nova_compute[230518]: 2025-10-02 13:44:00.489 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:44:00 compute-1 nova_compute[230518]: 2025-10-02 13:44:00.490 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:44:00 compute-1 ceph-mon[80926]: pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3738925671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/955581800' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1389360451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:01.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:44:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:01.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:44:02 compute-1 ceph-mon[80926]: pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:02 compute-1 nova_compute[230518]: 2025-10-02 13:44:02.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:03.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:03 compute-1 ceph-mon[80926]: pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:03 compute-1 nova_compute[230518]: 2025-10-02 13:44:03.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:04 compute-1 nova_compute[230518]: 2025-10-02 13:44:04.489 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:04 compute-1 podman[331979]: 2025-10-02 13:44:04.813161911 +0000 UTC m=+0.058179515 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct 02 13:44:04 compute-1 podman[331978]: 2025-10-02 13:44:04.824857098 +0000 UTC m=+0.075286351 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:44:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:05.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:06 compute-1 ceph-mon[80926]: pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:44:06 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/3923771226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:44:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:07 compute-1 sudo[332016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:44:07 compute-1 sudo[332016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:07 compute-1 sudo[332016]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:07 compute-1 sudo[332041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:44:07 compute-1 sudo[332041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:44:07 compute-1 sudo[332041]: pam_unix(sudo:session): session closed for user root
Oct 02 13:44:07 compute-1 nova_compute[230518]: 2025-10-02 13:44:07.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:08 compute-1 nova_compute[230518]: 2025-10-02 13:44:08.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:08 compute-1 ceph-mon[80926]: pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:08 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:44:08 compute-1 nova_compute[230518]: 2025-10-02 13:44:08.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:09 compute-1 nova_compute[230518]: 2025-10-02 13:44:09.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:09 compute-1 ceph-mon[80926]: pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:11 compute-1 nova_compute[230518]: 2025-10-02 13:44:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:11 compute-1 nova_compute[230518]: 2025-10-02 13:44:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:11.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:12 compute-1 nova_compute[230518]: 2025-10-02 13:44:12.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:12 compute-1 nova_compute[230518]: 2025-10-02 13:44:12.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:44:12 compute-1 nova_compute[230518]: 2025-10-02 13:44:12.054 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:44:12 compute-1 nova_compute[230518]: 2025-10-02 13:44:12.177 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:44:12 compute-1 ceph-mon[80926]: pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:12 compute-1 nova_compute[230518]: 2025-10-02 13:44:12.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:13.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:13 compute-1 ceph-mon[80926]: pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:13 compute-1 nova_compute[230518]: 2025-10-02 13:44:13.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:15.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:15 compute-1 ceph-mon[80926]: pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:17.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:17 compute-1 nova_compute[230518]: 2025-10-02 13:44:17.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:18 compute-1 ceph-mon[80926]: pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:18 compute-1 nova_compute[230518]: 2025-10-02 13:44:18.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:19.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:19.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:19 compute-1 ceph-mon[80926]: pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:21.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:21 compute-1 ceph-mon[80926]: pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:22 compute-1 nova_compute[230518]: 2025-10-02 13:44:22.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:23.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:23.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:23 compute-1 ceph-mon[80926]: pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:23 compute-1 nova_compute[230518]: 2025-10-02 13:44:23.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:24 compute-1 podman[332067]: 2025-10-02 13:44:24.821509789 +0000 UTC m=+0.073454205 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 02 13:44:24 compute-1 podman[332068]: 2025-10-02 13:44:24.830345075 +0000 UTC m=+0.080006349 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 02 13:44:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:25.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:25.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:44:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:44:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:44:26.008 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:44:26 compute-1 ceph-mon[80926]: pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:27.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:27.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:27 compute-1 ceph-mon[80926]: pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:27 compute-1 nova_compute[230518]: 2025-10-02 13:44:27.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:28 compute-1 sshd-session[332112]: Invalid user 1 from 196.251.118.184 port 43072
Oct 02 13:44:28 compute-1 nova_compute[230518]: 2025-10-02 13:44:28.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:29.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:29 compute-1 sshd-session[332112]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:44:29 compute-1 sshd-session[332112]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:44:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:29 compute-1 ceph-mon[80926]: pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:31.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:31 compute-1 sshd-session[332112]: Failed password for invalid user 1 from 196.251.118.184 port 43072 ssh2
Oct 02 13:44:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:31.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:31 compute-1 ceph-mon[80926]: pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:32 compute-1 nova_compute[230518]: 2025-10-02 13:44:32.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:33.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:33 compute-1 sshd-session[332112]: Connection closed by invalid user 1 196.251.118.184 port 43072 [preauth]
Oct 02 13:44:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:33.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:33 compute-1 ceph-mon[80926]: pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:33 compute-1 nova_compute[230518]: 2025-10-02 13:44:33.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:35.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:35.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:35 compute-1 ceph-mon[80926]: pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:35 compute-1 podman[332114]: 2025-10-02 13:44:35.81715817 +0000 UTC m=+0.057736382 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=iscsid)
Oct 02 13:44:35 compute-1 podman[332115]: 2025-10-02 13:44:35.826814802 +0000 UTC m=+0.063977247 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 02 13:44:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:37.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:37 compute-1 ceph-mon[80926]: pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:37 compute-1 nova_compute[230518]: 2025-10-02 13:44:37.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:38 compute-1 nova_compute[230518]: 2025-10-02 13:44:38.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:39.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:39.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:40 compute-1 sshd-session[332153]: Invalid user a from 196.251.118.184 port 50602
Oct 02 13:44:40 compute-1 ceph-mon[80926]: pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:41.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:41 compute-1 sshd-session[332153]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:44:41 compute-1 sshd-session[332153]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:44:41 compute-1 ceph-mon[80926]: pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:42 compute-1 nova_compute[230518]: 2025-10-02 13:44:42.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:43 compute-1 sshd-session[332153]: Failed password for invalid user a from 196.251.118.184 port 50602 ssh2
Oct 02 13:44:43 compute-1 nova_compute[230518]: 2025-10-02 13:44:43.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:44 compute-1 ceph-mon[80926]: pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:45.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:45 compute-1 sshd-session[332153]: Connection closed by invalid user a 196.251.118.184 port 50602 [preauth]
Oct 02 13:44:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:46 compute-1 ceph-mon[80926]: pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:47.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:47 compute-1 ceph-mon[80926]: pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:47 compute-1 nova_compute[230518]: 2025-10-02 13:44:47.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:48 compute-1 nova_compute[230518]: 2025-10-02 13:44:48.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:49.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.161177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689161199, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1634, "num_deletes": 255, "total_data_size": 3910360, "memory_usage": 3963376, "flush_reason": "Manual Compaction"}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689207591, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 2572239, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95311, "largest_seqno": 96940, "table_properties": {"data_size": 2565284, "index_size": 4025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14215, "raw_average_key_size": 19, "raw_value_size": 2551404, "raw_average_value_size": 3543, "num_data_blocks": 177, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412540, "oldest_key_time": 1759412540, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 46557 microseconds, and 6222 cpu microseconds.
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.207724) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 2572239 bytes OK
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.207794) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212085) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212121) EVENT_LOG_v1 {"time_micros": 1759412689212111, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.212144) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 3902831, prev total WAL file size 3902831, number of live WAL files 2.
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.214173) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373732' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(2511KB)], [198(11MB)]
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689214229, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14201082, "oldest_snapshot_seqno": -1}
Oct 02 13:44:49 compute-1 sshd-session[332155]: Invalid user acitoolkit from 196.251.118.184 port 57230
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 11712 keys, 14066784 bytes, temperature: kUnknown
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689343729, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 14066784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13993043, "index_size": 43356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29317, "raw_key_size": 310202, "raw_average_key_size": 26, "raw_value_size": 13790255, "raw_average_value_size": 1177, "num_data_blocks": 1645, "num_entries": 11712, "num_filter_entries": 11712, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412689, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.344173) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 14066784 bytes
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.346381) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.5 rd, 108.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 11.1 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 12241, records dropped: 529 output_compression: NoCompression
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.346405) EVENT_LOG_v1 {"time_micros": 1759412689346393, "job": 128, "event": "compaction_finished", "compaction_time_micros": 129689, "compaction_time_cpu_micros": 34390, "output_level": 6, "num_output_files": 1, "total_output_size": 14066784, "num_input_records": 12241, "num_output_records": 11712, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689347374, "job": 128, "event": "table_file_deletion", "file_number": 200}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412689350529, "job": 128, "event": "table_file_deletion", "file_number": 198}
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.214112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:44:49.350665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:44:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:49.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:49 compute-1 sshd-session[332155]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:44:49 compute-1 sshd-session[332155]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:44:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:50 compute-1 ceph-mon[80926]: pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:51.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:51.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:52 compute-1 nova_compute[230518]: 2025-10-02 13:44:52.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:52 compute-1 sshd-session[332155]: Failed password for invalid user acitoolkit from 196.251.118.184 port 57230 ssh2
Oct 02 13:44:52 compute-1 ceph-mon[80926]: pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:52 compute-1 nova_compute[230518]: 2025-10-02 13:44:52.938 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:53.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:44:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:53.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:53 compute-1 nova_compute[230518]: 2025-10-02 13:44:53.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:54 compute-1 sshd-session[332155]: Connection closed by invalid user acitoolkit 196.251.118.184 port 57230 [preauth]
Oct 02 13:44:54 compute-1 ceph-mon[80926]: pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:44:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:55 compute-1 ceph-mon[80926]: pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:55 compute-1 podman[332158]: 2025-10-02 13:44:55.828934599 +0000 UTC m=+0.080018139 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 02 13:44:55 compute-1 podman[332157]: 2025-10-02 13:44:55.854090259 +0000 UTC m=+0.107159921 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Oct 02 13:44:56 compute-1 nova_compute[230518]: 2025-10-02 13:44:56.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:44:56 compute-1 nova_compute[230518]: 2025-10-02 13:44:56.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:44:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:57.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:57 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/290588798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:57.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:57 compute-1 nova_compute[230518]: 2025-10-02 13:44:57.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:58 compute-1 ceph-mon[80926]: pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:44:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3055510341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:58 compute-1 nova_compute[230518]: 2025-10-02 13:44:58.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:44:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:44:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:44:59.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:44:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/410477897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:44:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:44:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:44:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:44:59.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:00 compute-1 ceph-mon[80926]: pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/357204207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:00 compute-1 sshd-session[332201]: Invalid user admin from 196.251.118.184 port 50986
Oct 02 13:45:00 compute-1 sshd-session[332201]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:00 compute-1 sshd-session[332201]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.082 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.083 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.083 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:45:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:01.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:01.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:01 compute-1 ceph-mon[80926]: pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:45:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2734375241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.594 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.735 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.736 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4143MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.737 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.737 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.841 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.842 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:45:01 compute-1 nova_compute[230518]: 2025-10-02 13:45:01.871 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:45:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:45:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2899976604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.309 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.315 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.338 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.339 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.340 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2734375241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2899976604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:02 compute-1 nova_compute[230518]: 2025-10-02 13:45:02.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:03.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:03 compute-1 sshd-session[332201]: Failed password for invalid user admin from 196.251.118.184 port 50986 ssh2
Oct 02 13:45:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:03 compute-1 ceph-mon[80926]: pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:03 compute-1 nova_compute[230518]: 2025-10-02 13:45:03.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:04 compute-1 nova_compute[230518]: 2025-10-02 13:45:04.340 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:05 compute-1 sshd-session[332201]: Connection closed by invalid user admin 196.251.118.184 port 50986 [preauth]
Oct 02 13:45:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:05.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:45:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:45:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/500724514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:45:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:06 compute-1 ceph-mon[80926]: pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:06 compute-1 podman[332247]: 2025-10-02 13:45:06.816832574 +0000 UTC m=+0.067124945 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 02 13:45:06 compute-1 podman[332248]: 2025-10-02 13:45:06.82113396 +0000 UTC m=+0.069256442 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 02 13:45:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:07.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:07.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:07 compute-1 nova_compute[230518]: 2025-10-02 13:45:07.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:08 compute-1 sudo[332287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:08 compute-1 sudo[332287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-1 sudo[332287]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-1 sudo[332312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:08 compute-1 sudo[332312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-1 sudo[332312]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-1 sudo[332338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:08 compute-1 sudo[332338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-1 sudo[332338]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:08 compute-1 sudo[332363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Oct 02 13:45:08 compute-1 sudo[332363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:08 compute-1 ceph-mon[80926]: pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:08 compute-1 podman[332458]: 2025-10-02 13:45:08.653094292 +0000 UTC m=+0.065601979 container exec f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct 02 13:45:08 compute-1 podman[332458]: 2025-10-02 13:45:08.776902822 +0000 UTC m=+0.189410489 container exec_died f746e1325e768fce757b5e10b6cd231fa2f9248cbf3c1aa34bf72cfd4c31ca13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-20fdc58c-b037-5094-a8ef-d490aa7c36f3-crash-compute-1, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct 02 13:45:08 compute-1 nova_compute[230518]: 2025-10-02 13:45:08.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:09.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:09 compute-1 sudo[332363]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-1 sudo[332582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:09 compute-1 sudo[332582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-1 sudo[332582]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-1 sudo[332607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:45:09 compute-1 sudo[332607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:09.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:09 compute-1 sudo[332607]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-1 sudo[332632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:09 compute-1 sudo[332632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:09 compute-1 sudo[332632]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:09 compute-1 sudo[332657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:45:09 compute-1 sudo[332657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:10 compute-1 nova_compute[230518]: 2025-10-02 13:45:10.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:10 compute-1 sudo[332657]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-1 ceph-mon[80926]: pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:10 compute-1 sshd-session[332335]: Invalid user admin from 196.251.118.184 port 52634
Oct 02 13:45:11 compute-1 nova_compute[230518]: 2025-10-02 13:45:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:11 compute-1 nova_compute[230518]: 2025-10-02 13:45:11.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:45:11 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:45:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:11 compute-1 sshd-session[332335]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:11 compute-1 sshd-session[332335]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:12 compute-1 nova_compute[230518]: 2025-10-02 13:45:12.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:12 compute-1 nova_compute[230518]: 2025-10-02 13:45:12.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:45:12 compute-1 nova_compute[230518]: 2025-10-02 13:45:12.055 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:45:12 compute-1 nova_compute[230518]: 2025-10-02 13:45:12.130 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:45:12 compute-1 ceph-mon[80926]: pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:12 compute-1 nova_compute[230518]: 2025-10-02 13:45:12.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:13 compute-1 nova_compute[230518]: 2025-10-02 13:45:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:13.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:13 compute-1 sshd-session[332335]: Failed password for invalid user admin from 196.251.118.184 port 52634 ssh2
Oct 02 13:45:13 compute-1 nova_compute[230518]: 2025-10-02 13:45:13.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:14 compute-1 ceph-mon[80926]: pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:15.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:15 compute-1 sshd-session[332335]: Connection closed by invalid user admin 196.251.118.184 port 52634 [preauth]
Oct 02 13:45:16 compute-1 ceph-mon[80926]: pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:17.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:17.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:17 compute-1 sudo[332714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:45:17 compute-1 sudo[332714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:17 compute-1 sudo[332714]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:17 compute-1 sudo[332739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:45:17 compute-1 sudo[332739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:45:17 compute-1 sudo[332739]: pam_unix(sudo:session): session closed for user root
Oct 02 13:45:17 compute-1 nova_compute[230518]: 2025-10-02 13:45:17.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:18 compute-1 nova_compute[230518]: 2025-10-02 13:45:18.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:18 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:45:18 compute-1 ceph-mon[80926]: pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:18 compute-1 nova_compute[230518]: 2025-10-02 13:45:18.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:19.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:19.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:20 compute-1 ceph-mon[80926]: pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:21.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:21.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:21 compute-1 sshd-session[332764]: Invalid user admin from 196.251.118.184 port 36854
Oct 02 13:45:22 compute-1 ceph-mon[80926]: pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:22 compute-1 sshd-session[332764]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:22 compute-1 sshd-session[332764]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:22 compute-1 nova_compute[230518]: 2025-10-02 13:45:22.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:23.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:23 compute-1 ceph-mon[80926]: pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:23 compute-1 nova_compute[230518]: 2025-10-02 13:45:23.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:24 compute-1 sshd-session[332764]: Failed password for invalid user admin from 196.251.118.184 port 36854 ssh2
Oct 02 13:45:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:25.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:45:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:45:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:45:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:45:26 compute-1 ceph-mon[80926]: pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:26 compute-1 podman[332767]: 2025-10-02 13:45:26.807440775 +0000 UTC m=+0.058256817 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 02 13:45:26 compute-1 podman[332766]: 2025-10-02 13:45:26.836586369 +0000 UTC m=+0.088583038 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 02 13:45:27 compute-1 sshd-session[332764]: Connection closed by invalid user admin 196.251.118.184 port 36854 [preauth]
Oct 02 13:45:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:27.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:27 compute-1 nova_compute[230518]: 2025-10-02 13:45:27.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:28 compute-1 ceph-mon[80926]: pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:28 compute-1 nova_compute[230518]: 2025-10-02 13:45:28.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:29.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:30 compute-1 ceph-mon[80926]: pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:31 compute-1 sshd-session[332810]: Invalid user admin from 196.251.118.184 port 34066
Oct 02 13:45:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:31.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:31 compute-1 sshd-session[332810]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:31 compute-1 sshd-session[332810]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:32 compute-1 ceph-mon[80926]: pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:32 compute-1 nova_compute[230518]: 2025-10-02 13:45:32.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:33.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:33 compute-1 ceph-mon[80926]: pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:33 compute-1 nova_compute[230518]: 2025-10-02 13:45:33.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:34 compute-1 sshd-session[332810]: Failed password for invalid user admin from 196.251.118.184 port 34066 ssh2
Oct 02 13:45:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:35.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.565569) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735565599, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 751, "num_deletes": 251, "total_data_size": 1400583, "memory_usage": 1414496, "flush_reason": "Manual Compaction"}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735674779, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 913877, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96945, "largest_seqno": 97691, "table_properties": {"data_size": 910235, "index_size": 1485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8415, "raw_average_key_size": 19, "raw_value_size": 902881, "raw_average_value_size": 2104, "num_data_blocks": 65, "num_entries": 429, "num_filter_entries": 429, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759412690, "oldest_key_time": 1759412690, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 109262 microseconds, and 3241 cpu microseconds.
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.674826) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 913877 bytes OK
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.674847) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676745) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676760) EVENT_LOG_v1 {"time_micros": 1759412735676755, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.676778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1396575, prev total WAL file size 1396575, number of live WAL files 2.
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.677435) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(892KB)], [201(13MB)]
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735677469, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 14980661, "oldest_snapshot_seqno": -1}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 11624 keys, 12971342 bytes, temperature: kUnknown
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735788536, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 12971342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12899297, "index_size": 41852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 309047, "raw_average_key_size": 26, "raw_value_size": 12699070, "raw_average_value_size": 1092, "num_data_blocks": 1573, "num_entries": 11624, "num_filter_entries": 11624, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759405570, "oldest_key_time": 0, "file_creation_time": 1759412735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f39ba2d7-ed25-4935-a0d2-2c1c33353d32", "db_session_id": "FDMBSZ550JKCBY0GVN5D", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.788832) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12971342 bytes
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.790722) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.8 rd, 116.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(30.6) write-amplify(14.2) OK, records in: 12141, records dropped: 517 output_compression: NoCompression
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.790742) EVENT_LOG_v1 {"time_micros": 1759412735790732, "job": 130, "event": "compaction_finished", "compaction_time_micros": 111147, "compaction_time_cpu_micros": 29656, "output_level": 6, "num_output_files": 1, "total_output_size": 12971342, "num_input_records": 12141, "num_output_records": 11624, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735791114, "job": 130, "event": "table_file_deletion", "file_number": 203}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759412735794507, "job": 130, "event": "table_file_deletion", "file_number": 201}
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.677361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:35 compute-1 ceph-mon[80926]: rocksdb: (Original Log Time 2025/10/02-13:45:35.794554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 02 13:45:36 compute-1 sshd-session[332810]: Connection closed by invalid user admin 196.251.118.184 port 34066 [preauth]
Oct 02 13:45:36 compute-1 ceph-mon[80926]: pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:37.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:37 compute-1 podman[332813]: 2025-10-02 13:45:37.802177298 +0000 UTC m=+0.054934583 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:45:37 compute-1 podman[332814]: 2025-10-02 13:45:37.803060176 +0000 UTC m=+0.054307084 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 02 13:45:37 compute-1 nova_compute[230518]: 2025-10-02 13:45:37.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:38 compute-1 ceph-mon[80926]: pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:39 compute-1 nova_compute[230518]: 2025-10-02 13:45:38.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:45:39 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 71K writes, 276K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s
                                           Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 455 writes, 777 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 455 writes, 199 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:45:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:40 compute-1 ceph-mon[80926]: pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:40 compute-1 sshd-session[332812]: Invalid user admin from 196.251.118.184 port 53982
Oct 02 13:45:41 compute-1 sshd-session[332812]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:41 compute-1 sshd-session[332812]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999992s ======
Oct 02 13:45:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999992s
Oct 02 13:45:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:41.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:42 compute-1 ceph-mon[80926]: pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:42 compute-1 sshd-session[332812]: Failed password for invalid user admin from 196.251.118.184 port 53982 ssh2
Oct 02 13:45:42 compute-1 nova_compute[230518]: 2025-10-02 13:45:42.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:43.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:43 compute-1 sshd-session[332812]: Connection closed by invalid user admin 196.251.118.184 port 53982 [preauth]
Oct 02 13:45:43 compute-1 ceph-mon[80926]: pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:44 compute-1 nova_compute[230518]: 2025-10-02 13:45:44.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:46 compute-1 ceph-mon[80926]: pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:47 compute-1 nova_compute[230518]: 2025-10-02 13:45:47.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:48 compute-1 ceph-mon[80926]: pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:49 compute-1 nova_compute[230518]: 2025-10-02 13:45:49.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:49.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:49.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:50 compute-1 ceph-mon[80926]: pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:51.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:51.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:51 compute-1 sshd-session[332854]: Invalid user admin from 196.251.118.184 port 59278
Oct 02 13:45:52 compute-1 sshd-session[332854]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:45:52 compute-1 sshd-session[332854]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:45:52 compute-1 ceph-mon[80926]: pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:52 compute-1 nova_compute[230518]: 2025-10-02 13:45:52.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:53.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:53.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:53 compute-1 sshd-session[332854]: Failed password for invalid user admin from 196.251.118.184 port 59278 ssh2
Oct 02 13:45:54 compute-1 nova_compute[230518]: 2025-10-02 13:45:54.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:54 compute-1 nova_compute[230518]: 2025-10-02 13:45:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:54 compute-1 ceph-mon[80926]: pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:54 compute-1 sshd-session[332854]: Connection closed by invalid user admin 196.251.118.184 port 59278 [preauth]
Oct 02 13:45:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:45:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:56 compute-1 ceph-mon[80926]: pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:57 compute-1 nova_compute[230518]: 2025-10-02 13:45:57.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:45:57 compute-1 nova_compute[230518]: 2025-10-02 13:45:57.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:45:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:45:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:45:57 compute-1 ceph-mon[80926]: pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:57.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:57 compute-1 podman[332858]: 2025-10-02 13:45:57.812689841 +0000 UTC m=+0.060044253 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 02 13:45:57 compute-1 podman[332857]: 2025-10-02 13:45:57.832366848 +0000 UTC m=+0.090203938 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 02 13:45:57 compute-1 nova_compute[230518]: 2025-10-02 13:45:57.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:58 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4038663619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:59 compute-1 nova_compute[230518]: 2025-10-02 13:45:59.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:45:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:45:59.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:45:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:45:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:45:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:45:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4274614728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:45:59 compute-1 ceph-mon[80926]: pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:45:59 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1648727320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:00 compute-1 sshd-session[332856]: Invalid user administrator from 196.251.118.184 port 54600
Oct 02 13:46:00 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2203131589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.051 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.161 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.162 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:46:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:01.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:01 compute-1 sshd-session[332856]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:01 compute-1 sshd-session[332856]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:01 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:46:01 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3288497061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.596 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.734 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4139MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:46:01 compute-1 nova_compute[230518]: 2025-10-02 13:46:01.736 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:46:01 compute-1 ceph-mon[80926]: pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3288497061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.145 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.145 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.278 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:46:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:46:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3746623552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.706 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.711 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.768 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.769 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.769 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:46:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3746623552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:46:02 compute-1 nova_compute[230518]: 2025-10-02 13:46:02.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:03.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:03 compute-1 sshd-session[332856]: Failed password for invalid user administrator from 196.251.118.184 port 54600 ssh2
Oct 02 13:46:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:03 compute-1 ceph-mon[80926]: pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:04 compute-1 nova_compute[230518]: 2025-10-02 13:46:04.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:05.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct 02 13:46:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct 02 13:46:05 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/920276162' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:46:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:05 compute-1 nova_compute[230518]: 2025-10-02 13:46:05.770 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:06 compute-1 sshd-session[332856]: Connection closed by invalid user administrator 196.251.118.184 port 54600 [preauth]
Oct 02 13:46:06 compute-1 ceph-mon[80926]: pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:07.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:07 compute-1 nova_compute[230518]: 2025-10-02 13:46:07.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:08 compute-1 ceph-mon[80926]: pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:08 compute-1 podman[332951]: 2025-10-02 13:46:08.808203196 +0000 UTC m=+0.054834430 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 02 13:46:08 compute-1 podman[332950]: 2025-10-02 13:46:08.821933217 +0000 UTC m=+0.068323013 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 02 13:46:09 compute-1 nova_compute[230518]: 2025-10-02 13:46:09.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:10 compute-1 sshd-session[332948]: Invalid user administrator from 196.251.118.184 port 54212
Oct 02 13:46:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:10 compute-1 ceph-mon[80926]: pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:46:10 compute-1 ceph-mon[80926]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 19K writes, 98K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                           Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1410 writes, 7107 keys, 1410 commit groups, 1.0 writes per commit group, ingest: 15.01 MB, 0.03 MB/s
                                           Interval WAL: 1410 writes, 1410 syncs, 1.00 writes per sync, written: 0.01 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.3      2.22              0.36        65    0.034       0      0       0.0       0.0
                                             L6      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6    117.5    101.2      6.79              2.08        64    0.106    529K    34K       0.0       0.0
                                            Sum      1/0   12.37 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     88.5     89.9      9.01              2.45       129    0.070    529K    34K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0     87.1     88.7      1.00              0.26        12    0.083     72K   3107       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    117.5    101.2      6.79              2.08        64    0.106    529K    34K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.3      2.22              0.36        64    0.035       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.120, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.79 GB write, 0.11 MB/s write, 0.78 GB read, 0.11 MB/s read, 9.0 seconds
                                           Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5591049f71f0#2 capacity: 304.00 MB usage: 86.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000582 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5373,83.15 MB,27.3505%) FilterBlock(129,1.43 MB,0.469905%) IndexBlock(129,2.32 MB,0.763231%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 02 13:46:10 compute-1 sshd-session[332948]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:10 compute-1 sshd-session[332948]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:11 compute-1 nova_compute[230518]: 2025-10-02 13:46:11.054 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:12 compute-1 nova_compute[230518]: 2025-10-02 13:46:12.048 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:12 compute-1 nova_compute[230518]: 2025-10-02 13:46:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:12 compute-1 sshd-session[332948]: Failed password for invalid user administrator from 196.251.118.184 port 54212 ssh2
Oct 02 13:46:12 compute-1 ceph-mon[80926]: pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:12 compute-1 sshd-session[332948]: Connection closed by invalid user administrator 196.251.118.184 port 54212 [preauth]
Oct 02 13:46:12 compute-1 nova_compute[230518]: 2025-10-02 13:46:12.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:13.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:13 compute-1 ceph-mon[80926]: pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:14 compute-1 nova_compute[230518]: 2025-10-02 13:46:14.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:14 compute-1 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:14 compute-1 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:46:14 compute-1 nova_compute[230518]: 2025-10-02 13:46:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:46:14 compute-1 nova_compute[230518]: 2025-10-02 13:46:14.068 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:46:15 compute-1 nova_compute[230518]: 2025-10-02 13:46:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:46:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:15.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:46:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:15.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:16 compute-1 ceph-mon[80926]: pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:17.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:17.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:17 compute-1 sudo[332994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:17 compute-1 sudo[332994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-1 sudo[332994]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-1 sudo[333019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:46:17 compute-1 sudo[333019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-1 sudo[333019]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-1 sudo[333044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:17 compute-1 sudo[333044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:17 compute-1 sudo[333044]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:17 compute-1 nova_compute[230518]: 2025-10-02 13:46:17.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:17 compute-1 sudo[333069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:46:17 compute-1 sudo[333069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:18 compute-1 ceph-mon[80926]: pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:18 compute-1 sudo[333069]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:19 compute-1 nova_compute[230518]: 2025-10-02 13:46:19.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 02 13:46:19 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:46:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:20 compute-1 ceph-mon[80926]: pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:20 compute-1 sshd-session[332993]: Invalid user Administrator from 196.251.118.184 port 50922
Oct 02 13:46:21 compute-1 sshd-session[332993]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:21 compute-1 sshd-session[332993]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:21.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:22 compute-1 ceph-mon[80926]: pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:22 compute-1 nova_compute[230518]: 2025-10-02 13:46:22.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:23 compute-1 ceph-mon[80926]: pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:23 compute-1 sshd-session[332993]: Failed password for invalid user Administrator from 196.251.118.184 port 50922 ssh2
Oct 02 13:46:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:23.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:24 compute-1 nova_compute[230518]: 2025-10-02 13:46:24.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:25 compute-1 sudo[333125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:46:25 compute-1 sudo[333125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:25 compute-1 sudo[333125]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:25 compute-1 sudo[333150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 02 13:46:25 compute-1 sudo[333150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:46:25 compute-1 sudo[333150]: pam_unix(sudo:session): session closed for user root
Oct 02 13:46:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:25.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:25 compute-1 sshd-session[332993]: Connection closed by invalid user Administrator 196.251.118.184 port 50922 [preauth]
Oct 02 13:46:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.009 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:46:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.010 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:46:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:46:26.010 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:46:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:26 compute-1 ceph-mon[80926]: from='mgr.14134 192.168.122.100:0/2631919672' entity='mgr.compute-0.unmtoh' 
Oct 02 13:46:26 compute-1 ceph-mon[80926]: pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:27.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:27 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:27 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:27 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:27.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:27 compute-1 nova_compute[230518]: 2025-10-02 13:46:27.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:28 compute-1 ceph-mon[80926]: pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:28 compute-1 podman[333177]: 2025-10-02 13:46:28.83916015 +0000 UTC m=+0.087979029 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 02 13:46:28 compute-1 podman[333176]: 2025-10-02 13:46:28.856071411 +0000 UTC m=+0.106479530 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 02 13:46:29 compute-1 nova_compute[230518]: 2025-10-02 13:46:29.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:29.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:29 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:29 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:29 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:29.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:30 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:30 compute-1 ceph-mon[80926]: pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:31 compute-1 sshd-session[333175]: Invalid user Administrator from 196.251.118.184 port 35762
Oct 02 13:46:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:31 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:31 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:31 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:31.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:31 compute-1 sshd-session[333175]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:31 compute-1 sshd-session[333175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:32 compute-1 ceph-mon[80926]: pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:32 compute-1 nova_compute[230518]: 2025-10-02 13:46:32.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:33.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:33 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:33 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:33 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:33.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:34 compute-1 nova_compute[230518]: 2025-10-02 13:46:34.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:34 compute-1 sshd-session[333175]: Failed password for invalid user Administrator from 196.251.118.184 port 35762 ssh2
Oct 02 13:46:34 compute-1 ceph-mon[80926]: pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:35 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:35 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:35 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:35 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:36 compute-1 sshd-session[333175]: Connection closed by invalid user Administrator 196.251.118.184 port 35762 [preauth]
Oct 02 13:46:36 compute-1 ceph-mon[80926]: pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:37 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:37 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:37 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:37.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:37 compute-1 ceph-mon[80926]: pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:37 compute-1 nova_compute[230518]: 2025-10-02 13:46:37.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:39 compute-1 nova_compute[230518]: 2025-10-02 13:46:39.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:39 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:39 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:39 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:39.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:39 compute-1 podman[333223]: 2025-10-02 13:46:39.799462492 +0000 UTC m=+0.054977933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, config_id=iscsid)
Oct 02 13:46:39 compute-1 podman[333224]: 2025-10-02 13:46:39.819766959 +0000 UTC m=+0.073324539 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001)
Oct 02 13:46:40 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:40 compute-1 ceph-mon[80926]: pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:40 compute-1 sshd-session[333221]: Invalid user admin from 196.251.118.184 port 52866
Oct 02 13:46:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:41 compute-1 sshd-session[333221]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:41 compute-1 sshd-session[333221]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:41 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:41 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:41 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:41.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:41 compute-1 ceph-mon[80926]: pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:42 compute-1 nova_compute[230518]: 2025-10-02 13:46:42.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:43 compute-1 sshd-session[333221]: Failed password for invalid user admin from 196.251.118.184 port 52866 ssh2
Oct 02 13:46:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:43.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:43 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:43 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:43 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:43.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:44 compute-1 nova_compute[230518]: 2025-10-02 13:46:44.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:44 compute-1 ceph-mon[80926]: pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:45 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:45.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:45 compute-1 sshd-session[333221]: Connection closed by invalid user admin 196.251.118.184 port 52866 [preauth]
Oct 02 13:46:45 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:45 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:45 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:45 compute-1 ceph-mon[80926]: pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:47 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:47 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:47 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:47 compute-1 ceph-mon[80926]: pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:47 compute-1 nova_compute[230518]: 2025-10-02 13:46:47.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:49 compute-1 nova_compute[230518]: 2025-10-02 13:46:49.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:49.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:49 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:49 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:49 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:49.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:49 compute-1 ceph-mon[80926]: pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:50 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:50 compute-1 sshd-session[333264]: Invalid user admin from 196.251.118.184 port 51732
Oct 02 13:46:50 compute-1 sshd-session[333264]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:50 compute-1 sshd-session[333264]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:46:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:51 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:51 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:51 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:51.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:51 compute-1 ceph-mon[80926]: pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:52 compute-1 sshd-session[333264]: Failed password for invalid user admin from 196.251.118.184 port 51732 ssh2
Oct 02 13:46:52 compute-1 nova_compute[230518]: 2025-10-02 13:46:52.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:46:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:46:53 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:53 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:53 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:53.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:53 compute-1 sshd-session[333264]: Connection closed by invalid user admin 196.251.118.184 port 51732 [preauth]
Oct 02 13:46:53 compute-1 ceph-mon[80926]: pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:54 compute-1 nova_compute[230518]: 2025-10-02 13:46:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:54 compute-1 nova_compute[230518]: 2025-10-02 13:46:54.053 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:55 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:46:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:55.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:55 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:55 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:55 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:55.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:56 compute-1 ceph-mon[80926]: pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:57.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:57 compute-1 ceph-mon[80926]: pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:46:57 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:57 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:57 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:57.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:57 compute-1 nova_compute[230518]: 2025-10-02 13:46:57.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:58 compute-1 nova_compute[230518]: 2025-10-02 13:46:58.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:46:58 compute-1 nova_compute[230518]: 2025-10-02 13:46:58.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 02 13:46:58 compute-1 sshd-session[333266]: Invalid user admin from 196.251.118.184 port 50366
Oct 02 13:46:58 compute-1 sshd-session[333268]: Accepted publickey for zuul from 192.168.122.10 port 43100 ssh2: ECDSA SHA256:PSU2PmP8Vkt/bMfoXwPsgy3Tf+S99N4A3cF3twydmPs
Oct 02 13:46:58 compute-1 systemd-logind[795]: New session 65 of user zuul.
Oct 02 13:46:58 compute-1 systemd[1]: Started Session 65 of User zuul.
Oct 02 13:46:58 compute-1 sshd-session[333268]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 02 13:46:58 compute-1 sudo[333272]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 02 13:46:58 compute-1 sudo[333272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 02 13:46:59 compute-1 nova_compute[230518]: 2025-10-02 13:46:59.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:46:59 compute-1 podman[333297]: 2025-10-02 13:46:59.079256098 +0000 UTC m=+0.096386592 container health_status 0f5ab5d7932ad1b9157c9f08a07fae25ebb3fb5306ddc24f0a2106b630615afd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct 02 13:46:59 compute-1 podman[333296]: 2025-10-02 13:46:59.07932206 +0000 UTC m=+0.109471373 container health_status 0afe8942cbdab0979a479f761f89e190de8b0a095195939d9b569a5d4f958409 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a)
Oct 02 13:46:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:46:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:59 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:46:59 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:46:59 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:46:59.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:46:59 compute-1 sshd-session[333266]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:46:59 compute-1 sshd-session[333266]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:47:00 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:00 compute-1 ceph-mon[80926]: pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:01.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/425010776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:01 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/811443912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:01 compute-1 sshd-session[333266]: Failed password for invalid user admin from 196.251.118.184 port 50366 ssh2
Oct 02 13:47:01 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:01 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:01 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.122 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.123 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.124 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.124 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:47:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Oct 02 13:47:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/473589523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:02 compute-1 ceph-mon[80926]: from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:02 compute-1 ceph-mon[80926]: pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1073257260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/853717363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:02 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/473589523' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:02 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:47:02 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1498764034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.578 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.736 2 WARNING nova.virt.libvirt.driver [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.737 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4094MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.738 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:02 compute-1 nova_compute[230518]: 2025-10-02 13:47:02.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.046 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.046 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.191 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing inventories for resource provider 730da6ce-9754-46f0-88e3-0019d056443f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.234 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating ProviderTree inventory for provider 730da6ce-9754-46f0-88e3-0019d056443f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.235 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Updating inventory in ProviderTree for provider 730da6ce-9754-46f0-88e3-0019d056443f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.253 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing aggregate associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.278 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Refreshing trait associations for resource provider 730da6ce-9754-46f0-88e3-0019d056443f, traits: COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_ACCELERATORS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 02 13:47:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:03.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.294 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.38202 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.47950 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.47231 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.38214 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.47965 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1498764034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3447872947' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:03 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1402390930' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 02 13:47:03 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:03 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:03 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:03 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct 02 13:47:03 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/240607502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.745 2 DEBUG oslo_concurrency.processutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.751 2 DEBUG nova.compute.provider_tree [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed in ProviderTree for provider: 730da6ce-9754-46f0-88e3-0019d056443f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.825 2 DEBUG nova.scheduler.client.report [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Inventory has not changed for provider 730da6ce-9754-46f0-88e3-0019d056443f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.827 2 DEBUG nova.compute.resource_tracker [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 02 13:47:03 compute-1 nova_compute[230518]: 2025-10-02 13:47:03.827 2 DEBUG oslo_concurrency.lockutils [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:04 compute-1 nova_compute[230518]: 2025-10-02 13:47:04.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:04 compute-1 sshd-session[333266]: Connection closed by invalid user admin 196.251.118.184 port 50366 [preauth]
Oct 02 13:47:04 compute-1 ceph-mon[80926]: pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:04 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/240607502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 02 13:47:05 compute-1 ovs-vsctl[333638]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 02 13:47:05 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:05 compute-1 ceph-mon[80926]: pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 02 13:47:05 compute-1 ceph-mon[80926]: from='client.? 192.168.122.10:0/2758744299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 02 13:47:05 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:05 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:05 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:05.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:05 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 02 13:47:05 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 02 13:47:06 compute-1 virtqemud[230067]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 02 13:47:06 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: cache status {prefix=cache status} (starting...)
Oct 02 13:47:06 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:06 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: client ls {prefix=client ls} (starting...)
Oct 02 13:47:06 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:06 compute-1 lvm[333984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 02 13:47:06 compute-1 lvm[333984]: VG ceph_vg0 finished
Oct 02 13:47:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: damage ls {prefix=damage ls} (starting...)
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Oct 02 13:47:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3622600288' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump loads {prefix=dump loads} (starting...)
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:07 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:07 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-1 nova_compute[230518]: 2025-10-02 13:47:07.829 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:07 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct 02 13:47:07 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1778923128' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:07 compute-1 nova_compute[230518]: 2025-10-02 13:47:07.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 02 13:47:07 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct 02 13:47:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/571185985' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.47276 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.47995 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3622600288' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3618812975' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1778923128' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1511379920' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/571185985' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1879570657' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4261183795' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:08 compute-1 sshd-session[333944]: Invalid user admin from 196.251.118.184 port 40948
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct 02 13:47:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2798236164' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct 02 13:47:08 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3224226010' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: ops {prefix=ops} (starting...)
Oct 02 13:47:08 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:08 compute-1 sshd-session[333944]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:47:08 compute-1 sshd-session[333944]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:47:09 compute-1 nova_compute[230518]: 2025-10-02 13:47:09.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:47:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1393102153' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.47288 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.48016 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.38256 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.38268 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.47312 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2739593765' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2798236164' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3224226010' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2575496155' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2997921120' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3094434344' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1393102153' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3431451301' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: session ls {prefix=session ls} (starting...)
Oct 02 13:47:09 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq Can't run that command on an inactive MDS!
Oct 02 13:47:09 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:09 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:09 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:09.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:09 compute-1 ceph-mds[84183]: mds.cephfs.compute-1.bhscyq asok_command: status {prefix=status} (starting...)
Oct 02 13:47:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:47:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3569749802' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:09 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Oct 02 13:47:09 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2328061460' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:47:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1056629558' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct 02 13:47:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1861714374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.48046 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.47333 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.48070 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.38310 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.47357 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3569749802' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4046414643' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1621607900' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/356698630' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/165376371' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2328061460' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1056629558' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/649391731' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2403507624' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2648762803' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1861714374' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:47:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2714691879' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:10 compute-1 sshd-session[333944]: Failed password for invalid user admin from 196.251.118.184 port 40948 ssh2
Oct 02 13:47:10 compute-1 podman[334526]: 2025-10-02 13:47:10.825089867 +0000 UTC m=+0.064894696 container health_status a735438f89162f51ad5328469e1a59f38f3b17f3dc134b91878dd9a8f2af6b1a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 02 13:47:10 compute-1 podman[334523]: 2025-10-02 13:47:10.844599458 +0000 UTC m=+0.086507933 container health_status 362d4709545e411dd3e864f47ce095746f2e713601fe96d3b8852aeb4e64cfb9 (image=quay.io/podified-antelope-centos9/openstack-iscsid:current-podified, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=a0eac564d779a7eaac46c9816bff261a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_managed=true)
Oct 02 13:47:10 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct 02 13:47:10 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2721180045' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct 02 13:47:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2241957471' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:11.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:47:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4215150955' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.48088 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.38343 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2714691879' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3411214287' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/185257415' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2721180045' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3505966985' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/635537241' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3993059270' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2406744526' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2241957471' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4215150955' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:11 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1295786025' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:11 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:11 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:11 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:11 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct 02 13:47:11 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1977807061' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-1 nova_compute[230518]: 2025-10-02 13:47:12.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct 02 13:47:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1645271435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.38355 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.48142 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.47405 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4046619135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/809149563' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3500950542' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1977807061' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2425987190' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4212330445' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1645271435' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2013296607' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 02 13:47:12 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct 02 13:47:12 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/392775669' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:12 compute-1 nova_compute[230518]: 2025-10-02 13:47:12.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:13 compute-1 nova_compute[230518]: 2025-10-02 13:47:13.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct 02 13:47:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564654829' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:14:59.992867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4979220 data_alloc: 234881024 data_used: 24694784
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:00.992996+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483459072 unmapped: 47448064 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:01.993121+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19eff8000/0x0/0x1bfc00000, data 0x370cba3/0x3916000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,8])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:02.993268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:03.993488+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434ff0d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:04.993663+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:05.993782+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:06.993896+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:07.994042+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:08.994190+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 66K writes, 260K keys, 66K commit groups, 1.0 writes per commit group, ingest: 0.25 GB, 0.05 MB/s
                                           Cumulative WAL: 66K writes, 24K syncs, 2.72 writes per sync, written: 0.25 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8130 writes, 30K keys, 8130 commit groups, 1.0 writes per commit group, ingest: 33.16 MB, 0.06 MB/s
                                           Interval WAL: 8131 writes, 3070 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:09.994344+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 60735488 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets getting new tickets!
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.994594+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _finish_auth 0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:10.996224+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:11.994769+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:12.994947+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:13.995089+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:14.995228+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470179840 unmapped: 60727296 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3e000 session 0x559437e561e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559436f7e3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559437bcb680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4783696 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559436ec6d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:15.995354+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.497853279s of 17.155778885s, submitted: 27
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x5594350d6c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: get_auth_request con 0x559436c3e000 auth_method 0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19ff78000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470188032 unmapped: 60719104 heap: 530907136 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:16.995548+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436c3d800 session 0x559437c432c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5dc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479182848 unmapped: 55402496 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943a64d800 session 0x559436dde960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594450ed400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559437cb0780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:17.995684+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436c50f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594370a1000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594370a1000 session 0x559436c51a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559434c623c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x5594352863c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434c6e960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:18.995807+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:19.995913+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858586 data_alloc: 218103808 data_used: 12795904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559436ddfc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:20.996056+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f6a0000/0x0/0x1bfc00000, data 0x3064b50/0x326e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea0400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea0400 session 0x559434c63a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470597632 unmapped: 63987712 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:21.996189+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434c65680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470532096 unmapped: 64053248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:22.996360+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470540288 unmapped: 64045056 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:23.996514+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:24.996665+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:25.996815+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:26.996966+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:27.997406+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:28.997514+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:29.997649+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918653 data_alloc: 234881024 data_used: 20762624
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:30.997790+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:31.997915+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:32.998104+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19f69e000/0x0/0x1bfc00000, data 0x3064b83/0x3270000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1d2ef9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:33.998246+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470171648 unmapped: 64413696 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.158052444s of 18.919492722s, submitted: 23
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:34.998420+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473210880 unmapped: 61374464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:35.998542+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955009 data_alloc: 234881024 data_used: 21942272
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473415680 unmapped: 61169664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:36.998736+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:37.998917+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e12d000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:38.999106+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:39.999323+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:40.999464+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959757 data_alloc: 234881024 data_used: 21839872
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:41.999643+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473423872 unmapped: 61161472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:42.999790+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437bcab40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:43.999925+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:45.000105+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:46.000346+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953597 data_alloc: 234881024 data_used: 21839872
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:47.000498+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:48.000629+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:49.000815+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e136000/0x0/0x1bfc00000, data 0x342cb83/0x3638000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559437c57e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437bcb860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472940544 unmapped: 61644800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.585352898s of 14.913706779s, submitted: 79
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:50.001037+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:51.001232+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559434c630e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:52.001381+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:53.001537+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:54.001700+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:55.001872+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:56.002022+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467542016 unmapped: 67043328 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:57.002317+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:58.002491+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:15:59.002637+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:00.002751+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:01.002873+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4794544 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:02.003086+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:03.003331+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:04.003490+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467550208 unmapped: 67035136 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:05.003678+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd7000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.042637825s of 15.562180519s, submitted: 27
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468082688 unmapped: 66502656 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437b0b680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559436ddfc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559435e4f0e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:06.004001+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f94800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883222 data_alloc: 218103808 data_used: 12795904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436f94800 session 0x559437b0a960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x5594351521e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467337216 unmapped: 67248128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:07.004178+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:08.004434+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:09.004611+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e333000/0x0/0x1bfc00000, data 0x3231ba3/0x343b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:10.004886+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:11.005031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883106 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467345408 unmapped: 67239936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559435f61e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:12.005152+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358bf400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467353600 unmapped: 67231744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:13.005355+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:14.005513+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:15.005721+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:16.005917+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:17.006444+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:18.006597+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:19.006818+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:20.006972+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:21.007136+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4963611 data_alloc: 234881024 data_used: 23748608
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:22.007387+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:23.007518+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19e332000/0x0/0x1bfc00000, data 0x3231bc6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467460096 unmapped: 67125248 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:24.007660+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.326089859s of 19.245832443s, submitted: 49
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:25.007825+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19dbdd000/0x0/0x1bfc00000, data 0x3980bc6/0x3b8b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471351296 unmapped: 63234048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:26.008006+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5037977 data_alloc: 234881024 data_used: 24559616
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:27.008192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:28.008470+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db4a000/0x0/0x1bfc00000, data 0x3a13bc6/0x3c1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:29.008619+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:30.008767+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471777280 unmapped: 62808064 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:31.008893+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039897 data_alloc: 234881024 data_used: 24702976
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:32.009044+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:33.009473+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:34.009729+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2e000/0x0/0x1bfc00000, data 0x3a35bc6/0x3c40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d785c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x559437e56f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:35.009973+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:36.010135+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039685 data_alloc: 234881024 data_used: 24715264
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471916544 unmapped: 62668800 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:37.010317+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:38.010443+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:39.010578+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435286d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db2d000/0x0/0x1bfc00000, data 0x3a35bd5/0x3c41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:40.010716+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.604205132s of 15.311234474s, submitted: 113
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594358bf400 session 0x559434ce12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x5594386f9400 session 0x559437cb0d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 471932928 unmapped: 62652416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:41.010845+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5039749 data_alloc: 234881024 data_used: 24715264
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19db28000/0x0/0x1bfc00000, data 0x3a3abd5/0x3c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:42.011107+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437e56b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:43.011230+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:44.011382+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:45.011547+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:46.011702+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809599 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd6000/0x0/0x1bfc00000, data 0x278db50/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466722816 unmapped: 67862528 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:47.011967+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466731008 unmapped: 67854336 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559437e565a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:48.012156+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466755584 unmapped: 67829760 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:49.012365+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943d785c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x55943d785c00 session 0x5594351423c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd8000/0x0/0x1bfc00000, data 0x278db41/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466812928 unmapped: 67772416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:50.012520+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.131870270s of 10.003231049s, submitted: 300
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466829312 unmapped: 67756032 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:51.014535+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809039 data_alloc: 218103808 data_used: 12795904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559434fee000 session 0x559434ce0960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:52.016439+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:53.017456+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:54.018153+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:55.019652+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:56.020151+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4807462 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:57.021907+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:58.022258+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:16:59.022502+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:00.022710+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edda000/0x0/0x1bfc00000, data 0x278dacf/0x2994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.209687233s of 10.373024940s, submitted: 38
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 ms_handle_reset con 0x559436ea1c00 session 0x559435f643c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:01.023122+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4809244 data_alloc: 218103808 data_used: 12791808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 heartbeat osd_stat(store_statfs(0x19edd9000/0x0/0x1bfc00000, data 0x278db31/0x2995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:02.023415+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:03.023556+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x5594386f9400 session 0x559436ec6b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:04.023817+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea2400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 396 ms_handle_reset con 0x559436ea2400 session 0x559437e52f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 396 heartbeat osd_stat(store_statfs(0x19edd4000/0x0/0x1bfc00000, data 0x278f7ec/0x2999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434792c00 session 0x559437105e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:05.024256+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 397 ms_handle_reset con 0x559434fee000 session 0x559435152960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:06.024663+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4816765 data_alloc: 218103808 data_used: 12808192
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:07.024982+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:08.025337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 397 heartbeat osd_stat(store_statfs(0x19edd3000/0x0/0x1bfc00000, data 0x27913d5/0x299a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:09.025509+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:10.025717+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:11.026044+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466837504 unmapped: 67747840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.111035347s of 10.823334694s, submitted: 35
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:12.026207+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:13.026492+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:14.026663+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:15.026869+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466853888 unmapped: 67731456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:16.027043+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:17.027269+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:18.027577+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:19.027820+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466870272 unmapped: 67715072 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:20.028173+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:21.028351+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:22.028499+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:23.028713+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:24.028866+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:25.029116+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466878464 unmapped: 67706880 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:26.029719+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4819739 data_alloc: 218103808 data_used: 12808192
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:27.030231+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:28.030671+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 heartbeat osd_stat(store_statfs(0x19edd0000/0x0/0x1bfc00000, data 0x2792f14/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:29.031026+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.684392929s of 18.692880630s, submitted: 12
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:30.031349+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466886656 unmapped: 67698688 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437b0ad20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:31.031590+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4824530 data_alloc: 218103808 data_used: 12816384
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:32.031801+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edcd000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:33.031991+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:34.032171+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:35.032349+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:36.032526+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bcad20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466903040 unmapped: 67682304 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435de12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4823650 data_alloc: 218103808 data_used: 12816384
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:37.032742+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:38.032901+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559437bca960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19edce000/0x0/0x1bfc00000, data 0x2794b6d/0x29a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436fa2780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559437d090e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559435d8d860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55944816f000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55944816f000 session 0x559435142b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:39.033093+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:40.033201+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:41.033571+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434792c00 session 0x559436c514a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4851916 data_alloc: 218103808 data_used: 12816384
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:42.033836+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559434fee000 session 0x559436c4d860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 466911232 unmapped: 67674112 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x559436ea1c00 session 0x559435152d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:43.034027+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.884056091s of 13.213858604s, submitted: 20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x5594386f9400 session 0x559437bca780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467066880 unmapped: 67518464 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:44.034334+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467075072 unmapped: 67510272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:45.034490+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:46.034620+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4883747 data_alloc: 218103808 data_used: 16490496
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:47.034827+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:48.035003+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:49.035356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467091456 unmapped: 67493888 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:50.035586+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 ms_handle_reset con 0x55943cdd2800 session 0x5594351423c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 heartbeat osd_stat(store_statfs(0x19ea25000/0x0/0x1bfc00000, data 0x2b3bba0/0x2d49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:51.035748+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467107840 unmapped: 67477504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4885237 data_alloc: 218103808 data_used: 16494592
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:52.036012+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:53.036262+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.991956711s of 10.035635948s, submitted: 8
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:54.036670+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467116032 unmapped: 67469312 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:55.036969+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19ea20000/0x0/0x1bfc00000, data 0x2b3d85b/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1e48f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559434fee000 session 0x559437c572c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467124224 unmapped: 67461120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #56. Immutable memtables: 12.
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:56.037161+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468312064 unmapped: 66273280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4909236 data_alloc: 218103808 data_used: 16621568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:57.037385+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:58.037563+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:17:59.037699+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:00.037867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:01.038044+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:02.038188+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:03.038340+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:04.038523+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:05.038761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:06.038949+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:07.039105+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:08.039352+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:09.039655+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:10.039829+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:11.039970+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4913386 data_alloc: 218103808 data_used: 16621568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:12.040121+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.320636749s of 19.659267426s, submitted: 22
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 ms_handle_reset con 0x559436ea1c00 session 0x5594370ede00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:13.040357+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 heartbeat osd_stat(store_statfs(0x19d727000/0x0/0x1bfc00000, data 0x2c9685b/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469295104 unmapped: 65290240 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x5594386f9400 session 0x559436c512c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d729000/0x0/0x1bfc00000, data 0x2c967f9/0x2ea5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:14.040579+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:15.040793+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:16.040925+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4914910 data_alloc: 218103808 data_used: 16629760
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:17.041081+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 heartbeat osd_stat(store_statfs(0x19d725000/0x0/0x1bfc00000, data 0x2c984a6/0x2ea8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:18.041211+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:19.041331+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436cb0c00 session 0x559436fa2000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559436c4a000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559434792c00 session 0x559434ce0780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:20.041487+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 ms_handle_reset con 0x559436c5a800 session 0x559434c63a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:21.041640+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469278720 unmapped: 65306624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4917912 data_alloc: 218103808 data_used: 16633856
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:22.041876+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470343680 unmapped: 64241664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:23.042125+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:24.042353+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:25.042587+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:26.042727+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:27.043330+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:28.043543+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:29.044115+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:30.044302+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:31.044796+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918364 data_alloc: 218103808 data_used: 16687104
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:32.044938+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:33.045189+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 64233472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:34.045460+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:35.045618+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:36.045898+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918844 data_alloc: 218103808 data_used: 16736256
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:37.046200+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:38.046457+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d722000/0x0/0x1bfc00000, data 0x2c99fe5/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:39.046723+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.274785995s of 26.369352341s, submitted: 37
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:40.046903+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:41.047109+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920738 data_alloc: 218103808 data_used: 16736256
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:42.047321+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:43.047460+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:44.047806+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d71d000/0x0/0x1bfc00000, data 0x2c9ffe5/0x2eb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1f62f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:45.048150+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:46.048449+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922338 data_alloc: 218103808 data_used: 17092608
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:47.048571+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470368256 unmapped: 64217088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:48.048881+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:49.049146+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:50.049481+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:51.049697+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4922590 data_alloc: 218103808 data_used: 17092608
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.667269707s of 12.846582413s, submitted: 6
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:52.049840+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436cb0c00 session 0x559435142780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:53.050080+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x5594386f9400 session 0x559434c8c5a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x559434c63c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:54.050226+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470376448 unmapped: 64208896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559434fee000 session 0x559435f65680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:55.050389+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 ms_handle_reset con 0x55943cdd2800 session 0x5594370ecf00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:56.050631+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d30c000/0x0/0x1bfc00000, data 0x2ca0fe5/0x2eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843049 data_alloc: 218103808 data_used: 12832768
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:57.050795+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:58.051028+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:18:59.051191+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 heartbeat osd_stat(store_statfs(0x19d814000/0x0/0x1bfc00000, data 0x2799fb2/0x29a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:00.051467+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:01.051629+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467795968 unmapped: 66789376 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4843097 data_alloc: 218103808 data_used: 12832768
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:02.051730+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.570983887s of 10.014533043s, submitted: 55
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 403 ms_handle_reset con 0x559434792c00 session 0x559436dde780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:03.051880+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5a800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467828736 unmapped: 66756608 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:04.052026+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 403 heartbeat osd_stat(store_statfs(0x19d811000/0x0/0x1bfc00000, data 0x279bc5f/0x29ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 404 ms_handle_reset con 0x559436c5a800 session 0x559437e565a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:05.052230+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:06.052378+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:07.052621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:08.052756+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467836928 unmapped: 66748416 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:09.052904+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:10.053039+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:11.053181+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 404 heartbeat osd_stat(store_statfs(0x19d80e000/0x0/0x1bfc00000, data 0x279d8d4/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849429 data_alloc: 218103808 data_used: 12840960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:12.053336+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d09680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559436c50f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c6e960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:13.053490+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467861504 unmapped: 66723840 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:14.053600+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:15.053748+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:16.053872+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4852403 data_alloc: 218103808 data_used: 12840960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:17.053998+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:18.054120+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559437b0b4a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb0c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467869696 unmapped: 66715648 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80b000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:19.054269+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.887578964s of 17.205017090s, submitted: 46
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb0c00 session 0x559435de1e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594351d4d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467877888 unmapped: 66707456 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559437d08d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x5594377ae780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:20.054455+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:21.054591+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4888135 data_alloc: 218103808 data_used: 12840960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:22.054747+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594386f9400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x5594386f9400 session 0x559437d081e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:23.054896+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:24.055063+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467845120 unmapped: 66740224 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:25.055218+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:26.055350+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:27.055464+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:28.055586+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:29.055734+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.796065331s of 10.041505814s, submitted: 15
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436ea1c00 session 0x559434c8c5a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:30.055880+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:31.056007+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4920884 data_alloc: 218103808 data_used: 17362944
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:32.056260+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x2bf2413/0x2e05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:33.056566+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:34.056736+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:35.056994+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467763200 unmapped: 66822144 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:36.057186+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 467886080 unmapped: 66699264 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4926114 data_alloc: 218103808 data_used: 17371136
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:37.057329+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472064000 unmapped: 62521344 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:38.057556+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19ceb5000/0x0/0x1bfc00000, data 0x30f6413/0x3309000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:39.057726+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.967789173s of 10.081938744s, submitted: 32
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468131840 unmapped: 66453504 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:40.057900+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468148224 unmapped: 66437120 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:41.058052+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4969600 data_alloc: 218103808 data_used: 17580032
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:42.058207+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468099072 unmapped: 66486272 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:43.058373+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda9000/0x0/0x1bfc00000, data 0x3202413/0x3415000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,5])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cda1000/0x0/0x1bfc00000, data 0x3208413/0x341b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:44.058550+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:45.058842+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:46.059071+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:47.059192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:48.059394+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468189184 unmapped: 66396160 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:49.059558+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:50.059727+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:51.059871+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:52.060007+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:53.060210+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:54.060356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:55.060534+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:56.060673+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468197376 unmapped: 66387968 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974878 data_alloc: 218103808 data_used: 17403904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:57.060801+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.758423805s of 18.601793289s, submitted: 23
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:58.060962+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:19:59.061108+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:00.061245+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:01.061371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:02.061486+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468205568 unmapped: 66379776 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:03.061635+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:04.061824+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:05.062013+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468213760 unmapped: 66371584 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:06.062181+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:07.062346+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4977182 data_alloc: 218103808 data_used: 17395712
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:08.062522+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468221952 unmapped: 66363392 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:09.062645+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.873358727s of 11.896842003s, submitted: 17
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:10.062918+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:11.063240+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:12.063440+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4975934 data_alloc: 218103808 data_used: 17391616
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x559437d08d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559435ddc1e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x55943cdd2800 session 0x559434c6e960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468230144 unmapped: 66355200 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:13.063594+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437ead000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436cb1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:14.063766+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468238336 unmapped: 66347008 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:15.063928+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:16.064091+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:17.064246+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:18.064384+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:19.064526+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:20.064679+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:21.064856+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:22.065011+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4974974 data_alloc: 218103808 data_used: 17502208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:23.065163+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468246528 unmapped: 66338816 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:24.065348+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 468254720 unmapped: 66330624 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:25.065672+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.424508095s of 15.441400528s, submitted: 12
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469303296 unmapped: 65282048 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:26.065820+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:27.066046+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:28.066206+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:29.066375+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:30.066513+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:31.066634+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:32.066781+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:33.066943+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:34.067103+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:35.067270+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:36.067426+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:37.067575+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4983214 data_alloc: 218103808 data_used: 17915904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:38.067711+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:39.067849+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:40.067986+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.449235916s of 15.471648216s, submitted: 15
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559436cb1c00 session 0x559437cb0d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:41.068151+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:42.068268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4982142 data_alloc: 218103808 data_used: 17915904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19cd95000/0x0/0x1bfc00000, data 0x3216413/0x3429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434792c00 session 0x5594371054a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:43.068484+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437ead000 session 0x559436c4ba40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559437438000 session 0x559435ddd680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469311488 unmapped: 65273856 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:44.068583+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 ms_handle_reset con 0x559434fee000 session 0x559437d09680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:45.068816+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:46.068999+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469319680 unmapped: 65265664 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:47.069147+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4861028 data_alloc: 218103808 data_used: 12840960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 heartbeat osd_stat(store_statfs(0x19d80c000/0x0/0x1bfc00000, data 0x279f413/0x29b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:48.069308+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469327872 unmapped: 65257472 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:49.069455+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:50.069575+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:51.069701+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 406 ms_handle_reset con 0x559436ea1c00 session 0x559437cb12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:52.069852+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4865202 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469336064 unmapped: 65249280 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:53.069977+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:54.070107+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:55.070341+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d808000/0x0/0x1bfc00000, data 0x27a10c0/0x29b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:56.070448+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.777113914s of 15.972728729s, submitted: 61
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:57.070591+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469344256 unmapped: 65241088 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:58.070753+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:20:59.070896+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:00.071013+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:01.071161+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:02.071301+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469352448 unmapped: 65232896 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:03.071413+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:04.071563+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:05.071714+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:06.071875+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:07.072008+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469360640 unmapped: 65224704 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:08.072188+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:09.072363+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:10.072510+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:11.072675+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469368832 unmapped: 65216512 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:12.072834+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:13.072967+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:14.073104+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:15.073335+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:16.073493+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469377024 unmapped: 65208320 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:17.073647+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469385216 unmapped: 65200128 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:18.073796+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:19.073942+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:20.074108+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:21.074235+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:22.074343+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469393408 unmapped: 65191936 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:23.074477+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:24.074650+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:25.074853+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:26.075037+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469401600 unmapped: 65183744 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:27.075207+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:28.075368+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:29.075588+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:30.075835+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:31.076074+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:32.076333+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469409792 unmapped: 65175552 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:33.076618+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469417984 unmapped: 65167360 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:34.076844+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:35.077134+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:36.077388+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:37.077564+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4868176 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:38.077841+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:39.078093+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469426176 unmapped: 65159168 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:40.078374+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:41.078908+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469434368 unmapped: 65150976 heap: 534585344 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 45.048465729s of 45.059295654s, submitted: 15
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x5594351d41e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436fa2960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:42.079115+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559435286000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559437c425a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437ead000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437ead000 session 0x559435153860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:43.079316+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469786624 unmapped: 73195520 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:44.079503+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:45.079821+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1b000/0x0/0x1bfc00000, data 0x318dbff/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:46.080051+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559436ddeb40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559435f65860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:47.080252+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4943462 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:48.080487+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437b0b860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436c4a3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943cdd2800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:49.080637+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 469794816 unmapped: 73187328 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:50.081309+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470351872 unmapped: 72630272 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:51.081523+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:52.082344+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:53.082780+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470941696 unmapped: 72040448 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:54.083621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:55.083927+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:56.084211+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ce1a000/0x0/0x1bfc00000, data 0x318dc0f/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:57.084341+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5020337 data_alloc: 234881024 data_used: 23162880
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:58.084668+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:21:59.084825+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:00.085268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.403457642s of 18.507741928s, submitted: 19
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 470949888 unmapped: 72032256 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:01.085640+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c465000/0x0/0x1bfc00000, data 0x3b3cc0f/0x3d53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:02.086007+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:03.086236+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:04.086401+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:05.086613+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c427000/0x0/0x1bfc00000, data 0x3b78c0f/0x3d8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:06.086766+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:07.087045+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5109183 data_alloc: 234881024 data_used: 24010752
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:08.087220+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:09.087447+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:10.087561+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:11.087715+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42c000/0x0/0x1bfc00000, data 0x3b7bc0f/0x3d92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:12.087891+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5102963 data_alloc: 234881024 data_used: 24010752
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.284965515s of 12.586762428s, submitted: 112
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:13.089075+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:14.089968+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:15.090421+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x55943cdd2800 session 0x559435ddc780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437c42780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:16.090772+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473686016 unmapped: 69296128 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c42b000/0x0/0x1bfc00000, data 0x3b7cc0f/0x3d93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559434c63a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:17.091406+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:18.091909+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:19.092175+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:20.092323+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:21.092796+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:22.093339+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:23.093800+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:24.094055+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:25.094362+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:26.094734+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473694208 unmapped: 69287936 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:27.094885+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:28.095173+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:29.095436+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:30.095639+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:31.095854+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:32.096038+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:33.096199+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:34.096355+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:35.096590+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:36.096738+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:37.096876+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:38.097065+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:39.097239+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 69279744 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:40.097425+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:41.097558+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:42.097711+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:43.097879+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:44.098013+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:45.098187+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473710592 unmapped: 69271552 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:46.098398+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:47.098599+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:48.098745+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:49.098992+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:50.099196+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:51.099398+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:52.099602+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 69263360 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:53.099761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:54.099916+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:55.100099+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:56.100321+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:57.100539+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:58.100701+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:22:59.100998+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:00.101215+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 69255168 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:01.101453+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:02.101642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:03.101820+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 69246976 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:04.101987+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:05.102162+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:06.102300+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:07.102494+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:08.102686+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 69238784 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:09.102854+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:10.103000+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d806000/0x0/0x1bfc00000, data 0x27a2bff/0x29b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:11.103146+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:12.103364+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4879042 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:13.103512+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:14.103675+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x5594370ed4a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559434c65e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559437438000 session 0x559436ec6b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559435152780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 69230592 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 61.791675568s of 61.911861420s, submitted: 35
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:15.103899+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559436f0f4a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d805000/0x0/0x1bfc00000, data 0x27a2c0f/0x29b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559437e53c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559437c42f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350d7000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594350d7000 session 0x559437c565a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434792c00 session 0x559437b0bc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:16.104087+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:17.104240+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559434fee000 session 0x559437e52960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4918731 data_alloc: 218103808 data_used: 12849152
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:18.104358+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559435915c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f5000/0x0/0x1bfc00000, data 0x2cb2c0f/0x2ec9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473767936 unmapped: 69214208 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559436ea1c00 session 0x559436f6b680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358acc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:19.104514+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x5594358acc00 session 0x559437c56b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:20.104828+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:21.105086+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:22.105337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:23.105478+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:24.105641+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:25.105827+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:26.105981+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473784320 unmapped: 69197824 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:27.106132+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4959857 data_alloc: 218103808 data_used: 18165760
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:28.106266+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:29.106439+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:30.106634+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:31.106851+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19d2f4000/0x0/0x1bfc00000, data 0x2cb2c32/0x2eca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473792512 unmapped: 69189632 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:32.106974+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.014217377s of 17.533912659s, submitted: 26
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 474546176 unmapped: 68435968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5014409 data_alloc: 218103808 data_used: 18165760
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:33.107097+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:34.107340+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:35.107564+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 475389952 unmapped: 67592192 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:36.107772+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:37.107921+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7e2000/0x0/0x1bfc00000, data 0x37c4c32/0x39dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5048133 data_alloc: 218103808 data_used: 18374656
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:38.108120+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:39.108373+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:40.108559+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:41.108749+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 heartbeat osd_stat(store_statfs(0x19c7dc000/0x0/0x1bfc00000, data 0x37cac32/0x39e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:42.108911+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476438528 unmapped: 66543616 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5047641 data_alloc: 218103808 data_used: 18374656
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:43.109114+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:44.109320+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 ms_handle_reset con 0x559435132800 session 0x559436ec72c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.816077232s of 12.077077866s, submitted: 68
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476446720 unmapped: 66535424 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:45.109534+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [408,408], i have 408, src has [1,408]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436ea1c00 session 0x559435ddc000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 heartbeat osd_stat(store_statfs(0x19c7d8000/0x0/0x1bfc00000, data 0x37cec32/0x39e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476463104 unmapped: 66519040 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:46.109955+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559434792800 session 0x559436ddf4a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559436e8c800 session 0x559436fa34a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf9800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476594176 unmapped: 66387968 heap: 542982144 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:47.110111+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487800832 unmapped: 61882368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5197727 data_alloc: 234881024 data_used: 29822976
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:48.110371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 ms_handle_reset con 0x559442cf9800 session 0x5594351d4f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487817216 unmapped: 61865984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:49.110598+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 409 ms_handle_reset con 0x559434792800 session 0x559434ce12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487825408 unmapped: 61857792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:50.110844+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:51.111028+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488038400 unmapped: 61644800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:52.111231+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214357 data_alloc: 234881024 data_used: 29831168
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:53.111353+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:54.111494+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:55.111692+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488046592 unmapped: 61636608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:56.111988+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 410 heartbeat osd_stat(store_statfs(0x19ba47000/0x0/0x1bfc00000, data 0x455a1ad/0x4775000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.516270638s of 12.276865005s, submitted: 37
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 sshd-session[333944]: Connection closed by invalid user admin 196.251.118.184 port 40948 [preauth]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:57.112189+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436e8c800 session 0x559436c4d680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437bcb4a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436ea1c00 session 0x559435f67c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594384a4000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x5594384a4000 session 0x559435e4e780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559434792800 session 0x559437c43a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:58.112386+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481460224 unmapped: 68222976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:23:59.112593+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481468416 unmapped: 68214784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:00.112832+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:01.113011+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba44000/0x0/0x1bfc00000, data 0x455bd4e/0x4779000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:02.113191+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5201257 data_alloc: 234881024 data_used: 29831168
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:03.113360+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481476608 unmapped: 68206592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:04.113491+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559435132800 session 0x559437c42b40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481624064 unmapped: 68059136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:05.113606+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481632256 unmapped: 68050944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:06.113706+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 481894400 unmapped: 67788800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:07.114061+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:08.114310+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:09.114510+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:10.114798+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:11.114968+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:12.115220+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5258226 data_alloc: 251658240 data_used: 37351424
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:13.115467+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.335296631s of 16.486038208s, submitted: 17
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:14.115708+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:15.116113+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19ba20000/0x0/0x1bfc00000, data 0x457fd71/0x479e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:16.116373+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482615296 unmapped: 67067904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:17.116501+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482648064 unmapped: 67035136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5280461 data_alloc: 251658240 data_used: 38633472
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:18.116728+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:19.116863+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:20.117040+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:21.117193+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:22.117356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5285421 data_alloc: 251658240 data_used: 39116800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:23.117559+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030493736s of 10.215833664s, submitted: 10
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:24.117756+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:25.117946+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:26.118100+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:27.118305+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5286573 data_alloc: 251658240 data_used: 39518208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:28.118541+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:29.118701+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:30.118842+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:31.119046+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482680832 unmapped: 67002368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:32.119224+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:33.119678+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:34.119901+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:35.120154+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:36.120362+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:37.120559+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:38.120915+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5287453 data_alloc: 251658240 data_used: 39518208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:39.121163+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:40.121377+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:41.121519+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:42.121759+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 ms_handle_reset con 0x559436f95400 session 0x559436f0e960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482689024 unmapped: 66994176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:43.122102+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5288733 data_alloc: 251658240 data_used: 39571456
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.819889069s of 19.845161438s, submitted: 9
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559437eac800 session 0x559437c561e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943776fc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb7000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943beb7000 session 0x559437c43e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x55943776fc00 session 0x559437bcaf00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 ms_handle_reset con 0x559434792800 session 0x559434ce0960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 heartbeat osd_stat(store_statfs(0x19b9da000/0x0/0x1bfc00000, data 0x45c5d71/0x47e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487571456 unmapped: 62111744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:44.122268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 413 ms_handle_reset con 0x559435132800 session 0x559435de12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487579648 unmapped: 62103552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:45.122528+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559436f95400 session 0x559434c62000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:46.122860+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x5594377af680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559437eac800 session 0x559435915e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 ms_handle_reset con 0x559434792800 session 0x5594359145a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487227392 unmapped: 62455808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:47.123272+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 heartbeat osd_stat(store_statfs(0x199ba7000/0x0/0x1bfc00000, data 0x63f235e/0x6616000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:48.123626+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5538217 data_alloc: 251658240 data_used: 44425216
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:49.123850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559435132800 session 0x5594351d4d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:50.124032+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436e8c800 session 0x559434c8dc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 415 ms_handle_reset con 0x559436ea1c00 session 0x559434ff1860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 415 heartbeat osd_stat(store_statfs(0x19b9cc000/0x0/0x1bfc00000, data 0x45ccfb5/0x47f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:51.124208+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:52.124357+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 416 ms_handle_reset con 0x559434792800 session 0x559436f6a1e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:53.124541+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5321486 data_alloc: 251658240 data_used: 44027904
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:54.124849+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:55.125091+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:56.125372+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487309312 unmapped: 62373888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 416 heartbeat osd_stat(store_statfs(0x19b9ee000/0x0/0x1bfc00000, data 0x4564a8b/0x4787000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.964550972s of 13.325811386s, submitted: 218
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:57.125496+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487325696 unmapped: 62357504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:58.125689+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483909632 unmapped: 65773568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5137584 data_alloc: 234881024 data_used: 29847552
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434792c00 session 0x5594351d4780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559434fee000 session 0x559436c512c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8c800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:24:59.125848+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483917824 unmapped: 65765376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:00.126040+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473702400 unmapped: 75980800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 ms_handle_reset con 0x559436e8c800 session 0x559436f7f2c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:01.126253+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:02.126576+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:03.126747+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4949106 data_alloc: 218103808 data_used: 12886016
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:04.126908+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 heartbeat osd_stat(store_statfs(0x19d7e4000/0x0/0x1bfc00000, data 0x27b6260/0x29d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:05.127073+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:06.127343+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.211503029s of 10.501904488s, submitted: 95
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:07.127546+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:08.127758+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:09.127999+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 69K writes, 270K keys, 69K commit groups, 1.0 writes per commit group, ingest: 0.26 GB, 0.04 MB/s
                                           Cumulative WAL: 69K writes, 25K syncs, 2.70 writes per sync, written: 0.26 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2949 writes, 9921 keys, 2949 commit groups, 1.0 writes per commit group, ingest: 9.42 MB, 0.02 MB/s
                                           Interval WAL: 2949 writes, 1228 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9770#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5594336a9610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:10.128185+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:11.128352+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:12.128755+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:13.129017+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:14.129251+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:15.129621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:16.129881+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:17.130142+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473718784 unmapped: 75964416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:18.130353+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:19.130565+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:20.130746+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473726976 unmapped: 75956224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:21.130912+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:22.131050+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:23.131250+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:24.131558+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:25.131781+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:26.131967+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:27.132823+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:28.133151+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:29.133661+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:30.134205+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:31.134972+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:32.135169+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:33.135893+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:34.136056+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:35.136638+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:36.137163+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:37.137447+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:38.137871+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953104 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:39.138161+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:40.138311+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:41.138503+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:42.138764+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.431304932s of 35.444664001s, submitted: 15
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:43.139028+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954123 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:44.139233+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436f7f860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:45.139481+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:46.139811+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:47.140107+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:48.140364+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:49.140654+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:50.140892+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:51.141146+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:52.141482+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x5594351530e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:53.141637+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954051 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4d2c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:54.141820+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:55.142006+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437bca3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.737661362s of 12.753558159s, submitted: 5
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473735168 unmapped: 75948032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x559437bcab40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:56.142250+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:57.142423+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:58.142602+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4955630 data_alloc: 218103808 data_used: 12898304
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:25:59.143031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:00.143866+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x559436c4a5a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559436c51680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:01.144034+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:02.144766+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:03.145244+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954907 data_alloc: 218103808 data_used: 12898304
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:04.145865+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473743360 unmapped: 75939840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:05.146365+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.274845123s of 10.169509888s, submitted: 30
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:06.146597+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e02/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473751552 unmapped: 75931648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:07.147065+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559435d8d860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:08.147428+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4954212 data_alloc: 218103808 data_used: 12894208
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:09.147583+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:10.148017+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559435132800 session 0x559437b0be00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436ea1c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 473759744 unmapped: 75923456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559436ea1c00 session 0x5594377afc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7d9f/0x29dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:11.148323+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:12.148482+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792800 session 0x5594352874a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:13.148791+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4953898 data_alloc: 218103808 data_used: 16105472
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:14.148953+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:15.149121+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.661519051s of 10.000641823s, submitted: 19
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19d7e1000/0x0/0x1bfc00000, data 0x27b7e01/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:16.149382+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:17.149587+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 479232000 unmapped: 70451200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:18.149792+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 ms_handle_reset con 0x559434fee000 session 0x559436c4dc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5032045 data_alloc: 218103808 data_used: 16105472
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:19.150016+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 heartbeat osd_stat(store_statfs(0x19cde8000/0x0/0x1bfc00000, data 0x31b1d9f/0x33d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:20.150164+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559434ff12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:21.150294+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:22.150469+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:23.150630+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:24.150875+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:25.151133+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:26.151408+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:27.151653+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:28.151879+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f95400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.522792816s of 12.859356880s, submitted: 17
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559436f95400 session 0x559436dde1e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559437bcb860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:29.152091+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:30.152316+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:31.152465+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:32.152610+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559435ddcd20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:33.152717+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5037331 data_alloc: 218103808 data_used: 16113664
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436f7f680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:34.152816+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559437e53c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde3000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559435132800 session 0x559437b0a3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cdc0000/0x0/0x1bfc00000, data 0x31d7a5a/0x33fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:35.152956+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476889088 unmapped: 72794112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:36.153221+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:37.153369+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:38.153532+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559436f6be00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792c00 session 0x559436c510e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476905472 unmapped: 72777728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5113591 data_alloc: 234881024 data_used: 24510464
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.465334892s of 10.475492477s, submitted: 2
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434fee000 session 0x559435286d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:39.153621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476913664 unmapped: 72769536 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 heartbeat osd_stat(store_statfs(0x19cde4000/0x0/0x1bfc00000, data 0x31b3a5a/0x33da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559437eac800 session 0x559435915860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943776fc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:40.153741+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x55943776fc00 session 0x559437c43860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:41.153939+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 ms_handle_reset con 0x559434792800 session 0x559437b0ad20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434792c00 session 0x559435152d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:42.154097+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:43.154226+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19cde1000/0x0/0x1bfc00000, data 0x31b56a5/0x33dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:44.154360+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559434fee000 session 0x559434c8d680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:45.154510+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:46.154640+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472244224 unmapped: 77438976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:47.154761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472285184 unmapped: 77398016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:48.154879+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4967251 data_alloc: 218103808 data_used: 15532032
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.788832664s of 10.246772766s, submitted: 174
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 heartbeat osd_stat(store_statfs(0x19d7dc000/0x0/0x1bfc00000, data 0x27bb6a5/0x29e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:49.154989+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559437eac800 session 0x559437d08d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436f68800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 472227840 unmapped: 77455360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 ms_handle_reset con 0x559436f68800 session 0x559437d09c20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:50.155118+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476839936 unmapped: 72843264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:51.155239+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476856320 unmapped: 72826880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x5594351d5860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:52.155355+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:53.155521+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4973367 data_alloc: 218103808 data_used: 16130048
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:54.155710+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:55.155892+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d7000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:56.156042+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476880896 unmapped: 72802304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:57.156196+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476897280 unmapped: 72785920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd246/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:58.156440+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 476848128 unmapped: 72835072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4972486 data_alloc: 218103808 data_used: 16134144
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594351d4f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:26:59.156647+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.671635628s of 10.225932121s, submitted: 147
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 478871552 unmapped: 70811648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d7d8000/0x0/0x1bfc00000, data 0x27bd20d/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:00.156790+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436ec6d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477257728 unmapped: 72425472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559437eac800 session 0x559436ddeb40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:01.156948+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:02.157064+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:03.157219+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:04.157412+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:05.157574+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:06.157694+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:07.157875+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:08.158055+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:09.158188+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:10.158481+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:11.158671+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:12.158793+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:13.158969+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:14.159146+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:15.159308+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:16.159495+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:17.159802+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559435132000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559435132000 session 0x559436c4c780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792800 session 0x559437d08960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:18.159956+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5005971 data_alloc: 218103808 data_used: 16130048
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:19.160134+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434792c00 session 0x5594370f4f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.234014511s of 20.326173782s, submitted: 33
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 ms_handle_reset con 0x559434fee000 session 0x559436f7ed20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:20.160258+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:21.160434+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:22.160592+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:23.160791+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:24.160976+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:25.161206+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:26.161389+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:27.161529+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:28.161741+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025344 data_alloc: 218103808 data_used: 18558976
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:29.161905+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:30.162038+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:31.162200+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19d45f000/0x0/0x1bfc00000, data 0x2b36246/0x2d5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fa3f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 477265920 unmapped: 72417280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:32.162342+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.673128128s of 12.695754051s, submitted: 6
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483557376 unmapped: 66125824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19cd68000/0x0/0x1bfc00000, data 0x2e1d246/0x3046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:33.162495+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482705408 unmapped: 66977792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5106862 data_alloc: 234881024 data_used: 19513344
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:34.162621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482721792 unmapped: 66961408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:35.162786+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c637000/0x0/0x1bfc00000, data 0x3548246/0x3771000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:36.162963+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:37.163196+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:38.163406+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5123676 data_alloc: 234881024 data_used: 20525056
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:39.163588+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:40.163737+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:41.163866+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:42.164016+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3565246/0x378e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483680256 unmapped: 66002944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:43.164143+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.902449608s of 11.393979073s, submitted: 107
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484753408 unmapped: 64929792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5121647 data_alloc: 234881024 data_used: 20537344
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:44.164319+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437b0a3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:45.164471+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61e000/0x0/0x1bfc00000, data 0x3565274/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:46.164636+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:47.164761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:48.164920+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:49.165080+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:50.165263+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:51.165472+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c619000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:52.165648+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:53.165781+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5127613 data_alloc: 234881024 data_used: 20541440
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:54.165968+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943a64d400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.034794807s of 11.101709366s, submitted: 10
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x55943a64d400 session 0x559435e4f0e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437bcb860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:55.166182+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:56.166327+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:57.166456+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:58.166628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5126189 data_alloc: 234881024 data_used: 20541440
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:27:59.166763+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559434ff12c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:00.166945+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559435f67e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484769792 unmapped: 64913408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:01.167130+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c61a000/0x0/0x1bfc00000, data 0x356703d/0x3794000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x5594352874a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x5594377afc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:02.167362+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:03.167548+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5134778 data_alloc: 234881024 data_used: 20692992
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:04.167756+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:05.167975+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:06.168140+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.602036476s of 11.628366470s, submitted: 7
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ee000/0x0/0x1bfc00000, data 0x3591070/0x37c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:07.168292+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:08.168476+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157810 data_alloc: 234881024 data_used: 20692992
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:09.168607+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:10.168758+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:11.168928+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:12.169105+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5ec000/0x0/0x1bfc00000, data 0x389c070/0x37c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:13.169345+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485146624 unmapped: 64536576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5157842 data_alloc: 234881024 data_used: 20692992
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:14.169504+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484892672 unmapped: 64790528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:15.169701+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484253696 unmapped: 65429504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:16.169825+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484261888 unmapped: 65421312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:17.169979+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c4ea000/0x0/0x1bfc00000, data 0x399e070/0x38c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:18.170163+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5190426 data_alloc: 234881024 data_used: 23044096
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:19.170318+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:20.170454+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484270080 unmapped: 65413120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:21.170671+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.007882118s of 15.059731483s, submitted: 11
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:22.170816+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:23.174339+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5219236 data_alloc: 234881024 data_used: 23040000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:24.174676+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:25.174840+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:26.175009+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484327424 unmapped: 65355776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:27.175143+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:28.175374+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5213860 data_alloc: 234881024 data_used: 23044096
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:29.175579+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c462000/0x0/0x1bfc00000, data 0x3a53070/0x394c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483991552 unmapped: 65691648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:30.175747+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:31.175963+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:32.176142+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.056324005s of 11.241897583s, submitted: 37
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:33.176353+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c457000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:34.176537+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:35.176702+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:36.176861+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:37.176997+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:38.177175+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5214202 data_alloc: 234881024 data_used: 23044096
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:39.177340+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559436c51680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792c00 session 0x559437cb1a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c45b000/0x0/0x1bfc00000, data 0x3a5a070/0x3953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:40.177448+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:41.177593+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434fee000 session 0x559437d081e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:42.177768+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:43.177928+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c486000/0x0/0x1bfc00000, data 0x3a3003d/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5204715 data_alloc: 234881024 data_used: 22904832
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:44.178080+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:45.178246+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x5594358ad400 session 0x559437cb1e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559442cf8c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.740118027s of 13.175251007s, submitted: 17
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:46.178348+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559442cf8c00 session 0x559437c57a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:47.178546+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 483999744 unmapped: 65683456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:48.178750+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5184485 data_alloc: 234881024 data_used: 22790144
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:49.178923+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 ms_handle_reset con 0x559434792800 session 0x559437d09860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:50.179094+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:51.179349+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 heartbeat osd_stat(store_statfs(0x19c5fe000/0x0/0x1bfc00000, data 0x38b903d/0x37b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:52.179543+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:53.179707+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559437eac800 session 0x559435152d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559436c5bc00 session 0x559436c51e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5172513 data_alloc: 234881024 data_used: 22794240
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:54.179931+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c616000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484007936 unmapped: 65675264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:55.180149+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:56.180350+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.032597542s of 10.602803230s, submitted: 41
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 ms_handle_reset con 0x559434792c00 session 0x559435d8d680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:57.180564+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 heartbeat osd_stat(store_statfs(0x19c617000/0x0/0x1bfc00000, data 0x3568cea/0x3797000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 424 handle_osd_map epochs [425,425], i have 425, src has [1,425]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:58.180751+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5175215 data_alloc: 234881024 data_used: 22802432
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:28:59.180888+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:00.181025+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19c613000/0x0/0x1bfc00000, data 0x356a829/0x379a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434fee000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484016128 unmapped: 65667072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:01.181238+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:02.181381+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484024320 unmapped: 65658880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:03.181533+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5006019 data_alloc: 218103808 data_used: 16154624
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:04.181642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d012000/0x0/0x1bfc00000, data 0x27c2829/0x29f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:05.181766+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 425 ms_handle_reset con 0x559434fee000 session 0x5594351430e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:06.181903+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:07.182066+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 425 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c27c7/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.051541328s of 11.549398422s, submitted: 39
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:08.182238+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:09.182378+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5004722 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:10.182549+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [426,426], i have 426, src has [1,426]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482091008 unmapped: 67592192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:11.182734+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c4456/0x29f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482099200 unmapped: 67584000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:12.182919+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:13.183104+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 ms_handle_reset con 0x559434792800 session 0x559436eab2c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:14.183347+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5007079 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:15.183517+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:16.183665+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:17.183909+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482115584 unmapped: 67567616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:18.184031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 heartbeat osd_stat(store_statfs(0x19d3bc000/0x0/0x1bfc00000, data 0x27c4428/0x29f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.208436966s of 10.811527252s, submitted: 28
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:19.184194+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:20.184343+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:21.184503+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:22.184672+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:23.184879+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:24.185051+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 482140160 unmapped: 67543040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559435d8d0e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559435153860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:25.185371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:26.185564+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3b9000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559436c4b0e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:27.185821+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:28.186063+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:29.186361+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5010053 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594358ad400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.400998116s of 10.534256935s, submitted: 15
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594358ad400 session 0x559436fa30e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:30.186565+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 484704256 unmapped: 64978944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19d3ba000/0x0/0x1bfc00000, data 0x27c5f67/0x29f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,1,0,1,4])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435de10e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:31.186704+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1af90/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792c00 session 0x559436fa30e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:32.186864+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:33.187045+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:34.187388+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:35.187660+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:36.187838+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:37.187970+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485785600 unmapped: 63897600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:38.188084+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:39.188268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:40.188455+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:41.188648+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:42.188891+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:43.189047+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:44.189240+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:45.189551+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485810176 unmapped: 63873024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:46.189767+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:47.189984+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:48.190162+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce64000/0x0/0x1bfc00000, data 0x2d1afc9/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:49.190364+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5056307 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:50.190579+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.333019257s of 21.654922485s, submitted: 47
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:51.190788+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:52.190977+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5bc00 session 0x559437d09860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436e8d000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:53.191147+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:54.191349+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5058092 data_alloc: 218103808 data_used: 16162816
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:55.191544+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:56.191771+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:57.192013+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:58.192190+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:29:59.192374+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5095692 data_alloc: 234881024 data_used: 21016576
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:00.192605+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:01.192824+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ce63000/0x0/0x1bfc00000, data 0x2d1afec/0x2f4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:02.192990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:03.193237+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:04.193434+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5096012 data_alloc: 234881024 data_used: 21024768
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:05.193677+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.898363113s of 14.440773964s, submitted: 5
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:06.193940+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487833600 unmapped: 61849600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:07.194157+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19cb00000/0x0/0x1bfc00000, data 0x307dfec/0x32ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487849984 unmapped: 61833216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:08.194305+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:09.194474+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140300 data_alloc: 234881024 data_used: 21843968
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:10.194600+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:11.194914+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:12.195083+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca7a000/0x0/0x1bfc00000, data 0x3103fec/0x3334000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:13.195254+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488079360 unmapped: 61603840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:14.195417+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5138620 data_alloc: 234881024 data_used: 21848064
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:15.195567+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559437eac800 session 0x559437d081e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436e8d000 session 0x559436f6b680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488087552 unmapped: 61595648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.954394341s of 10.175483704s, submitted: 76
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559434792800 session 0x559435f67e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:16.195728+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:17.195940+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c5dc00 session 0x559435d8c3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x5594450ed400 session 0x559437d08000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c5bc00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559436fa3860
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x5594350d7400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:18.196337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:19.196993+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:20.197171+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:21.197353+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:22.197563+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:23.197776+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:24.198004+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:25.198254+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:26.198478+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:27.198659+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488095744 unmapped: 61587456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:28.198821+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:29.198992+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5139012 data_alloc: 234881024 data_used: 21852160
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:30.199211+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:31.199356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.853529930s of 15.902283669s, submitted: 13
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 ms_handle_reset con 0x559436c6e800 session 0x559435286d20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:32.199530+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:33.199682+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:34.199848+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:35.200151+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:36.200360+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:37.200524+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:38.200925+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:39.201182+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5140531 data_alloc: 234881024 data_used: 21905408
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:40.201421+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:41.201634+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:42.201798+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca49000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:43.202112+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.303579330s of 12.341936111s, submitted: 8
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:44.202382+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5144999 data_alloc: 234881024 data_used: 22102016
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:45.202616+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:46.202850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:47.203179+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:48.203453+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:49.203755+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:50.204065+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:51.204350+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:52.204655+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:53.204826+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488103936 unmapped: 61579264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:54.204973+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5155079 data_alloc: 234881024 data_used: 22765568
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:55.205194+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:56.205351+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:57.205534+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 heartbeat osd_stat(store_statfs(0x19ca47000/0x0/0x1bfc00000, data 0x3135fc9/0x3365000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae61000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.465478897s of 13.490488052s, submitted: 6
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488112128 unmapped: 61571072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,428], i have 428, src has [1,428]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c8d0e0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:58.205825+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488128512 unmapped: 61554688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:30:59.205996+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5177867 data_alloc: 234881024 data_used: 22937600
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:00.206349+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943725c800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x5594377aed20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943725c800 session 0x559435d8c780
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:01.206506+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:02.206774+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:03.206942+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:04.207067+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5179373 data_alloc: 234881024 data_used: 22937600
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:05.207228+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca3c000/0x0/0x1bfc00000, data 0x3304c22/0x3372000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488136704 unmapped: 61546496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:06.207425+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:07.207630+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488144896 unmapped: 61538304 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:08.207863+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488153088 unmapped: 61530112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.730504036s of 11.801014900s, submitted: 24
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:09.208016+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559436f7fc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488456192 unmapped: 61227008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183366 data_alloc: 234881024 data_used: 22941696
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:10.208163+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488464384 unmapped: 61218816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:11.208389+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:12.208600+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:13.208769+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:14.208910+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:15.209062+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:16.209219+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:17.209554+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19ca17000/0x0/0x1bfc00000, data 0x3328c45/0x3397000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:18.209761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:19.209896+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5183818 data_alloc: 234881024 data_used: 22949888
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:20.210018+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:21.210151+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 488480768 unmapped: 61202432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.934249878s of 12.953603745s, submitted: 6
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:22.210320+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:23.210447+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:24.210635+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5225783 data_alloc: 234881024 data_used: 24535040
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:25.210830+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:26.211032+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:27.211253+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 490176512 unmapped: 59506688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:28.211532+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c702000/0x0/0x1bfc00000, data 0x363dc45/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:29.211746+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:30.211904+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:31.212078+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:32.212191+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:33.212321+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:34.212482+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c62f000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5229843 data_alloc: 234881024 data_used: 24678400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:35.212667+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:36.212853+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:37.213119+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.297815323s of 15.375535965s, submitted: 9
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:38.213346+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:39.213536+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:40.213733+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:41.213939+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:42.214094+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:43.214272+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:44.214502+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5226883 data_alloc: 234881024 data_used: 24678400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:45.214689+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:46.214901+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c627000/0x0/0x1bfc00000, data 0x3710c45/0x377f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:47.215099+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:48.215310+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559437c56960
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.852976799s of 10.868772507s, submitted: 5
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437438c00 session 0x559436f7eb40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943ae61000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x55943ae61000 session 0x559434c6f680
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:49.215464+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5220223 data_alloc: 234881024 data_used: 24690688
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:50.215679+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 heartbeat osd_stat(store_statfs(0x19c653000/0x0/0x1bfc00000, data 0x36ecc22/0x375a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:51.215818+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:52.215996+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437503400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559437503400 session 0x559437b0a5a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559434792800 session 0x559437cb1e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:53.216256+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 ms_handle_reset con 0x559436c6e800 session 0x559435f61e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:54.216576+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5188901 data_alloc: 234881024 data_used: 24559616
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _renew_subs
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559437438c00 session 0x559437c565a0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:55.216741+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3d000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:56.216938+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:57.217127+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:58.217357+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.918631554s of 10.093006134s, submitted: 66
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x559436c3e400 session 0x559435f61a40
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437105e00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x55943beb4400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:31:59.217537+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5182857 data_alloc: 234881024 data_used: 24567808
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 heartbeat osd_stat(store_statfs(0x19ca3e000/0x0/0x1bfc00000, data 0x313e8cf/0x3370000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:00.217713+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 ms_handle_reset con 0x55943beb4400 session 0x559437c56000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:01.217910+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:02.218088+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:03.218250+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:04.218372+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19ca3a000/0x0/0x1bfc00000, data 0x314040e/0x3373000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5186216 data_alloc: 234881024 data_used: 24576000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559434792800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792800 session 0x559437e532c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c3e400
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:05.218502+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c3e400 session 0x559437c432c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:06.218630+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:07.218779+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:08.218963+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:09.219142+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:10.219337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:11.219483+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:12.219667+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:13.219828+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:14.219998+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:15.220196+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:16.220374+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:17.220537+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:18.220677+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:19.220816+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:20.220990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:21.221179+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:22.221367+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:23.221543+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:24.221706+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:25.221950+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:26.222125+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:27.222261+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:28.222499+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:29.222664+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:30.222860+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:31.223036+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:32.223188+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:33.223356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:34.223485+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:35.223642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:36.223812+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:37.223954+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:38.224139+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:39.224387+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:40.224598+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:41.224763+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:42.224928+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:43.225109+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:44.225265+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:45.225519+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:46.225676+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:47.225846+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:48.226029+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:49.226206+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:50.226416+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:51.226589+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:52.226867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:53.227086+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:54.227423+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:55.227848+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:56.228091+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:57.228334+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:58.228526+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:32:59.228762+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:00.228913+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486621184 unmapped: 63062016 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:01.229113+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:02.229437+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:03.229644+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:04.229831+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:05.230030+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:06.230172+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:07.230371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:08.230510+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486637568 unmapped: 63045632 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:09.230663+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:10.230794+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:11.230953+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:12.231127+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:13.231377+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:14.231566+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:15.231762+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:16.231983+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:17.232192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:18.232390+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486670336 unmapped: 63012864 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:19.232535+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:20.232739+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:21.232868+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:22.233020+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:23.233239+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:24.233435+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:25.233656+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:26.233856+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:27.234033+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:28.234187+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:29.234343+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:30.234536+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:31.234680+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:32.234826+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:33.234954+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:34.235188+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:35.235580+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:36.235775+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:37.235934+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:38.236073+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:39.236258+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:40.236404+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:41.236555+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:42.236698+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:43.236852+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:44.236974+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:45.237172+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:46.237351+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485990400 unmapped: 63692800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:47.237471+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 497090560 unmapped: 52592640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:48.237592+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf dump' '{prefix=perf dump}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf schema' '{prefix=perf schema}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485679104 unmapped: 64004096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:49.237734+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:50.237897+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:51.238013+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:52.238144+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:53.238359+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:54.238535+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:55.238699+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:56.238850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485687296 unmapped: 63995904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:57.238967+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 63987712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:58.239121+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485695488 unmapped: 63987712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:33:59.239266+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:00.239409+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:01.239537+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:02.239654+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:03.239779+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:04.239904+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:05.240072+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485703680 unmapped: 63979520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:06.240179+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:07.240309+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:08.240417+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:09.240581+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:10.240722+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:11.240923+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485711872 unmapped: 63971328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:12.241035+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 63963136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:13.241261+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485720064 unmapped: 63963136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:14.241419+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 63954944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:15.241582+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485728256 unmapped: 63954944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:16.241714+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:17.241832+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:18.241943+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:19.242088+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:20.242327+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485736448 unmapped: 63946752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:21.242488+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485744640 unmapped: 63938560 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:22.242611+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:23.242726+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:24.242896+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:25.243109+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:26.243255+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:27.243418+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:28.243536+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:29.243720+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485752832 unmapped: 63930368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:30.243850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485761024 unmapped: 63922176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:31.243993+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:32.244210+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:33.244423+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:34.244611+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:35.244855+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:36.245039+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:37.245263+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485769216 unmapped: 63913984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:38.245557+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:39.245864+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:40.246147+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:41.246373+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:42.246597+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:43.246758+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485777408 unmapped: 63905792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:44.246970+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:45.247454+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:46.247628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:47.247842+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:48.247960+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485793792 unmapped: 63889408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:49.248205+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:50.248408+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:51.248791+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:52.248938+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:53.249207+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485801984 unmapped: 63881216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:54.249472+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:55.249662+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:56.249899+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:57.250107+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:58.250411+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:34:59.250642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:00.250854+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:01.251066+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:02.251399+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:03.251566+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485818368 unmapped: 63864832 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:04.251677+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485826560 unmapped: 63856640 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:05.251845+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:06.251977+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:07.252118+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:08.252270+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:09.252503+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485834752 unmapped: 63848448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 71K writes, 275K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s
                                           Cumulative WAL: 71K writes, 26K syncs, 2.69 writes per sync, written: 0.27 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1937 writes, 5766 keys, 1937 commit groups, 1.0 writes per commit group, ingest: 5.33 MB, 0.01 MB/s
                                           Interval WAL: 1937 writes, 854 syncs, 2.27 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:10.252645+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485842944 unmapped: 63840256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:11.252779+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485842944 unmapped: 63840256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:12.252971+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:13.253128+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:14.253255+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:15.253445+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:16.253601+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:17.253762+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485851136 unmapped: 63832064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:18.253880+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:19.254029+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:20.254164+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:21.254344+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485859328 unmapped: 63823872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:22.254460+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:23.254590+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:24.254753+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:25.255042+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485875712 unmapped: 63807488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:26.255180+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:27.255371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:28.255502+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:29.255661+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:30.255797+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:31.256078+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:32.256193+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:33.256361+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485883904 unmapped: 63799296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:34.256502+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:35.256678+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:36.256861+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485892096 unmapped: 63791104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:37.258386+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:38.258719+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:39.259825+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:40.260823+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:41.261760+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:42.262496+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:43.263145+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:44.263733+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485900288 unmapped: 63782912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:45.264318+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:46.264863+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:47.265434+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:48.265929+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485908480 unmapped: 63774720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:49.266321+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485916672 unmapped: 63766528 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:50.266670+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:51.267018+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:52.267357+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:53.267638+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:54.267911+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:55.268205+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:56.268532+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485924864 unmapped: 63758336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:57.268810+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:58.269070+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:35:59.269361+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:00.269605+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:01.269830+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:02.270035+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:03.270348+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:04.270495+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485941248 unmapped: 63741952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:05.270691+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:06.270901+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:07.271068+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:08.271237+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485949440 unmapped: 63733760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:09.271402+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:10.271553+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:11.271693+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485957632 unmapped: 63725568 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:12.271850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485965824 unmapped: 63717376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:13.271990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:14.272087+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:15.272257+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:16.272584+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:17.272687+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:18.272815+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:19.272897+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485974016 unmapped: 63709184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:20.273031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:21.273158+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:22.273394+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:23.273513+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485982208 unmapped: 63700992 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:24.273654+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:25.273802+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:26.273909+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:27.274079+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:28.274202+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:29.274364+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:30.274604+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 485998592 unmapped: 63684608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:31.274791+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:32.274937+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:33.275133+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:34.275339+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:35.275538+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:36.275701+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486006784 unmapped: 63676416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:37.275874+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486014976 unmapped: 63668224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:38.276058+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486023168 unmapped: 63660032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:39.276237+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:40.276506+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:41.276691+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:42.276863+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:43.276998+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486031360 unmapped: 63651840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:44.277148+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486039552 unmapped: 63643648 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d06f000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:45.277341+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486047744 unmapped: 63635456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039312 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:46.277527+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 288.169189453s of 288.322509766s, submitted: 73
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486055936 unmapped: 63627264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:47.277665+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486064128 unmapped: 63619072 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:48.277788+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486080512 unmapped: 63602688 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:49.277970+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486105088 unmapped: 63578112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:50.278087+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486121472 unmapped: 63561728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039208 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:51.278251+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486146048 unmapped: 63537152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:52.278425+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486146048 unmapped: 63537152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:53.278559+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486170624 unmapped: 63512576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:54.278719+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [0,0,0,0,2,0,1])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486195200 unmapped: 63488000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:55.278927+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486203392 unmapped: 63479808 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:56.279049+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:57.279198+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:58.279333+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:36:59.279481+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:00.279615+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:01.279745+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:02.279919+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:03.280076+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:04.280251+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:05.280473+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:06.280589+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:07.280739+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:08.280929+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:09.281107+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:10.281315+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:11.281476+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:12.281606+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:13.281767+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:14.281960+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486219776 unmapped: 63463424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:15.282170+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:16.282356+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:17.282516+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:18.282678+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:19.282866+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:20.283033+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:21.283195+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:22.283335+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:23.283484+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486227968 unmapped: 63455232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:24.283636+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:25.283793+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:26.283918+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486236160 unmapped: 63447040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:27.284083+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:28.284206+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:29.284341+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:30.284598+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:31.284815+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:32.285022+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486252544 unmapped: 63430656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:33.285187+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:34.285295+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:35.285445+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:36.285575+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:37.285714+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:38.285850+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:39.285990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486260736 unmapped: 63422464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:40.286112+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:41.286207+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:42.286390+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:43.286516+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486268928 unmapped: 63414272 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:44.286636+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:45.287626+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:46.288705+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:47.289472+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:48.290171+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:49.290786+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:50.291400+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486277120 unmapped: 63406080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:51.291941+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:52.292467+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:53.292919+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486285312 unmapped: 63397888 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:54.293323+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:55.293717+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:56.293867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486293504 unmapped: 63389696 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:57.294202+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:58.294486+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:37:59.294747+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:00.294986+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:01.295238+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:02.295484+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:03.295700+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:04.295958+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486301696 unmapped: 63381504 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:05.296234+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:06.296486+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:07.296753+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:08.296960+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:09.297192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:10.297369+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:11.297598+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:12.297813+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:13.297953+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486309888 unmapped: 63373312 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:14.298144+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486318080 unmapped: 63365120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:15.298316+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486318080 unmapped: 63365120 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:16.298510+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:17.298653+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:18.298782+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:19.298900+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:20.299012+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486326272 unmapped: 63356928 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:21.299155+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:22.299231+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:23.299354+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:24.299468+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:25.299621+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:26.299741+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:27.299872+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:28.300000+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486334464 unmapped: 63348736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:29.300232+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486342656 unmapped: 63340544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:30.300384+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:31.300531+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:32.300654+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486350848 unmapped: 63332352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:33.300860+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:34.301007+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:35.301198+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486367232 unmapped: 63315968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:36.301337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486375424 unmapped: 63307776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:37.301487+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486375424 unmapped: 63307776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:38.301620+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486383616 unmapped: 63299584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:39.301757+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486383616 unmapped: 63299584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:40.301842+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:41.301949+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:42.302080+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:43.302213+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:44.302358+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:45.302530+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486391808 unmapped: 63291392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:46.302656+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:47.302796+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:48.302908+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486400000 unmapped: 63283200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:49.303097+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486408192 unmapped: 63275008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:50.303990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:51.304783+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486416384 unmapped: 63266816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:52.305650+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:53.306317+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:54.306979+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:55.307586+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:56.308090+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:57.308547+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:58.308958+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:38:59.309328+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486424576 unmapped: 63258624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:00.309695+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:01.309940+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486432768 unmapped: 63250432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:02.310184+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:03.310380+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486440960 unmapped: 63242240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:04.310572+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:05.310756+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:06.310897+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486449152 unmapped: 63234048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:07.311037+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:08.311338+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:09.311560+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486457344 unmapped: 63225856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:10.311869+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:11.312095+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:12.312326+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:13.312494+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:14.312656+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:15.312871+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:16.313047+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486465536 unmapped: 63217664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:17.313230+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486473728 unmapped: 63209472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:18.313581+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:19.313748+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:20.313933+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486481920 unmapped: 63201280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:21.314102+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:22.314266+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:23.314464+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:24.314602+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:25.314763+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486490112 unmapped: 63193088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:26.314889+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:27.315035+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486498304 unmapped: 63184896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:28.315152+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:29.315466+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:30.315628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:31.315757+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:32.315908+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:33.316066+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486506496 unmapped: 63176704 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:34.316227+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:35.316436+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:36.316671+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:37.316837+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486514688 unmapped: 63168512 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:38.316964+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486522880 unmapped: 63160320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:39.317102+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486522880 unmapped: 63160320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:40.317228+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486531072 unmapped: 63152128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:41.317408+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486531072 unmapped: 63152128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:42.317558+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486539264 unmapped: 63143936 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:43.317659+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:44.317764+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:45.317908+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:46.318119+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:47.318371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:48.318495+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:49.318655+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486547456 unmapped: 63135744 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:50.318811+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:51.319052+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486555648 unmapped: 63127552 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:52.319197+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:53.319337+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:54.320042+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:55.320721+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486563840 unmapped: 63119360 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:56.320883+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:57.321524+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:58.321837+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:39:59.322463+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486572032 unmapped: 63111168 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:00.322829+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:01.323146+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:02.323362+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:03.323529+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:04.323852+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486580224 unmapped: 63102976 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:05.324225+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486588416 unmapped: 63094784 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:06.324489+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:07.324769+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:08.325063+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:09.325368+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:10.325628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:11.325868+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:12.326195+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486596608 unmapped: 63086592 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:13.326399+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:14.326607+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:15.326853+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486604800 unmapped: 63078400 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:16.327119+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:17.327384+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:18.327676+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:19.327900+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:20.328105+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486612992 unmapped: 63070208 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:21.328334+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:22.328560+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:23.328905+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:24.329140+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:25.329406+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:26.329590+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:27.329761+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486629376 unmapped: 63053824 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:28.329979+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:29.330186+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:30.330397+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:31.330592+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486645760 unmapped: 63037440 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:32.330784+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486653952 unmapped: 63029248 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:33.330988+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:34.331171+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:35.331401+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:36.331609+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:37.331812+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:38.332057+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:39.332336+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:40.332538+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:41.332785+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:42.333069+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:43.333263+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486662144 unmapped: 63021056 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:44.333456+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:45.333714+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486678528 unmapped: 63004672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:46.333919+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:47.334098+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486686720 unmapped: 62996480 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:48.334390+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:49.334575+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:50.334860+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:51.335191+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:52.335414+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486694912 unmapped: 62988288 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:53.335567+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:54.335708+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486703104 unmapped: 62980096 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:55.335924+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:56.336140+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:57.336325+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:58.336481+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:40:59.336615+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:00.336735+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486711296 unmapped: 62971904 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:01.336888+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:02.337002+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:03.337160+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:04.337395+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486719488 unmapped: 62963712 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:05.337570+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:06.337687+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:07.337805+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:08.337929+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486727680 unmapped: 62955520 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:09.338052+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:10.338215+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:11.338383+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:12.338498+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:13.338657+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:14.338821+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:15.338986+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486735872 unmapped: 62947328 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:16.339122+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486744064 unmapped: 62939136 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:17.339254+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:18.339381+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:19.339492+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:20.339718+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:21.339852+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:22.339989+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:23.340104+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:24.340267+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486752256 unmapped: 62930944 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:25.340483+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486760448 unmapped: 62922752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:26.340640+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486760448 unmapped: 62922752 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:27.340755+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486768640 unmapped: 62914560 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:28.340921+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:29.341089+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:30.341241+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:31.342231+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:32.342692+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486776832 unmapped: 62906368 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:33.343501+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:34.344202+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:35.344880+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486785024 unmapped: 62898176 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:36.345446+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:37.345673+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:38.346133+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:39.346692+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:40.346941+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486793216 unmapped: 62889984 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:41.347350+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486801408 unmapped: 62881792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:42.347607+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486801408 unmapped: 62881792 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:43.347898+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:44.348031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:45.348205+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:46.348836+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:47.349113+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:48.349296+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486809600 unmapped: 62873600 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:49.349566+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:50.349758+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:51.349979+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486817792 unmapped: 62865408 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:52.350111+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:53.350263+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:54.350405+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:55.350611+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:56.350803+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486825984 unmapped: 62857216 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:57.351015+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:58.351213+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:41:59.351383+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486834176 unmapped: 62849024 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:00.351606+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:01.351768+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:02.351915+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:03.352080+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:04.352228+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:05.352820+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:06.353154+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:07.353309+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:08.354068+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486858752 unmapped: 62824448 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:09.354817+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:10.355058+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:11.355299+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486866944 unmapped: 62816256 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:12.355541+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486875136 unmapped: 62808064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:13.355782+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486875136 unmapped: 62808064 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:14.355998+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:15.356399+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:16.356574+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:17.357016+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:18.357390+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:19.357554+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486883328 unmapped: 62799872 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:20.357836+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:21.358084+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:22.358441+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:23.358667+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486891520 unmapped: 62791680 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:24.358954+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:25.359265+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:26.359498+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486899712 unmapped: 62783488 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:27.359739+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:28.359935+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:29.360094+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:30.360293+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:31.360433+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486907904 unmapped: 62775296 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:32.360638+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:33.360800+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:34.360959+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:35.361138+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486916096 unmapped: 62767104 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:36.361298+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486924288 unmapped: 62758912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:37.361447+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486924288 unmapped: 62758912 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:38.361622+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:39.361767+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:40.361911+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:41.362021+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:42.362192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:43.362307+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:44.362390+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:45.362667+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486932480 unmapped: 62750720 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:46.362847+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:47.362975+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:48.363138+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:49.363320+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:50.363460+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:51.363588+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486948864 unmapped: 62734336 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:52.363747+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:53.363890+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:54.364044+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:55.364341+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486957056 unmapped: 62726144 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:56.364486+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486965248 unmapped: 62717952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:57.364618+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486965248 unmapped: 62717952 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:58.364785+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:42:59.364907+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:00.365033+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:01.365258+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486973440 unmapped: 62709760 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:02.365509+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:03.365700+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:04.365861+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:05.366020+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:06.366160+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:07.366338+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:08.367228+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:09.367453+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486989824 unmapped: 62693376 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:10.367619+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486998016 unmapped: 62685184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:11.367898+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 486998016 unmapped: 62685184 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:12.368584+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487014400 unmapped: 62668800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:13.368827+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487014400 unmapped: 62668800 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:14.369397+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:15.369642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:16.370069+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:17.370268+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487022592 unmapped: 62660608 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:18.370863+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:19.371015+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:20.371273+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:21.371522+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:22.371677+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:23.371883+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:24.372111+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487030784 unmapped: 62652416 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:25.372343+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:26.372522+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:27.372697+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487038976 unmapped: 62644224 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:28.372925+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487047168 unmapped: 62636032 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:29.373102+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:30.373368+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:31.373558+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:32.373746+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:33.373883+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:34.374050+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:35.374312+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:36.374513+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487055360 unmapped: 62627840 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2025-10-02T13:43:37.374724+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _finish_auth 0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:37.376693+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487071744 unmapped: 62611456 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:38.375059+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:39.375294+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:40.375987+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:41.376155+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:42.376405+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:43.376573+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:44.376748+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487079936 unmapped: 62603264 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:45.376870+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:46.377100+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:47.377253+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:48.377457+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:49.377540+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:50.377690+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487096320 unmapped: 62586880 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:51.377838+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:52.377957+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:53.378089+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:54.378227+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:55.378485+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:56.378647+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487112704 unmapped: 62570496 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:57.378788+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:58.378945+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:43:59.379082+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:00.379239+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:01.379388+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487129088 unmapped: 62554112 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:02.379509+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:03.379695+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:04.379824+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:05.379979+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:06.380154+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:07.380313+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:08.380544+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487137280 unmapped: 62545920 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:09.380738+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:10.380835+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:11.380970+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487145472 unmapped: 62537728 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:12.381900+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:13.382366+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:14.382906+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487161856 unmapped: 62521344 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:15.383198+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:16.383430+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:17.383711+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:18.383992+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:19.384214+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:20.384793+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487170048 unmapped: 62513152 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:21.384967+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487178240 unmapped: 62504960 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:22.385185+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487178240 unmapped: 62504960 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:23.385441+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:24.385646+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:25.385926+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:26.386068+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:27.386199+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487186432 unmapped: 62496768 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:28.386433+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:29.386626+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:30.386898+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:31.387139+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:32.387296+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:33.387487+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487194624 unmapped: 62488576 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:34.387741+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:35.388032+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:36.388235+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487202816 unmapped: 62480384 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:37.388411+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487211008 unmapped: 62472192 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:38.388532+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:39.388744+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:40.388957+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:41.389224+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:42.389407+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:43.389628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:44.389890+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487219200 unmapped: 62464000 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:45.390121+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487235584 unmapped: 62447616 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:46.390269+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:47.390461+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:48.390628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:49.390710+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487243776 unmapped: 62439424 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:50.390869+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:51.391035+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:52.391161+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487251968 unmapped: 62431232 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:53.391259+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:54.391422+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:55.391605+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487260160 unmapped: 62423040 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:56.391739+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:57.391823+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:58.391938+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:44:59.392060+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:00.392185+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487268352 unmapped: 62414848 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:01.392350+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:02.392504+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:03.392642+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:04.392827+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:05.392988+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487276544 unmapped: 62406656 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:06.393155+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:07.393303+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:08.393406+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487284736 unmapped: 62398464 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:09.393646+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 71K writes, 276K keys, 71K commit groups, 1.0 writes per commit group, ingest: 0.27 GB, 0.04 MB/s
                                           Cumulative WAL: 71K writes, 26K syncs, 2.68 writes per sync, written: 0.27 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 455 writes, 777 keys, 455 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
                                           Interval WAL: 455 writes, 199 syncs, 2.29 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:10.393927+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:11.394152+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:12.394369+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:13.394509+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:14.394656+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:15.394867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487301120 unmapped: 62382080 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc ms_handle_reset ms_handle_reset con 0x559436c3e000
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3443433125
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3443433125,v1:192.168.122.100:6801/3443433125]
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:16.394991+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: get_auth_request con 0x559436e8d000 auth_method 0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: mgrc handle_mgr_configure stats_period=5
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:17.395917+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559434792c00 session 0x559437b0a3c0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559436c6e800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x559436c5bc00 session 0x559436ec6f00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437438c00
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 ms_handle_reset con 0x5594350d7400 session 0x559436ddfc20
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: handle_auth_request added challenge on 0x559437eac800
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:18.396971+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:19.397385+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:20.398239+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487358464 unmapped: 62324736 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:21.398804+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:22.399625+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:23.400189+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:24.400412+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:25.400867+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487366656 unmapped: 62316544 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:26.401371+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487374848 unmapped: 62308352 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:27.401592+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:28.401958+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:29.402311+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:30.402673+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:31.402817+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:32.402961+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:33.403218+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487383040 unmapped: 62300160 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:34.403475+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:35.403625+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:36.403888+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487391232 unmapped: 62291968 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:37.404140+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:38.404385+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:39.404565+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487399424 unmapped: 62283776 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:40.404725+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:41.404914+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:42.405078+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:43.405245+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487407616 unmapped: 62275584 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:44.405434+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487415808 unmapped: 62267392 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:45.405690+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487424000 unmapped: 62259200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:46.405851+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487424000 unmapped: 62259200 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:47.406074+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487432192 unmapped: 62251008 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:48.406223+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:49.406375+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:50.406525+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:51.406663+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487440384 unmapped: 62242816 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:52.406817+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487448576 unmapped: 62234624 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:53.406961+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:54.407127+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:55.407321+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:56.407439+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:57.407583+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:58.407801+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:45:59.407928+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487456768 unmapped: 62226432 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:00.408092+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:01.408225+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:02.408417+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:03.408603+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:04.408754+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:05.409273+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:06.409544+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487464960 unmapped: 62218240 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:07.409728+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487473152 unmapped: 62210048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:08.409906+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487473152 unmapped: 62210048 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:09.410050+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:10.410192+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:11.410340+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:12.410488+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:13.410628+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:14.410783+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:15.410922+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:16.411028+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487489536 unmapped: 62193664 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:17.411161+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:13.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:18.411370+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:19.411532+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487497728 unmapped: 62185472 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:20.411696+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:21.413051+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:22.413333+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:23.414744+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:24.415336+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:25.416491+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:26.417109+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:27.418031+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487505920 unmapped: 62177280 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:28.418522+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487514112 unmapped: 62169088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:29.419174+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:30.419638+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:31.419835+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487522304 unmapped: 62160896 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:32.419993+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:33.420127+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:34.420415+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:35.421521+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:36.421745+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487546880 unmapped: 62136320 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:37.421990+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 02 13:47:13 compute-1 ceph-osd[78262]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 02 13:47:13 compute-1 ceph-osd[78262]: bluestore.MempoolThread(0x559433787b60) _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5039136 data_alloc: 218103808 data_used: 15671296
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:38.422159+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:39.422312+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487555072 unmapped: 62128128 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:40.422429+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487702528 unmapped: 61980672 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}'
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:41.422547+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487481344 unmapped: 62201856 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: tick
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_tickets
Oct 02 13:47:13 compute-1 ceph-osd[78262]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-02T13:46:42.422685+0000)
Oct 02 13:47:13 compute-1 ceph-osd[78262]: osd.0 430 heartbeat osd_stat(store_statfs(0x19d3b1000/0x0/0x1bfc00000, data 0x27cb3ac/0x29fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1fe4f9c6), peers [1,2] op hist [])
Oct 02 13:47:13 compute-1 ceph-osd[78262]: prioritycache tune_memory target: 4294967296 mapped: 487514112 unmapped: 62169088 heap: 549683200 old mem: 2845415833 new mem: 2845415833
Oct 02 13:47:13 compute-1 ceph-osd[78262]: do_command 'log dump' '{prefix=log dump}'
Oct 02 13:47:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:47:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4214691663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.47444 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.48193 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.38397 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.47456 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.48208 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.47474 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/402183147' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4153172606' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/392775669' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3677689488' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1967728647' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3564654829' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1513305581' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4214691663' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct 02 13:47:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3678720938' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:13 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:13 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:13 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:13 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct 02 13:47:13 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3968431496' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.052 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 02 13:47:14 compute-1 nova_compute[230518]: 2025-10-02 13:47:14.113 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 02 13:47:14 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct 02 13:47:14 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3951671801' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.48220 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.47489 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.48235 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.38445 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.47504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.48244 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.38460 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3678720938' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.47519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.48262 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.38469 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3783132319' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/325351810' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3968431496' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.47531 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.48280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.38487 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/531916238' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2814393998' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3951671801' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:14 compute-1 ceph-mon[80926]: from='client.47546 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:14 compute-1 crontab[335319]: (root) LIST (root)
Oct 02 13:47:15 compute-1 nova_compute[230518]: 2025-10-02 13:47:15.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct 02 13:47:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1067755395' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999990s ======
Oct 02 13:47:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:15.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999990s
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.48298 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.48304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1444893857' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.47567 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.48316 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.38514 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2664765077' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3895258866' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1067755395' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:15 compute-1 ceph-mon[80926]: pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Oct 02 13:47:15 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1775043447' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:15 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:15 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:15 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:15 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct 02 13:47:15 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1739021306' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1965547968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3440469174' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969688570' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3042551414' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.38529 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.47591 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/684971148' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1739021306' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.48349 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2071889290' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.38541 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1965547968' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3440469174' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4037444600' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1088956816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.38559 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2969688570' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3474037624' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3042551414' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1715167348' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672268410' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:16 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct 02 13:47:16 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3946441983' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct 02 13:47:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3676004763' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:17.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct 02 13:47:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/868572869' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Oct 02 13:47:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2457636684' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:17 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:17 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:17 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:17.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3164670167' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2672268410' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2775357721' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3946441983' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2668710871' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1460811360' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3676004763' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.38589 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2540075283' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3469539752' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3512761745' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/868572869' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2457636684' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2370018337' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1224356891' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct 02 13:47:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2065031953' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:17 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct 02 13:47:17 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2951718361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:17 compute-1 nova_compute[230518]: 2025-10-02 13:47:17.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:18 compute-1 systemd[1]: Starting Hostname Service...
Oct 02 13:47:18 compute-1 sshd-session[335449]: Invalid user admin from 196.251.118.184 port 55992
Oct 02 13:47:18 compute-1 systemd[1]: Started Hostname Service.
Oct 02 13:47:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Oct 02 13:47:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3340405755' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Oct 02 13:47:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1209891436' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:18 compute-1 sshd-session[335449]: pam_unix(sshd:auth): check pass; user unknown
Oct 02 13:47:18 compute-1 sshd-session[335449]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=196.251.118.184
Oct 02 13:47:18 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Oct 02 13:47:18 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4036422266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1780767086' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3501681092' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2065031953' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2951718361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1650286015' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4160799232' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3500533200' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/73184818' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3340405755' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1209891436' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4061298051' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/1571223080' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/4017930756' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 02 13:47:18 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/4036422266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 nova_compute[230518]: 2025-10-02 13:47:19.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:19 compute-1 nova_compute[230518]: 2025-10-02 13:47:19.047 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:19.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:19 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:19 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:19 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.48481 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.47735 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.48502 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3880924643' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.48496 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3220108605' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.47744 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.47750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.48511 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2758411455' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.47759 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2218426517' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 02 13:47:19 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Oct 02 13:47:19 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3080644985' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:20 compute-1 sshd-session[335449]: Failed password for invalid user admin from 196.251.118.184 port 55992 ssh2
Oct 02 13:47:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Oct 02 13:47:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2646368920' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.48532 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2948375085' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.47783 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3856099125' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1025455943' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3080644985' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.48547 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.47795 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.38697 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2042272111' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/707473037' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.48559 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2646368920' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3100741757' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:20 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Oct 02 13:47:20 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/791524135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Oct 02 13:47:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2251973729' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:21.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:21 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:21 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.47807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.38709 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.38715 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.48577 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/791524135' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.47819 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2931492052' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.38727 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.48592 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2251973729' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.47834 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2386771092' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:21 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:22 compute-1 nova_compute[230518]: 2025-10-02 13:47:22.052 2 DEBUG oslo_service.periodic_task [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 02 13:47:22 compute-1 nova_compute[230518]: 2025-10-02 13:47:22.053 2 DEBUG nova.compute.manager [None req-bb64cf88-5494-4d0e-b8f5-b5b3c2e31764 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 02 13:47:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Oct 02 13:47:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1550219910' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.38745 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/889082410' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.38760 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2592065937' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/1550219910' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.48646 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.38772 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.47888 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1634751391' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 02 13:47:22 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4290154286' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:22 compute-1 sshd-session[335449]: Connection closed by invalid user admin 196.251.118.184 port 55992 [preauth]
Oct 02 13:47:22 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Oct 02 13:47:22 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500212777' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:22 compute-1 nova_compute[230518]: 2025-10-02 13:47:22.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Oct 02 13:47:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3923046428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:23 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000999991s ======
Oct 02 13:47:23 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999991s
Oct 02 13:47:23 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Oct 02 13:47:23 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3015238990' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.38784 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/500212777' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1695843270' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/803408113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.38796 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3923046428' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 123 KiB/s rd, 0 B/s wr, 205 op/s
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/4108506579' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/3015238990' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:23 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1913213451' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 02 13:47:24 compute-1 nova_compute[230518]: 2025-10-02 13:47:24.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/2522795157' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.38844 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/863202803' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.48688 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1235792891' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3111663973' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:24 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Oct 02 13:47:24 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2935842323' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 02 13:47:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.001999982s ======
Oct 02 13:47:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.102 - anonymous [02/Oct/2025:13:47:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001999982s
Oct 02 13:47:25 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Oct 02 13:47:25 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/712644003' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:25 compute-1 sudo[336739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:25 compute-1 sudo[336739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-1 sudo[336739]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-1 sudo[336770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:25 compute-1 sudo[336770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-1 sudo[336770]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-1 radosgw[82922]: ====== starting new request req=0x7f9200cf96f0 =====
Oct 02 13:47:25 compute-1 radosgw[82922]: ====== req done req=0x7f9200cf96f0 op status=0 http_status=200 latency=0.000000000s ======
Oct 02 13:47:25 compute-1 radosgw[82922]: beast: 0x7f9200cf96f0: 192.168.122.100 - anonymous [02/Oct/2025:13:47:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 02 13:47:25 compute-1 sudo[336802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:25 compute-1 sudo[336802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-1 sudo[336802]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:25 compute-1 sudo[336834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Oct 02 13:47:25 compute-1 sudo[336834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.47936 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/2906699497' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/2935842323' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.102:0/3319394505' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.101:0/712644003' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/3016534467' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 02 13:47:25 compute-1 ceph-mon[80926]: from='client.? 192.168.122.100:0/1501585160' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 02 13:47:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:47:26.010 138374 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 02 13:47:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:47:26.011 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 02 13:47:26 compute-1 ovn_metadata_agent[138369]: 2025-10-02 13:47:26.011 138374 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 02 13:47:26 compute-1 ceph-mon[80926]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Oct 02 13:47:26 compute-1 ceph-mon[80926]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1164119169' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 02 13:47:26 compute-1 sudo[336834]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-1 sudo[336979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:26 compute-1 sudo[336979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-1 sudo[336979]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-1 sudo[337037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 02 13:47:26 compute-1 sudo[337037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-1 sudo[337037]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-1 sudo[337068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 02 13:47:26 compute-1 sudo[337068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 02 13:47:26 compute-1 sudo[337068]: pam_unix(sudo:session): session closed for user root
Oct 02 13:47:26 compute-1 sudo[337107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/20fdc58c-b037-5094-a8ef-d490aa7c36f3/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Oct 02 13:47:26 compute-1 sudo[337107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
